Sep 11 00:30:27.721681 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:30:27.721699 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:30:27.721706 kernel: Disabled fast string operations
Sep 11 00:30:27.721710 kernel: BIOS-provided physical RAM map:
Sep 11 00:30:27.721714 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Sep 11 00:30:27.721719 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Sep 11 00:30:27.721724 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Sep 11 00:30:27.721728 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Sep 11 00:30:27.721733 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Sep 11 00:30:27.721737 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Sep 11 00:30:27.721741 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Sep 11 00:30:27.721746 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Sep 11 00:30:27.721750 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Sep 11 00:30:27.721754 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 11 00:30:27.721761 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Sep 11 00:30:27.721775 kernel: NX (Execute Disable) protection: active
Sep 11 00:30:27.721782 kernel: APIC: Static calls initialized
Sep 11 00:30:27.721787 kernel: SMBIOS 2.7 present.
Sep 11 00:30:27.721792 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Sep 11 00:30:27.721797 kernel: DMI: Memory slots populated: 1/128
Sep 11 00:30:27.721804 kernel: vmware: hypercall mode: 0x00
Sep 11 00:30:27.721808 kernel: Hypervisor detected: VMware
Sep 11 00:30:27.721813 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Sep 11 00:30:27.721818 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Sep 11 00:30:27.721822 kernel: vmware: using clock offset of 3255430192 ns
Sep 11 00:30:27.721827 kernel: tsc: Detected 3408.000 MHz processor
Sep 11 00:30:27.721832 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:30:27.721838 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:30:27.721843 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Sep 11 00:30:27.721848 kernel: total RAM covered: 3072M
Sep 11 00:30:27.721854 kernel: Found optimal setting for mtrr clean up
Sep 11 00:30:27.721859 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Sep 11 00:30:27.721864 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Sep 11 00:30:27.721869 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:30:27.721874 kernel: Using GB pages for direct mapping
Sep 11 00:30:27.721879 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:30:27.721884 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Sep 11 00:30:27.721889 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Sep 11 00:30:27.721894 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Sep 11 00:30:27.721900 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Sep 11 00:30:27.721907 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 11 00:30:27.721912 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Sep 11 00:30:27.721917 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Sep 11 00:30:27.721923 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Sep 11 00:30:27.721929 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Sep 11 00:30:27.721934 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Sep 11 00:30:27.721940 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Sep 11 00:30:27.721945 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Sep 11 00:30:27.721950 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Sep 11 00:30:27.721956 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Sep 11 00:30:27.721961 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 11 00:30:27.721966 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Sep 11 00:30:27.721971 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Sep 11 00:30:27.721976 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Sep 11 00:30:27.721982 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Sep 11 00:30:27.721987 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Sep 11 00:30:27.721993 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Sep 11 00:30:27.721998 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Sep 11 00:30:27.722003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 11 00:30:27.722008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 11 00:30:27.722013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Sep 11 00:30:27.722018 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Sep 11 00:30:27.722024 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Sep 11 00:30:27.722030 kernel: Zone ranges:
Sep 11 00:30:27.722035 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:30:27.722041 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Sep 11 00:30:27.722046 kernel: Normal empty
Sep 11 00:30:27.722051 kernel: Device empty
Sep 11 00:30:27.722056 kernel: Movable zone start for each node
Sep 11 00:30:27.722061 kernel: Early memory node ranges
Sep 11 00:30:27.722066 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Sep 11 00:30:27.722071 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Sep 11 00:30:27.722077 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Sep 11 00:30:27.722083 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Sep 11 00:30:27.722088 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:30:27.722093 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Sep 11 00:30:27.722098 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Sep 11 00:30:27.722103 kernel: ACPI: PM-Timer IO Port: 0x1008
Sep 11 00:30:27.722109 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Sep 11 00:30:27.722114 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 11 00:30:27.722119 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 11 00:30:27.722124 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 11 00:30:27.722130 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 11 00:30:27.722135 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 11 00:30:27.722140 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 11 00:30:27.722145 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 11 00:30:27.722151 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 11 00:30:27.722156 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 11 00:30:27.722161 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 11 00:30:27.722166 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 11 00:30:27.722171 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 11 00:30:27.722177 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 11 00:30:27.722183 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 11 00:30:27.722188 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 11 00:30:27.722193 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 11 00:30:27.722198 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Sep 11 00:30:27.722203 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Sep 11 00:30:27.722208 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Sep 11 00:30:27.722214 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Sep 11 00:30:27.722219 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Sep 11 00:30:27.722224 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Sep 11 00:30:27.722229 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Sep 11 00:30:27.722235 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Sep 11 00:30:27.722240 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Sep 11 00:30:27.722245 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Sep 11 00:30:27.722250 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Sep 11 00:30:27.722255 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Sep 11 00:30:27.722260 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Sep 11 00:30:27.722265 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Sep 11 00:30:27.722270 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Sep 11 00:30:27.722275 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Sep 11 00:30:27.722281 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Sep 11 00:30:27.722286 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Sep 11 00:30:27.722292 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Sep 11 00:30:27.722297 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Sep 11 00:30:27.722302 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Sep 11 00:30:27.722307 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Sep 11 00:30:27.722312 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Sep 11 00:30:27.722322 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Sep 11 00:30:27.722327 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Sep 11 00:30:27.722332 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Sep 11 00:30:27.722339 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Sep 11 00:30:27.722349 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Sep 11 00:30:27.722354 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Sep 11 00:30:27.722360 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Sep 11 00:30:27.722365 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Sep 11 00:30:27.722371 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Sep 11 00:30:27.722376 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Sep 11 00:30:27.722381 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Sep 11 00:30:27.722388 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Sep 11 00:30:27.722394 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Sep 11 00:30:27.722399 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Sep 11 00:30:27.722405 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Sep 11 00:30:27.722410 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Sep 11 00:30:27.722415 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Sep 11 00:30:27.722421 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Sep 11 00:30:27.722426 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Sep 11 00:30:27.722432 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Sep 11 00:30:27.722437 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Sep 11 00:30:27.722444 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Sep 11 00:30:27.722449 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Sep 11 00:30:27.722455 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Sep 11 00:30:27.722593 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Sep 11 00:30:27.722599 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Sep 11 00:30:27.722605 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Sep 11 00:30:27.722610 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Sep 11 00:30:27.722616 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Sep 11 00:30:27.722621 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Sep 11 00:30:27.722626 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Sep 11 00:30:27.722634 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Sep 11 00:30:27.722640 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Sep 11 00:30:27.722645 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Sep 11 00:30:27.722650 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Sep 11 00:30:27.722656 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Sep 11 00:30:27.722661 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Sep 11 00:30:27.722667 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Sep 11 00:30:27.722672 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Sep 11 00:30:27.722677 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Sep 11 00:30:27.722683 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Sep 11 00:30:27.722690 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Sep 11 00:30:27.722695 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Sep 11 00:30:27.722700 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Sep 11 00:30:27.722706 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Sep 11 00:30:27.722711 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Sep 11 00:30:27.722717 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Sep 11 00:30:27.722722 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Sep 11 00:30:27.722728 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Sep 11 00:30:27.722733 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Sep 11 00:30:27.722739 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Sep 11 00:30:27.722745 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Sep 11 00:30:27.722751 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Sep 11 00:30:27.722756 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Sep 11 00:30:27.722762 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Sep 11 00:30:27.722767 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Sep 11 00:30:27.722772 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Sep 11 00:30:27.722778 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Sep 11 00:30:27.722784 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Sep 11 00:30:27.722789 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Sep 11 00:30:27.722795 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Sep 11 00:30:27.722801 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Sep 11 00:30:27.722807 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Sep 11 00:30:27.722812 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Sep 11 00:30:27.722818 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Sep 11 00:30:27.722823 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Sep 11 00:30:27.722828 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Sep 11 00:30:27.722834 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Sep 11 00:30:27.722839 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Sep 11 00:30:27.722845 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Sep 11 00:30:27.722850 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Sep 11 00:30:27.722857 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Sep 11 00:30:27.722864 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Sep 11 00:30:27.722879 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Sep 11 00:30:27.722887 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Sep 11 00:30:27.722893 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Sep 11 00:30:27.722898 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Sep 11 00:30:27.722906 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Sep 11 00:30:27.722916 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Sep 11 00:30:27.722926 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Sep 11 00:30:27.722939 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Sep 11 00:30:27.722950 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Sep 11 00:30:27.722958 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Sep 11 00:30:27.722966 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Sep 11 00:30:27.722975 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Sep 11 00:30:27.722984 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Sep 11 00:30:27.722990 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Sep 11 00:30:27.722996 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Sep 11 00:30:27.723001 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:30:27.723007 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 11 00:30:27.723014 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:30:27.723020 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Sep 11 00:30:27.723026 kernel: TSC deadline timer available
Sep 11 00:30:27.723031 kernel: CPU topo: Max. logical packages: 128
Sep 11 00:30:27.723037 kernel: CPU topo: Max. logical dies: 128
Sep 11 00:30:27.723042 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:30:27.723048 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:30:27.723053 kernel: CPU topo: Num. cores per package: 1
Sep 11 00:30:27.723059 kernel: CPU topo: Num. threads per package: 1
Sep 11 00:30:27.723064 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Sep 11 00:30:27.723073 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Sep 11 00:30:27.723086 kernel: Booting paravirtualized kernel on VMware hypervisor
Sep 11 00:30:27.723095 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:30:27.723104 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Sep 11 00:30:27.723114 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Sep 11 00:30:27.723138 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Sep 11 00:30:27.723155 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Sep 11 00:30:27.723163 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Sep 11 00:30:27.723170 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Sep 11 00:30:27.723177 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Sep 11 00:30:27.723183 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Sep 11 00:30:27.723189 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Sep 11 00:30:27.723194 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Sep 11 00:30:27.723199 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Sep 11 00:30:27.723205 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Sep 11 00:30:27.723211 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Sep 11 00:30:27.723216 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Sep 11 00:30:27.723222 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Sep 11 00:30:27.723229 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Sep 11 00:30:27.723234 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Sep 11 00:30:27.723240 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Sep 11 00:30:27.723245 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Sep 11 00:30:27.723252 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:30:27.723258 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:30:27.723263 kernel: random: crng init done
Sep 11 00:30:27.723270 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 11 00:30:27.723275 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Sep 11 00:30:27.723281 kernel: printk: log_buf_len min size: 262144 bytes
Sep 11 00:30:27.723287 kernel: printk: log_buf_len: 1048576 bytes
Sep 11 00:30:27.723292 kernel: printk: early log buf free: 245576(93%)
Sep 11 00:30:27.723298 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:30:27.723303 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 11 00:30:27.723309 kernel: Fallback order for Node 0: 0
Sep 11 00:30:27.723315 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Sep 11 00:30:27.723320 kernel: Policy zone: DMA32
Sep 11 00:30:27.723327 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:30:27.723332 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Sep 11 00:30:27.723338 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:30:27.723344 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:30:27.723349 kernel: Dynamic Preempt: voluntary
Sep 11 00:30:27.723355 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:30:27.723361 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:30:27.723366 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Sep 11 00:30:27.723372 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:30:27.723379 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:30:27.723384 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:30:27.723390 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:30:27.723395 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Sep 11 00:30:27.723401 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 11 00:30:27.723407 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 11 00:30:27.723412 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Sep 11 00:30:27.723418 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Sep 11 00:30:27.723424 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Sep 11 00:30:27.723431 kernel: Console: colour VGA+ 80x25
Sep 11 00:30:27.723436 kernel: printk: legacy console [tty0] enabled
Sep 11 00:30:27.723442 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:30:27.723448 kernel: ACPI: Core revision 20240827
Sep 11 00:30:27.723453 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 11 00:30:27.723468 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:30:27.723475 kernel: x2apic enabled
Sep 11 00:30:27.723480 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:30:27.723486 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:30:27.723494 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Sep 11 00:30:27.723500 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Sep 11 00:30:27.723506 kernel: Disabled fast string operations
Sep 11 00:30:27.723511 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 11 00:30:27.723517 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 11 00:30:27.723523 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:30:27.723529 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Sep 11 00:30:27.723534 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 11 00:30:27.723540 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 11 00:30:27.723547 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 11 00:30:27.723552 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:30:27.723558 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:30:27.723563 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 11 00:30:27.723569 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 11 00:30:27.723575 kernel: GDS: Unknown: Dependent on hypervisor status
Sep 11 00:30:27.723580 kernel: active return thunk: its_return_thunk
Sep 11 00:30:27.723586 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 11 00:30:27.723591 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:30:27.723598 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:30:27.723604 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:30:27.723610 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:30:27.723616 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 11 00:30:27.723621 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:30:27.723627 kernel: pid_max: default: 131072 minimum: 1024
Sep 11 00:30:27.723632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:30:27.723638 kernel: landlock: Up and running.
Sep 11 00:30:27.723643 kernel: SELinux: Initializing.
Sep 11 00:30:27.723650 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 11 00:30:27.723656 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 11 00:30:27.723662 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 11 00:30:27.723667 kernel: Performance Events: Skylake events, core PMU driver.
Sep 11 00:30:27.723673 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Sep 11 00:30:27.723679 kernel: core: CPUID marked event: 'instructions' unavailable
Sep 11 00:30:27.723684 kernel: core: CPUID marked event: 'bus cycles' unavailable
Sep 11 00:30:27.723690 kernel: core: CPUID marked event: 'cache references' unavailable
Sep 11 00:30:27.723695 kernel: core: CPUID marked event: 'cache misses' unavailable
Sep 11 00:30:27.723702 kernel: core: CPUID marked event: 'branch instructions' unavailable
Sep 11 00:30:27.723707 kernel: core: CPUID marked event: 'branch misses' unavailable
Sep 11 00:30:27.723713 kernel: ... version: 1
Sep 11 00:30:27.723718 kernel: ... bit width: 48
Sep 11 00:30:27.723724 kernel: ... generic registers: 4
Sep 11 00:30:27.723730 kernel: ... value mask: 0000ffffffffffff
Sep 11 00:30:27.723736 kernel: ... max period: 000000007fffffff
Sep 11 00:30:27.723741 kernel: ... fixed-purpose events: 0
Sep 11 00:30:27.723746 kernel: ... event mask: 000000000000000f
Sep 11 00:30:27.723753 kernel: signal: max sigframe size: 1776
Sep 11 00:30:27.723759 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:30:27.723764 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:30:27.723770 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Sep 11 00:30:27.723775 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 11 00:30:27.723781 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:30:27.723786 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:30:27.723792 kernel: .... node #0, CPUs: #1
Sep 11 00:30:27.723798 kernel: Disabled fast string operations
Sep 11 00:30:27.723804 kernel: smp: Brought up 1 node, 2 CPUs
Sep 11 00:30:27.723810 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Sep 11 00:30:27.723816 kernel: Memory: 1926308K/2096628K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 158944K reserved, 0K cma-reserved)
Sep 11 00:30:27.723821 kernel: devtmpfs: initialized
Sep 11 00:30:27.723827 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:30:27.723832 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Sep 11 00:30:27.723838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:30:27.723844 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Sep 11 00:30:27.723849 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:30:27.723856 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:30:27.723862 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:30:27.723867 kernel: audit: type=2000 audit(1757550624.267:1): state=initialized audit_enabled=0 res=1
Sep 11 00:30:27.723873 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:30:27.723878 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:30:27.723884 kernel: cpuidle: using governor menu
Sep 11 00:30:27.723889 kernel: Simple Boot Flag at 0x36 set to 0x80
Sep 11 00:30:27.723895 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:30:27.723901 kernel: dca service started, version 1.12.1
Sep 11 00:30:27.723914 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Sep 11 00:30:27.723921 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:30:27.723927 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 11 00:30:27.723933 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:30:27.723939 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:30:27.723945 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:30:27.723951 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:30:27.723957 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:30:27.723962 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:30:27.723970 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:30:27.723975 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:30:27.723981 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Sep 11 00:30:27.723987 kernel: ACPI: Interpreter enabled
Sep 11 00:30:27.723993 kernel: ACPI: PM: (supports S0 S1 S5)
Sep 11 00:30:27.723999 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:30:27.724005 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:30:27.724011 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:30:27.724017 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Sep 11 00:30:27.724028 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Sep 11 00:30:27.724140 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:30:27.724196 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Sep 11 00:30:27.724281 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Sep 11 00:30:27.724293 kernel: PCI host bridge to bus 0000:00
Sep 11 00:30:27.724347 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:30:27.724408 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Sep 11 00:30:27.724453 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 11 00:30:27.727033 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:30:27.727090 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Sep 11 00:30:27.727136 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Sep 11 00:30:27.727203 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:30:27.727269 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Sep 11 00:30:27.727325 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 11 00:30:27.727385 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:30:27.727441 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Sep 11 00:30:27.727506 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Sep 11 00:30:27.727558 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 11 00:30:27.727610 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 11 00:30:27.727662 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 11 00:30:27.727711 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 11 00:30:27.727768 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 11 00:30:27.727819 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Sep 11 00:30:27.727872 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Sep 11 00:30:27.727931 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Sep 11 00:30:27.727984 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Sep 11 00:30:27.728043 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Sep 11 00:30:27.728106 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:30:27.728165 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Sep 11 00:30:27.728219 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Sep 11 00:30:27.728269 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Sep 11 00:30:27.728318 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Sep 11 00:30:27.728368 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:30:27.728423 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Sep 11 00:30:27.730514 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Sep 11 00:30:27.730591 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Sep 11 00:30:27.730652 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Sep 11 00:30:27.730704 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Sep 11 00:30:27.730765 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 11 00:30:27.730819 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Sep 11 00:30:27.730870 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Sep 11 00:30:27.730922 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Sep 11 00:30:27.730985 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Sep 11 00:30:27.731042 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 11 00:30:27.731097 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Sep 11 00:30:27.731148 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Sep 11 00:30:27.731200 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Sep 11 00:30:27.731251 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Sep 11 00:30:27.731302 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Sep 11 00:30:27.731362 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Sep 11 00:30:27.731415 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Sep 11 00:30:27.731479 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Sep 11 00:30:27.731532 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 11 00:30:27.731583 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 11 00:30:27.731634 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.731690 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.731743 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 11 00:30:27.731797 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 11 00:30:27.731849 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 11 00:30:27.731900 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.731964 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.732016 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 11 00:30:27.732068 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 11 00:30:27.732119 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 11 00:30:27.732173 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.732228 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.732281 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 11 00:30:27.732332 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 11 00:30:27.732399 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 11 00:30:27.732452 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.734772 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.734835 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 11 00:30:27.734890 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 11 00:30:27.734943 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 11 
00:30:27.734995 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.735051 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.735107 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 11 00:30:27.735161 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 11 00:30:27.735230 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 11 00:30:27.735289 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.735361 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.735425 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 11 00:30:27.735502 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 11 00:30:27.735562 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 11 00:30:27.735615 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.735677 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.735738 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 11 00:30:27.735792 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 11 00:30:27.735849 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 11 00:30:27.735910 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 11 00:30:27.735974 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.736033 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.736085 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 11 00:30:27.736139 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 11 00:30:27.736190 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 11 00:30:27.736240 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 11 00:30:27.736291 kernel: pci 0000:00:16.2: PME# supported 
from D0 D3hot D3cold Sep 11 00:30:27.736347 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.736398 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 11 00:30:27.736448 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 11 00:30:27.740600 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 11 00:30:27.740664 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.740724 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.740792 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 11 00:30:27.740847 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 11 00:30:27.740899 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 11 00:30:27.740949 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.741013 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.741066 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 11 00:30:27.741121 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 11 00:30:27.741172 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 11 00:30:27.741223 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.741278 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.741330 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 11 00:30:27.741384 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 11 00:30:27.741434 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 11 00:30:27.741508 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.741576 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.741629 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 11 
00:30:27.741684 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 11 00:30:27.741744 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 11 00:30:27.741803 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.741864 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.741916 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 11 00:30:27.741968 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 11 00:30:27.742019 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 11 00:30:27.742070 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 11 00:30:27.742121 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.742178 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.742237 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 11 00:30:27.742298 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 11 00:30:27.742358 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 11 00:30:27.742417 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 11 00:30:27.742486 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.742549 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.742613 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 11 00:30:27.742665 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 11 00:30:27.742715 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 11 00:30:27.742766 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 11 00:30:27.742822 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.742880 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.742932 kernel: pci 
0000:00:17.3: PCI bridge to [bus 16] Sep 11 00:30:27.742982 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 11 00:30:27.743034 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 11 00:30:27.743085 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.743141 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.743200 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 11 00:30:27.743259 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 11 00:30:27.743311 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 11 00:30:27.743369 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.743428 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.743501 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 11 00:30:27.743562 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 11 00:30:27.743624 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 11 00:30:27.743684 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.743743 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.743796 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 11 00:30:27.743847 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 11 00:30:27.743899 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 11 00:30:27.743949 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.744008 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.744065 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 11 00:30:27.744117 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 11 00:30:27.744168 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Sep 11 00:30:27.744222 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.744280 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.744332 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 11 00:30:27.744383 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 11 00:30:27.744437 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 11 00:30:27.745552 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 11 00:30:27.745626 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.745702 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.745778 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 11 00:30:27.745845 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 11 00:30:27.745913 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 11 00:30:27.745979 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 11 00:30:27.746039 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.746130 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.746188 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 11 00:30:27.746239 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 11 00:30:27.746295 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 11 00:30:27.746357 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.746424 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.746503 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 11 00:30:27.746561 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 11 00:30:27.746613 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 
11 00:30:27.746664 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.746726 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.746781 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 11 00:30:27.746844 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 11 00:30:27.746897 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 11 00:30:27.746948 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.747005 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.747057 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 11 00:30:27.747127 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 11 00:30:27.747190 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 11 00:30:27.747254 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.747336 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.747390 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 11 00:30:27.747441 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 11 00:30:27.747510 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 11 00:30:27.747566 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.747629 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 11 00:30:27.747685 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 11 00:30:27.747736 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 11 00:30:27.747787 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 11 00:30:27.747839 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.747906 kernel: pci_bus 0000:01: extended config space not accessible Sep 11 00:30:27.747972 kernel: pci 
0000:00:01.0: PCI bridge to [bus 01] Sep 11 00:30:27.748028 kernel: pci_bus 0000:02: extended config space not accessible Sep 11 00:30:27.748038 kernel: acpiphp: Slot [32] registered Sep 11 00:30:27.748047 kernel: acpiphp: Slot [33] registered Sep 11 00:30:27.748053 kernel: acpiphp: Slot [34] registered Sep 11 00:30:27.748058 kernel: acpiphp: Slot [35] registered Sep 11 00:30:27.748065 kernel: acpiphp: Slot [36] registered Sep 11 00:30:27.748073 kernel: acpiphp: Slot [37] registered Sep 11 00:30:27.748079 kernel: acpiphp: Slot [38] registered Sep 11 00:30:27.748085 kernel: acpiphp: Slot [39] registered Sep 11 00:30:27.748091 kernel: acpiphp: Slot [40] registered Sep 11 00:30:27.748097 kernel: acpiphp: Slot [41] registered Sep 11 00:30:27.748104 kernel: acpiphp: Slot [42] registered Sep 11 00:30:27.748110 kernel: acpiphp: Slot [43] registered Sep 11 00:30:27.748116 kernel: acpiphp: Slot [44] registered Sep 11 00:30:27.748122 kernel: acpiphp: Slot [45] registered Sep 11 00:30:27.748128 kernel: acpiphp: Slot [46] registered Sep 11 00:30:27.748134 kernel: acpiphp: Slot [47] registered Sep 11 00:30:27.748141 kernel: acpiphp: Slot [48] registered Sep 11 00:30:27.748150 kernel: acpiphp: Slot [49] registered Sep 11 00:30:27.748156 kernel: acpiphp: Slot [50] registered Sep 11 00:30:27.748162 kernel: acpiphp: Slot [51] registered Sep 11 00:30:27.748170 kernel: acpiphp: Slot [52] registered Sep 11 00:30:27.748176 kernel: acpiphp: Slot [53] registered Sep 11 00:30:27.748182 kernel: acpiphp: Slot [54] registered Sep 11 00:30:27.748191 kernel: acpiphp: Slot [55] registered Sep 11 00:30:27.748198 kernel: acpiphp: Slot [56] registered Sep 11 00:30:27.748204 kernel: acpiphp: Slot [57] registered Sep 11 00:30:27.748210 kernel: acpiphp: Slot [58] registered Sep 11 00:30:27.748216 kernel: acpiphp: Slot [59] registered Sep 11 00:30:27.748222 kernel: acpiphp: Slot [60] registered Sep 11 00:30:27.748229 kernel: acpiphp: Slot [61] registered Sep 11 00:30:27.748235 kernel: acpiphp: Slot 
[62] registered Sep 11 00:30:27.748243 kernel: acpiphp: Slot [63] registered Sep 11 00:30:27.748306 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 11 00:30:27.748366 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 11 00:30:27.748417 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 11 00:30:27.748485 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 11 00:30:27.748538 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 11 00:30:27.748597 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 11 00:30:27.748659 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Sep 11 00:30:27.748720 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Sep 11 00:30:27.748789 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 11 00:30:27.748843 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 11 00:30:27.748901 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 11 00:30:27.748960 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 11 00:30:27.749023 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 11 00:30:27.749084 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 11 00:30:27.749152 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 11 00:30:27.749208 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 11 00:30:27.749264 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 11 00:30:27.749327 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 11 00:30:27.749400 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 11 00:30:27.749480 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 11 00:30:27.749546 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Sep 11 00:30:27.749611 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Sep 11 00:30:27.749674 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Sep 11 00:30:27.749734 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Sep 11 00:30:27.749902 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Sep 11 00:30:27.750170 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 11 00:30:27.750234 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 11 00:30:27.750301 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 11 00:30:27.750371 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 11 00:30:27.750442 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 11 00:30:27.750512 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 11 00:30:27.750582 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 11 00:30:27.750655 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 11 00:30:27.750721 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 11 00:30:27.750791 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 11 00:30:27.750860 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 11 00:30:27.750929 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 11 00:30:27.751003 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 11 00:30:27.751077 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 11 00:30:27.751147 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 11 00:30:27.751215 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 11 00:30:27.751279 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 11 00:30:27.751351 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 11 00:30:27.751412 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 11 00:30:27.751495 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 11 00:30:27.751560 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 11 00:30:27.751619 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 11 00:30:27.751680 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 11 00:30:27.751740 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 11 00:30:27.751805 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 11 00:30:27.751882 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 11 00:30:27.751940 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 11 00:30:27.752007 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 11 00:30:27.752021 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 11 00:30:27.752031 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 11 00:30:27.752041 kernel: ACPI: PCI: Interrupt link LNKB 
disabled Sep 11 00:30:27.752048 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 11 00:30:27.752057 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 11 00:30:27.752070 kernel: iommu: Default domain type: Translated Sep 11 00:30:27.752078 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 11 00:30:27.752086 kernel: PCI: Using ACPI for IRQ routing Sep 11 00:30:27.752093 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 11 00:30:27.752099 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 11 00:30:27.752105 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 11 00:30:27.752180 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 11 00:30:27.752251 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 11 00:30:27.752311 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 11 00:30:27.752320 kernel: vgaarb: loaded Sep 11 00:30:27.752326 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 11 00:30:27.752333 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 11 00:30:27.752340 kernel: clocksource: Switched to clocksource tsc-early Sep 11 00:30:27.752353 kernel: VFS: Disk quotas dquot_6.6.0 Sep 11 00:30:27.752361 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 11 00:30:27.752370 kernel: pnp: PnP ACPI init Sep 11 00:30:27.752435 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 11 00:30:27.752514 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 11 00:30:27.752577 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 11 00:30:27.752639 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 11 00:30:27.752697 kernel: pnp 00:06: [dma 2] Sep 11 00:30:27.752759 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 11 00:30:27.752824 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 11 
00:30:27.752881 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 11 00:30:27.752893 kernel: pnp: PnP ACPI: found 8 devices Sep 11 00:30:27.752904 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 11 00:30:27.752913 kernel: NET: Registered PF_INET protocol family Sep 11 00:30:27.752919 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 11 00:30:27.752925 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 11 00:30:27.752933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 11 00:30:27.752943 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 11 00:30:27.752954 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 11 00:30:27.752961 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 11 00:30:27.752967 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 11 00:30:27.752973 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 11 00:30:27.752979 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 11 00:30:27.752987 kernel: NET: Registered PF_XDP protocol family Sep 11 00:30:27.753048 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 11 00:30:27.753114 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 11 00:30:27.753176 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 11 00:30:27.753240 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 11 00:30:27.753306 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 11 00:30:27.753364 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 11 00:30:27.753427 
kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 11 00:30:27.753521 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 11 00:30:27.753583 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 11 00:30:27.753658 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 11 00:30:27.753718 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 11 00:30:27.753779 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 11 00:30:27.753841 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 11 00:30:27.753910 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 11 00:30:27.753992 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 11 00:30:27.754063 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 11 00:30:27.754139 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Sep 11 00:30:27.754214 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 11 00:30:27.754286 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 11 00:30:27.754349 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 11 00:30:27.754432 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 11 00:30:27.754504 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 11 00:30:27.754588 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 11 00:30:27.755590 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Sep 11 00:30:27.755668 kernel: pci 
0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Sep 11 00:30:27.755743 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.755812 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.755876 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.755951 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756022 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.756092 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756160 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.756228 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756299 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.756375 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756449 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.756526 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756591 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.756662 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756723 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.756789 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756860 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.756927 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.756990 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space 
Sep 11 00:30:27.757048 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.757119 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.757191 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.757257 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.757318 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.757377 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.757441 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.758549 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.758640 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.758712 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.758778 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.758843 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.758917 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.758996 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.759080 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.759153 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.759219 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.759281 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.759342 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.759415 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: 
can't assign; no space Sep 11 00:30:27.759483 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.759542 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.759597 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.759656 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.759725 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.759784 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.759852 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.761002 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.761093 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.761179 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.761268 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.761350 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.761432 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762003 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762066 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762122 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762176 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762229 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762281 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762335 kernel: pci 0000:00:17.5: bridge 
window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762397 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762450 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762520 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762576 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762629 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762681 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762732 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762786 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762849 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.762907 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.762958 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.763010 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.763062 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.763114 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.763165 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.763222 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.763273 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.763326 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.763377 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.763430 kernel: pci 
0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.764541 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.764613 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.764670 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.764725 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 11 00:30:27.764777 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 11 00:30:27.764843 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 11 00:30:27.764896 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 11 00:30:27.764947 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 11 00:30:27.764998 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 11 00:30:27.765048 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 11 00:30:27.765104 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Sep 11 00:30:27.765157 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 11 00:30:27.765209 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 11 00:30:27.765263 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 11 00:30:27.765314 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 11 00:30:27.765373 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 11 00:30:27.765424 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 11 00:30:27.765489 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 11 00:30:27.765549 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 11 00:30:27.765603 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 11 00:30:27.765654 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 11 00:30:27.765706 kernel: pci 0000:00:15.2: bridge window 
[mem 0xfcd00000-0xfcdfffff] Sep 11 00:30:27.765760 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 11 00:30:27.765811 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 11 00:30:27.765862 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 11 00:30:27.765913 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 11 00:30:27.765964 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 11 00:30:27.766014 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 11 00:30:27.766066 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 11 00:30:27.766117 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 11 00:30:27.766170 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 11 00:30:27.766221 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 11 00:30:27.766274 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 11 00:30:27.766325 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 11 00:30:27.766377 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 11 00:30:27.766430 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 11 00:30:27.766718 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 11 00:30:27.766774 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 11 00:30:27.766834 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Sep 11 00:30:27.766887 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 11 00:30:27.766939 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 11 00:30:27.766989 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 11 00:30:27.767040 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 11 00:30:27.767092 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 11 00:30:27.767143 
kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 11 00:30:27.767194 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 11 00:30:27.767247 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 11 00:30:27.767299 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 11 00:30:27.767349 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 11 00:30:27.767400 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 11 00:30:27.767451 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 11 00:30:27.767514 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 11 00:30:27.767565 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 11 00:30:27.767615 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 11 00:30:27.767667 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 11 00:30:27.767720 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 11 00:30:27.767771 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 11 00:30:27.767842 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 11 00:30:27.767902 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 11 00:30:27.767954 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 11 00:30:27.768035 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 11 00:30:27.768095 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 11 00:30:27.768149 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 11 00:30:27.768201 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 11 00:30:27.768252 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 11 00:30:27.768303 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 11 00:30:27.768360 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 
11 00:30:27.768411 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 11 00:30:27.768614 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 11 00:30:27.768671 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 11 00:30:27.768727 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 11 00:30:27.768780 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 11 00:30:27.768831 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 11 00:30:27.768882 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 11 00:30:27.768934 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 11 00:30:27.768984 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 11 00:30:27.769035 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 11 00:30:27.769086 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 11 00:30:27.769137 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 11 00:30:27.769187 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 11 00:30:27.769240 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 11 00:30:27.769292 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 11 00:30:27.769343 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 11 00:30:27.769406 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 11 00:30:27.769469 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 11 00:30:27.769528 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 11 00:30:27.769579 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 11 00:30:27.769676 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 11 00:30:27.769730 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 11 00:30:27.769781 kernel: pci 0000:00:17.6: bridge window [mem 
0xe6200000-0xe62fffff 64bit pref] Sep 11 00:30:27.769833 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 11 00:30:27.769884 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 11 00:30:27.769935 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 11 00:30:27.769988 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 11 00:30:27.770041 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 11 00:30:27.770092 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 11 00:30:27.770143 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 11 00:30:27.770194 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 11 00:30:27.770245 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 11 00:30:27.770295 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 11 00:30:27.770346 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 11 00:30:27.770397 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 11 00:30:27.770447 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 11 00:30:27.770521 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 11 00:30:27.770579 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 11 00:30:27.770630 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 11 00:30:27.770681 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 11 00:30:27.770734 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 11 00:30:27.770785 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 11 00:30:27.770836 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 11 00:30:27.770891 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 11 00:30:27.770943 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 11 00:30:27.770994 kernel: pci 0000:00:18.5: 
bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 11 00:30:27.771047 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 11 00:30:27.771099 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 11 00:30:27.771150 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 11 00:30:27.771203 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 11 00:30:27.771254 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 11 00:30:27.771307 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 11 00:30:27.771360 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 11 00:30:27.771405 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 11 00:30:27.771450 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 11 00:30:27.771532 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 11 00:30:27.771577 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 11 00:30:27.771627 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 11 00:30:27.771677 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 11 00:30:27.771723 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 11 00:30:27.771770 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 11 00:30:27.771816 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 11 00:30:27.771865 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 11 00:30:27.771912 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 11 00:30:27.771958 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 11 00:30:27.772011 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 11 00:30:27.772059 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 11 00:30:27.772106 kernel: pci_bus 0000:03: resource 2 [mem 
0xc0000000-0xc01fffff 64bit pref] Sep 11 00:30:27.772156 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 11 00:30:27.772203 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 11 00:30:27.772249 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 11 00:30:27.772299 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 11 00:30:27.772352 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 11 00:30:27.772398 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 11 00:30:27.772448 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 11 00:30:27.772506 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 11 00:30:27.772557 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 11 00:30:27.772604 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 11 00:30:27.772654 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 11 00:30:27.772704 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 11 00:30:27.772755 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 11 00:30:27.772801 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 11 00:30:27.772850 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 11 00:30:27.772897 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 11 00:30:27.772950 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 11 00:30:27.772997 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 11 00:30:27.773043 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 11 00:30:27.773094 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Sep 11 00:30:27.773142 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 11 00:30:27.773188 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 
64bit pref] Sep 11 00:30:27.773238 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 11 00:30:27.773287 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 11 00:30:27.773333 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 11 00:30:27.773387 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 11 00:30:27.773434 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 11 00:30:27.773500 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 11 00:30:27.773548 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 11 00:30:27.773602 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 11 00:30:27.773648 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 11 00:30:27.773697 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 11 00:30:27.773744 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 11 00:30:27.773795 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 11 00:30:27.773842 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 11 00:30:27.773894 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 11 00:30:27.773941 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 11 00:30:27.773987 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 11 00:30:27.774037 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 11 00:30:27.774088 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 11 00:30:27.774134 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 11 00:30:27.774185 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Sep 11 00:30:27.774234 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 11 00:30:27.774280 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 11 
00:30:27.774332 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 11 00:30:27.774381 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 11 00:30:27.774431 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 11 00:30:27.774501 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 11 00:30:27.774557 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 11 00:30:27.774604 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 11 00:30:27.774655 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 11 00:30:27.774701 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 11 00:30:27.774751 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 11 00:30:27.774798 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 11 00:30:27.774851 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 11 00:30:27.774900 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 11 00:30:27.774946 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 11 00:30:27.774995 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 11 00:30:27.775041 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 11 00:30:27.775087 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 11 00:30:27.775138 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 11 00:30:27.775186 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 11 00:30:27.775238 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 11 00:30:27.775284 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 11 00:30:27.775336 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 11 00:30:27.775387 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] 
Sep 11 00:30:27.775437 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 11 00:30:27.775497 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 11 00:30:27.775548 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 11 00:30:27.775595 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 11 00:30:27.775644 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 11 00:30:27.775691 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 11 00:30:27.775746 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 11 00:30:27.775755 kernel: PCI: CLS 32 bytes, default 64 Sep 11 00:30:27.775763 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 11 00:30:27.775769 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 11 00:30:27.775776 kernel: clocksource: Switched to clocksource tsc Sep 11 00:30:27.775781 kernel: Initialise system trusted keyrings Sep 11 00:30:27.775788 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 11 00:30:27.775794 kernel: Key type asymmetric registered Sep 11 00:30:27.775800 kernel: Asymmetric key parser 'x509' registered Sep 11 00:30:27.775806 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 11 00:30:27.775812 kernel: io scheduler mq-deadline registered Sep 11 00:30:27.775819 kernel: io scheduler kyber registered Sep 11 00:30:27.775825 kernel: io scheduler bfq registered Sep 11 00:30:27.775877 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 11 00:30:27.775929 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.775981 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 11 00:30:27.776032 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776083 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 11 00:30:27.776137 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776188 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 11 00:30:27.776240 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776291 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 11 00:30:27.776342 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776394 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 11 00:30:27.776445 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776517 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 11 00:30:27.776573 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776623 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 11 00:30:27.776674 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776726 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 11 00:30:27.776777 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776829 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 11 00:30:27.776882 kernel: pcieport 
0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.776933 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 11 00:30:27.776984 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.777036 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 11 00:30:27.777096 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.777149 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 11 00:30:27.777201 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.777253 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 11 00:30:27.777306 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.777357 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 11 00:30:27.777408 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.777469 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 11 00:30:27.777524 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.777590 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 11 00:30:27.777643 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 11 00:30:27.777697 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 11 
00:30:27.777749 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.777800 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Sep 11 00:30:27.777851 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.777903 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Sep 11 00:30:27.777955 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778006 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Sep 11 00:30:27.778057 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778111 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Sep 11 00:30:27.778162 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778213 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Sep 11 00:30:27.778264 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778315 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Sep 11 00:30:27.778373 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778425 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Sep 11 00:30:27.778493 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778546 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Sep 11 00:30:27.778598 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778649 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Sep 11 00:30:27.778701 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778751 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Sep 11 00:30:27.778802 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778854 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Sep 11 00:30:27.778910 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.778961 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Sep 11 00:30:27.779013 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.779065 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Sep 11 00:30:27.779116 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.779168 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Sep 11 00:30:27.779219 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Sep 11 00:30:27.779232 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:30:27.779238 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:30:27.779245 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:30:27.779251 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Sep 11 00:30:27.779258 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:30:27.779264 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:30:27.779319 kernel: rtc_cmos 00:01: registered as rtc0
Sep 11 00:30:27.779372 kernel: rtc_cmos 00:01: setting system clock to 2025-09-11T00:30:27 UTC (1757550627)
Sep 11 00:30:27.779421 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Sep 11 00:30:27.779430 kernel: intel_pstate: CPU model not supported
Sep 11 00:30:27.779437 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:30:27.779443 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:30:27.779449 kernel: Segment Routing with IPv6
Sep 11 00:30:27.779456 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:30:27.779470 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:30:27.779476 kernel: Key type dns_resolver registered
Sep 11 00:30:27.779485 kernel: IPI shorthand broadcast: enabled
Sep 11 00:30:27.779491 kernel: sched_clock: Marking stable (2600003305, 165843637)->(2783796449, -17949507)
Sep 11 00:30:27.779498 kernel: registered taskstats version 1
Sep 11 00:30:27.779504 kernel: Loading compiled-in X.509 certificates
Sep 11 00:30:27.779510 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:30:27.779516 kernel: Demotion targets for Node 0: null
Sep 11 00:30:27.779523 kernel: Key type .fscrypt registered
Sep 11 00:30:27.779529 kernel: Key type fscrypt-provisioning registered
Sep 11 00:30:27.779536 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:30:27.779543 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:30:27.779549 kernel: ima: No architecture policies found
Sep 11 00:30:27.779556 kernel: clk: Disabling unused clocks
Sep 11 00:30:27.779562 kernel: Warning: unable to open an initial console.
Sep 11 00:30:27.779568 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 11 00:30:27.779575 kernel: Write protecting the kernel read-only data: 24576k
Sep 11 00:30:27.779581 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 11 00:30:27.779587 kernel: Run /init as init process
Sep 11 00:30:27.779594 kernel: with arguments:
Sep 11 00:30:27.779601 kernel: /init
Sep 11 00:30:27.779607 kernel: with environment:
Sep 11 00:30:27.779613 kernel: HOME=/
Sep 11 00:30:27.779619 kernel: TERM=linux
Sep 11 00:30:27.779626 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 00:30:27.779633 systemd[1]: Successfully made /usr/ read-only.
Sep 11 00:30:27.779642 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:30:27.779649 systemd[1]: Detected virtualization vmware.
Sep 11 00:30:27.779656 systemd[1]: Detected architecture x86-64.
Sep 11 00:30:27.779662 systemd[1]: Running in initrd.
Sep 11 00:30:27.779669 systemd[1]: No hostname configured, using default hostname.
Sep 11 00:30:27.779676 systemd[1]: Hostname set to .
Sep 11 00:30:27.779682 systemd[1]: Initializing machine ID from random generator.
Sep 11 00:30:27.779689 systemd[1]: Queued start job for default target initrd.target.
Sep 11 00:30:27.779695 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:30:27.779702 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:30:27.779710 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:30:27.779717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:30:27.779724 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:30:27.779731 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:30:27.779738 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:30:27.779745 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:30:27.779752 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:30:27.779759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:30:27.779766 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:30:27.779773 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:30:27.779779 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:30:27.779786 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:30:27.779792 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:30:27.779799 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:30:27.779806 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:30:27.779814 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:30:27.779820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:30:27.779827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:30:27.779834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:30:27.779840 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:30:27.779847 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:30:27.779854 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:30:27.779862 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:30:27.779869 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:30:27.779876 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:30:27.779883 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:30:27.779890 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:30:27.779896 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:30:27.779903 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:30:27.779912 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:30:27.779918 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:30:27.779925 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:30:27.779943 systemd-journald[243]: Collecting audit messages is disabled.
Sep 11 00:30:27.779962 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:30:27.779969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:30:27.779976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:30:27.779983 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:30:27.779990 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 00:30:27.779996 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 00:30:27.780003 kernel: Bridge firewalling registered
Sep 11 00:30:27.780011 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:30:27.780018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:30:27.780025 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:30:27.780032 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:30:27.780038 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:30:27.780045 systemd-journald[243]: Journal started
Sep 11 00:30:27.780060 systemd-journald[243]: Runtime Journal (/run/log/journal/b50e34b9d65f4defb99d06064f9c2724) is 4.8M, max 38.9M, 34M free.
Sep 11 00:30:27.718682 systemd-modules-load[244]: Inserted module 'overlay'
Sep 11 00:30:27.758200 systemd-modules-load[244]: Inserted module 'br_netfilter'
Sep 11 00:30:27.781467 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:30:27.783942 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:30:27.787289 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:30:27.795483 systemd-tmpfiles[286]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:30:27.797693 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:30:27.798973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:30:27.830181 systemd-resolved[318]: Positive Trust Anchors:
Sep 11 00:30:27.830958 systemd-resolved[318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:30:27.831115 systemd-resolved[318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:30:27.833890 systemd-resolved[318]: Defaulting to hostname 'linux'.
Sep 11 00:30:27.834668 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:30:27.834809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:30:27.847474 kernel: SCSI subsystem initialized
Sep 11 00:30:27.865491 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:30:27.875472 kernel: iscsi: registered transport (tcp)
Sep 11 00:30:27.899483 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:30:27.899529 kernel: QLogic iSCSI HBA Driver
Sep 11 00:30:27.910084 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:30:27.927430 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:30:27.928415 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:30:27.954589 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:30:27.955560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:30:27.998503 kernel: raid6: avx2x4 gen() 44307 MB/s
Sep 11 00:30:28.014480 kernel: raid6: avx2x2 gen() 52100 MB/s
Sep 11 00:30:28.031838 kernel: raid6: avx2x1 gen() 41629 MB/s
Sep 11 00:30:28.031888 kernel: raid6: using algorithm avx2x2 gen() 52100 MB/s
Sep 11 00:30:28.049699 kernel: raid6: .... xor() 28173 MB/s, rmw enabled
Sep 11 00:30:28.049749 kernel: raid6: using avx2x2 recovery algorithm
Sep 11 00:30:28.063473 kernel: xor: automatically using best checksumming function avx
Sep 11 00:30:28.169483 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:30:28.172825 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:30:28.174156 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:30:28.193239 systemd-udevd[492]: Using default interface naming scheme 'v255'.
Sep 11 00:30:28.196617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:30:28.197360 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:30:28.214753 dracut-pre-trigger[493]: rd.md=0: removing MD RAID activation
Sep 11 00:30:28.230408 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:30:28.231709 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:30:28.308261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:30:28.310192 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:30:28.387480 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Sep 11 00:30:28.387520 kernel: vmw_pvscsi: using 64bit dma
Sep 11 00:30:28.392478 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI
Sep 11 00:30:28.395477 kernel: vmw_pvscsi: max_id: 16
Sep 11 00:30:28.395507 kernel: vmw_pvscsi: setting ring_pages to 8
Sep 11 00:30:28.398578 kernel: vmw_pvscsi: enabling reqCallThreshold
Sep 11 00:30:28.398609 kernel: vmw_pvscsi: driver-based request coalescing enabled
Sep 11 00:30:28.398619 kernel: vmw_pvscsi: using MSI-X
Sep 11 00:30:28.398626 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Sep 11 00:30:28.399473 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Sep 11 00:30:28.401479 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Sep 11 00:30:28.404470 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Sep 11 00:30:28.409478 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Sep 11 00:30:28.423483 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Sep 11 00:30:28.425178 (udev-worker)[546]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Sep 11 00:30:28.431614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:30:28.431686 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:30:28.432194 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:30:28.433514 kernel: cryptd: max_cpu_qlen set to 1000
Sep 11 00:30:28.435575 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:30:28.438471 kernel: libata version 3.00 loaded.
Sep 11 00:30:28.449490 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Sep 11 00:30:28.450894 kernel: ata_piix 0000:00:07.1: version 2.13
Sep 11 00:30:28.452052 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 11 00:30:28.452154 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Sep 11 00:30:28.452252 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Sep 11 00:30:28.452347 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Sep 11 00:30:28.455663 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Sep 11 00:30:28.457631 kernel: scsi host1: ata_piix
Sep 11 00:30:28.457757 kernel: AES CTR mode by8 optimization enabled
Sep 11 00:30:28.457768 kernel: scsi host2: ata_piix
Sep 11 00:30:28.460351 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Sep 11 00:30:28.460388 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Sep 11 00:30:28.471474 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 11 00:30:28.471514 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 11 00:30:28.475053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:30:28.629479 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Sep 11 00:30:28.633707 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Sep 11 00:30:28.660767 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Sep 11 00:30:28.660933 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 11 00:30:28.673475 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 11 00:30:28.693227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Sep 11 00:30:28.706885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Sep 11 00:30:28.712613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Sep 11 00:30:28.717023 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Sep 11 00:30:28.717288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Sep 11 00:30:28.718085 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 00:30:28.762482 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 11 00:30:28.951950 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:30:28.952329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:30:28.952479 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:30:28.952692 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:30:28.953427 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 00:30:28.968181 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:30:29.820066 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 11 00:30:29.820343 disk-uuid[652]: The operation has completed successfully.
Sep 11 00:30:29.864291 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 00:30:29.864533 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 00:30:29.880713 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 00:30:29.900491 sh[682]: Success
Sep 11 00:30:29.915542 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 00:30:29.915578 kernel: device-mapper: uevent: version 1.0.3
Sep 11 00:30:29.915590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 00:30:29.925532 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 11 00:30:29.975703 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 00:30:29.978509 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 00:30:29.988654 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 00:30:30.003502 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (694)
Sep 11 00:30:30.003539 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66
Sep 11 00:30:30.004504 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:30:30.012952 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 11 00:30:30.013005 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 00:30:30.013020 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 00:30:30.015656 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 00:30:30.015997 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:30:30.016594 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Sep 11 00:30:30.018525 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 00:30:30.045486 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (717)
Sep 11 00:30:30.051503 kernel: BTRFS info (device sda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:30:30.051550 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:30:30.098527 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 11 00:30:30.098582 kernel: BTRFS info (device sda6): enabling free space tree
Sep 11 00:30:30.102477 kernel: BTRFS info (device sda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:30:30.102303 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 00:30:30.105046 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 00:30:30.303089 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Sep 11 00:30:30.303955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 00:30:30.392608 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:30:30.393894 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:30:30.411398 ignition[739]: Ignition 2.21.0
Sep 11 00:30:30.411408 ignition[739]: Stage: fetch-offline
Sep 11 00:30:30.411427 ignition[739]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:30:30.411434 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 11 00:30:30.411506 ignition[739]: parsed url from cmdline: ""
Sep 11 00:30:30.411509 ignition[739]: no config URL provided
Sep 11 00:30:30.411514 ignition[739]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:30:30.411520 ignition[739]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:30:30.411938 ignition[739]: config successfully fetched
Sep 11 00:30:30.411959 ignition[739]: parsing config with SHA512: 2116f8352b3fc8b86b1c4db9488e9d719e745a6fddb40f0621f7653d09ccbe74375f5f1e795fa1fb12b062980121d5e1094db588e607ed6c20a42b4aaab7a125
Sep 11 00:30:30.416801 unknown[739]: fetched base config from "system"
Sep 11 00:30:30.416957 unknown[739]: fetched user config from "vmware"
Sep 11 00:30:30.417263 ignition[739]: fetch-offline: fetch-offline passed
Sep 11 00:30:30.417295 ignition[739]: Ignition finished successfully
Sep 11 00:30:30.418051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:30:30.423390 systemd-networkd[877]: lo: Link UP
Sep 11 00:30:30.423397 systemd-networkd[877]: lo: Gained carrier
Sep 11 00:30:30.424336 systemd-networkd[877]: Enumeration completed
Sep 11 00:30:30.424489 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:30:30.424682 systemd-networkd[877]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Sep 11 00:30:30.424796 systemd[1]: Reached target network.target - Network.
Sep 11 00:30:30.425057 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 00:30:30.436896 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Sep 11 00:30:30.436999 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Sep 11 00:30:30.434544 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 00:30:30.434600 systemd-networkd[877]: ens192: Link UP
Sep 11 00:30:30.434603 systemd-networkd[877]: ens192: Gained carrier
Sep 11 00:30:30.449452 ignition[882]: Ignition 2.21.0
Sep 11 00:30:30.449758 ignition[882]: Stage: kargs
Sep 11 00:30:30.450098 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:30:30.450106 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 11 00:30:30.450561 ignition[882]: kargs: kargs passed
Sep 11 00:30:30.450586 ignition[882]: Ignition finished successfully
Sep 11 00:30:30.451980 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 00:30:30.452651 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 00:30:30.464064 ignition[890]: Ignition 2.21.0
Sep 11 00:30:30.464070 ignition[890]: Stage: disks
Sep 11 00:30:30.464145 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:30:30.464151 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 11 00:30:30.466860 ignition[890]: disks: disks passed
Sep 11 00:30:30.467003 ignition[890]: Ignition finished successfully
Sep 11 00:30:30.467948 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 00:30:30.468292 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 00:30:30.468558 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 00:30:30.468821 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:30:30.469046 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:30:30.469278 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:30:30.470088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 00:30:30.545682 systemd-fsck[898]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Sep 11 00:30:30.547739 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 00:30:30.549394 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 00:30:30.760489 kernel: EXT4-fs (sda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none.
Sep 11 00:30:30.760355 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 00:30:30.760826 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:30:30.761945 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:30:30.763504 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 00:30:30.763922 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 00:30:30.763958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 00:30:30.763975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:30:30.773444 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 00:30:30.774331 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 00:30:30.780617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (906)
Sep 11 00:30:30.782923 kernel: BTRFS info (device sda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:30:30.782944 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:30:30.787471 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 11 00:30:30.787505 kernel: BTRFS info (device sda6): enabling free space tree
Sep 11 00:30:30.788788 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:30:30.809608 initrd-setup-root[930]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 00:30:30.818408 initrd-setup-root[937]: cut: /sysroot/etc/group: No such file or directory
Sep 11 00:30:30.827274 initrd-setup-root[944]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 00:30:30.829285 initrd-setup-root[951]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 00:30:30.943976 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 00:30:30.945071 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 00:30:30.946528 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 00:30:30.954488 kernel: BTRFS info (device sda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:30:30.966401 ignition[1018]: INFO : Ignition 2.21.0
Sep 11 00:30:30.966401 ignition[1018]: INFO : Stage: mount
Sep 11 00:30:30.966843 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:30:30.966843 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 11 00:30:30.967566 ignition[1018]: INFO : mount: mount passed
Sep 11 00:30:30.967566 ignition[1018]: INFO : Ignition finished successfully
Sep 11 00:30:30.968122 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 00:30:30.968889 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 00:30:31.001135 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 00:30:31.002053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:30:31.037480 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1027)
Sep 11 00:30:31.039349 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 00:30:31.045992 kernel: BTRFS info (device sda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:30:31.046031 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:30:31.089152 kernel: BTRFS info (device sda6): enabling ssd optimizations
Sep 11 00:30:31.089210 kernel: BTRFS info (device sda6): enabling free space tree
Sep 11 00:30:31.090936 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:30:31.112623 ignition[1048]: INFO : Ignition 2.21.0
Sep 11 00:30:31.112623 ignition[1048]: INFO : Stage: files
Sep 11 00:30:31.114598 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:30:31.114598 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Sep 11 00:30:31.114598 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:30:31.118442 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:30:31.118680 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:30:31.120987 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:30:31.121288 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:30:31.121765 unknown[1048]: wrote ssh authorized keys file for user: core
Sep 11 00:30:31.122019 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:30:31.123630 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 11 00:30:31.124043 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 11 00:30:31.228177 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:30:32.328710 systemd-networkd[877]: ens192: Gained IPv6LL
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:30:35.903356 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:30:35.909956 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:30:35.909956 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:30:35.909956 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:30:35.917050 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:30:35.917050 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:30:35.917050 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 11 00:30:36.512600 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:30:37.021469 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 11 00:30:37.021969 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 11 00:30:37.022346 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Sep 11 00:30:37.022346 ignition[1048]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 11 00:30:37.022809 ignition[1048]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:30:37.023196 ignition[1048]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:30:37.023196 ignition[1048]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 11 00:30:37.023196 ignition[1048]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 11 00:30:37.023705 ignition[1048]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:30:37.023705 ignition[1048]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:30:37.023705 ignition[1048]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 11 00:30:37.023705 ignition[1048]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:30:37.137322 ignition[1048]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:30:37.142109 ignition[1048]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 00:30:37.142109 ignition[1048]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 00:30:37.142109 ignition[1048]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:30:37.142109 ignition[1048]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:30:37.142109 ignition[1048]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:30:37.142109 ignition[1048]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:30:37.142109 ignition[1048]: INFO : files: files passed
Sep 11 00:30:37.142109 ignition[1048]: INFO : Ignition finished successfully
Sep 11 00:30:37.145403
systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 00:30:37.146666 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 00:30:37.148548 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 00:30:37.158684 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 00:30:37.158770 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 11 00:30:37.161917 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:30:37.161917 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:30:37.162943 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:30:37.164311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:30:37.164901 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 00:30:37.165832 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 00:30:37.203828 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 00:30:37.203907 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 00:30:37.204414 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 11 00:30:37.204562 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 00:30:37.204797 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 00:30:37.205330 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 00:30:37.223018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 11 00:30:37.224162 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 00:30:37.240241 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:30:37.240453 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:30:37.240735 systemd[1]: Stopped target timers.target - Timer Units. Sep 11 00:30:37.240988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 00:30:37.241069 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:30:37.241372 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 00:30:37.241688 systemd[1]: Stopped target basic.target - Basic System. Sep 11 00:30:37.241898 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 00:30:37.242121 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:30:37.242365 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 11 00:30:37.242612 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:30:37.242839 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 00:30:37.243071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:30:37.243307 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 00:30:37.243560 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 00:30:37.243770 systemd[1]: Stopped target swap.target - Swaps. Sep 11 00:30:37.243965 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 00:30:37.244034 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:30:37.244321 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:30:37.244596 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 11 00:30:37.244815 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 00:30:37.244864 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:30:37.245063 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 00:30:37.245128 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 11 00:30:37.245423 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 00:30:37.245509 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:30:37.245762 systemd[1]: Stopped target paths.target - Path Units. Sep 11 00:30:37.245932 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 00:30:37.249519 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:30:37.249688 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 00:30:37.249924 systemd[1]: Stopped target sockets.target - Socket Units. Sep 11 00:30:37.250093 systemd[1]: iscsid.socket: Deactivated successfully. Sep 11 00:30:37.250143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:30:37.250315 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 00:30:37.250358 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:30:37.250533 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 11 00:30:37.250605 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:30:37.250863 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 00:30:37.250924 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 00:30:37.252560 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 00:30:37.254512 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Sep 11 00:30:37.254754 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 00:30:37.254934 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:30:37.255273 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 00:30:37.255337 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:30:37.258299 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 00:30:37.258507 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 11 00:30:37.265508 ignition[1104]: INFO : Ignition 2.21.0 Sep 11 00:30:37.265508 ignition[1104]: INFO : Stage: umount Sep 11 00:30:37.265859 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:30:37.265859 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 11 00:30:37.266142 ignition[1104]: INFO : umount: umount passed Sep 11 00:30:37.266681 ignition[1104]: INFO : Ignition finished successfully Sep 11 00:30:37.267130 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 00:30:37.267352 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 00:30:37.267749 systemd[1]: Stopped target network.target - Network. Sep 11 00:30:37.267943 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 00:30:37.267970 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 00:30:37.268079 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 00:30:37.268100 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 00:30:37.268207 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 00:30:37.268231 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 00:30:37.268338 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 00:30:37.268359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Sep 11 00:30:37.268532 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 11 00:30:37.268920 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 00:30:37.270747 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 11 00:30:37.270812 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 11 00:30:37.272198 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 11 00:30:37.272446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 11 00:30:37.272968 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:30:37.273828 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:30:37.277658 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 11 00:30:37.277748 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 11 00:30:37.278853 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 11 00:30:37.278971 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 11 00:30:37.279195 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 11 00:30:37.279223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:30:37.280076 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 11 00:30:37.280229 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 11 00:30:37.280264 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:30:37.280523 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 11 00:30:37.280554 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 11 00:30:37.281909 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 11 00:30:37.281942 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:30:37.282307 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 11 00:30:37.282335 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 11 00:30:37.282523 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:30:37.283526 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 11 00:30:37.288661 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 11 00:30:37.289575 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:30:37.290065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 11 00:30:37.290099 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 11 00:30:37.290239 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 11 00:30:37.290262 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:30:37.290381 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 11 00:30:37.290408 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 11 00:30:37.291373 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 11 00:30:37.291404 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 11 00:30:37.291736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 11 00:30:37.291762 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 11 00:30:37.293544 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 11 00:30:37.293677 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 11 00:30:37.293705 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Sep 11 00:30:37.293941 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 11 00:30:37.293973 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:30:37.294248 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 11 00:30:37.294271 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:30:37.294450 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 00:30:37.294592 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:30:37.294882 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:30:37.294904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:30:37.297385 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 00:30:37.297438 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 11 00:30:37.297678 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 11 00:30:37.297710 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 11 00:30:37.297734 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:30:37.301359 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 11 00:30:37.304511 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 11 00:30:37.304808 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 11 00:30:37.304851 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 11 00:30:37.306258 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 11 00:30:37.306333 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Sep 11 00:30:37.306640 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 11 00:30:37.306696 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 11 00:30:37.307072 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 11 00:30:37.307804 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 11 00:30:37.314169 systemd[1]: Switching root. Sep 11 00:30:37.350675 systemd-journald[243]: Journal stopped Sep 11 00:30:39.589087 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Sep 11 00:30:39.589111 kernel: SELinux: policy capability network_peer_controls=1 Sep 11 00:30:39.589120 kernel: SELinux: policy capability open_perms=1 Sep 11 00:30:39.589126 kernel: SELinux: policy capability extended_socket_class=1 Sep 11 00:30:39.589131 kernel: SELinux: policy capability always_check_network=0 Sep 11 00:30:39.589138 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 11 00:30:39.589145 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 11 00:30:39.589150 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 11 00:30:39.589156 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 11 00:30:39.589162 kernel: SELinux: policy capability userspace_initial_context=0 Sep 11 00:30:39.589167 kernel: audit: type=1403 audit(1757550638.479:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 11 00:30:39.589174 systemd[1]: Successfully loaded SELinux policy in 91.957ms. Sep 11 00:30:39.589183 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.259ms. 
Sep 11 00:30:39.589190 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 11 00:30:39.589198 systemd[1]: Detected virtualization vmware. Sep 11 00:30:39.589204 systemd[1]: Detected architecture x86-64. Sep 11 00:30:39.589212 systemd[1]: Detected first boot. Sep 11 00:30:39.589219 systemd[1]: Initializing machine ID from random generator. Sep 11 00:30:39.589226 zram_generator::config[1147]: No configuration found. Sep 11 00:30:39.589312 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Sep 11 00:30:39.589323 kernel: Guest personality initialized and is active Sep 11 00:30:39.589329 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 11 00:30:39.589335 kernel: Initialized host personality Sep 11 00:30:39.589349 kernel: NET: Registered PF_VSOCK protocol family Sep 11 00:30:39.589356 systemd[1]: Populated /etc with preset unit settings. Sep 11 00:30:39.589364 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:30:39.589371 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Sep 11 00:30:39.589378 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 11 00:30:39.589385 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 11 00:30:39.589391 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 11 00:30:39.589400 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Sep 11 00:30:39.589407 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 11 00:30:39.589414 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 11 00:30:39.589421 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 11 00:30:39.589427 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 11 00:30:39.589434 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 11 00:30:39.589441 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 11 00:30:39.589449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 11 00:30:39.589547 systemd[1]: Created slice user.slice - User and Session Slice. Sep 11 00:30:39.589560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:30:39.589571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:30:39.589578 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 11 00:30:39.589585 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 11 00:30:39.589592 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 11 00:30:39.589599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 11 00:30:39.589607 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 11 00:30:39.589614 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:30:39.589621 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:30:39.589628 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Sep 11 00:30:39.589635 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 11 00:30:39.589642 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 11 00:30:39.589649 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 11 00:30:39.589656 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:30:39.589664 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:30:39.589671 systemd[1]: Reached target slices.target - Slice Units. Sep 11 00:30:39.589678 systemd[1]: Reached target swap.target - Swaps. Sep 11 00:30:39.589685 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 11 00:30:39.589692 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 11 00:30:39.589700 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 11 00:30:39.589708 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:30:39.589715 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 11 00:30:39.589722 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 11 00:30:39.589729 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 11 00:30:39.589736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 11 00:30:39.589743 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 11 00:30:39.589750 systemd[1]: Mounting media.mount - External Media Directory... Sep 11 00:30:39.589758 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:39.589765 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 11 00:30:39.589772 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 11 00:30:39.589779 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 11 00:30:39.589787 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 11 00:30:39.589794 systemd[1]: Reached target machines.target - Containers. Sep 11 00:30:39.589801 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 11 00:30:39.589808 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Sep 11 00:30:39.589816 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 11 00:30:39.589824 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 11 00:30:39.589830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:30:39.589837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:30:39.589845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:30:39.589852 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 11 00:30:39.589860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:30:39.589867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 11 00:30:39.589875 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 11 00:30:39.589883 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 11 00:30:39.589890 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 11 00:30:39.589896 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 11 00:30:39.589904 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:30:39.589917 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 11 00:30:39.589924 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 11 00:30:39.589931 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 11 00:30:39.596294 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 11 00:30:39.596313 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 11 00:30:39.596322 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 11 00:30:39.596329 systemd[1]: verity-setup.service: Deactivated successfully. Sep 11 00:30:39.596336 systemd[1]: Stopped verity-setup.service. Sep 11 00:30:39.596343 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:39.596351 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 11 00:30:39.596358 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 11 00:30:39.596365 systemd[1]: Mounted media.mount - External Media Directory. Sep 11 00:30:39.596375 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 11 00:30:39.596382 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 11 00:30:39.596389 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 11 00:30:39.596396 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:30:39.596403 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 11 00:30:39.596410 kernel: fuse: init (API version 7.41) Sep 11 00:30:39.596417 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 11 00:30:39.596424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:30:39.596433 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:30:39.596440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:30:39.596447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:30:39.596454 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 11 00:30:39.597518 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 11 00:30:39.597548 systemd-journald[1230]: Collecting audit messages is disabled. Sep 11 00:30:39.597569 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 11 00:30:39.597577 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 11 00:30:39.597584 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 11 00:30:39.597591 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 11 00:30:39.597599 systemd-journald[1230]: Journal started Sep 11 00:30:39.597618 systemd-journald[1230]: Runtime Journal (/run/log/journal/2693cdb0aef64ea59b69858674be1a85) is 4.8M, max 38.9M, 34M free. Sep 11 00:30:39.422580 systemd[1]: Queued start job for default target multi-user.target. Sep 11 00:30:39.430365 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 11 00:30:39.430606 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 11 00:30:39.601596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 11 00:30:39.601615 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 11 00:30:39.601721 jq[1217]: true Sep 11 00:30:39.606051 kernel: loop: module loaded Sep 11 00:30:39.606075 jq[1248]: true Sep 11 00:30:39.602945 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:30:39.603070 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:30:39.603320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 11 00:30:39.603582 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 11 00:30:39.604681 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 11 00:30:39.604824 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 11 00:30:39.615368 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 11 00:30:39.616692 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:30:39.617450 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 11 00:30:39.620562 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 11 00:30:39.621018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:30:39.622316 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 11 00:30:39.627614 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 11 00:30:39.627778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:30:39.629234 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 11 00:30:39.629360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 11 00:30:39.632562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 00:30:39.637254 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 11 00:30:39.638349 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 11 00:30:39.677516 systemd-journald[1230]: Time spent on flushing to /var/log/journal/2693cdb0aef64ea59b69858674be1a85 is 72.358ms for 1761 entries. Sep 11 00:30:39.677516 systemd-journald[1230]: System Journal (/var/log/journal/2693cdb0aef64ea59b69858674be1a85) is 8M, max 584.8M, 576.8M free. Sep 11 00:30:39.769796 systemd-journald[1230]: Received client request to flush runtime journal. Sep 11 00:30:39.769824 kernel: ACPI: bus type drm_connector registered Sep 11 00:30:39.769836 kernel: loop0: detected capacity change from 0 to 113872 Sep 11 00:30:39.769850 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 11 00:30:39.680063 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:30:39.684375 ignition[1266]: Ignition 2.21.0 Sep 11 00:30:39.680803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:30:39.684938 ignition[1266]: deleting config from guestinfo properties Sep 11 00:30:39.699026 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 11 00:30:39.696844 ignition[1266]: Successfully deleted config Sep 11 00:30:39.699035 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 11 00:30:39.702434 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 11 00:30:39.703775 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 11 00:30:39.704455 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 11 00:30:39.707896 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Sep 11 00:30:39.713182 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Sep 11 00:30:39.714096 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:30:39.719725 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 11 00:30:39.726899 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 11 00:30:39.772232 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 11 00:30:39.781586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 11 00:30:39.790476 kernel: loop1: detected capacity change from 0 to 224512 Sep 11 00:30:39.815151 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 11 00:30:39.829086 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 11 00:30:39.830198 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 11 00:30:39.854266 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Sep 11 00:30:39.854278 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Sep 11 00:30:39.857152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 11 00:30:39.862494 kernel: loop2: detected capacity change from 0 to 146240 Sep 11 00:30:39.930478 kernel: loop3: detected capacity change from 0 to 2960 Sep 11 00:30:39.999424 kernel: loop4: detected capacity change from 0 to 113872 Sep 11 00:30:40.058477 kernel: loop5: detected capacity change from 0 to 224512 Sep 11 00:30:40.120476 kernel: loop6: detected capacity change from 0 to 146240 Sep 11 00:30:40.199801 kernel: loop7: detected capacity change from 0 to 2960 Sep 11 00:30:40.211379 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Sep 11 00:30:40.211800 (sd-merge)[1324]: Merged extensions into '/usr'. 
Sep 11 00:30:40.215945 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... Sep 11 00:30:40.216029 systemd[1]: Reloading... Sep 11 00:30:40.258503 zram_generator::config[1350]: No configuration found. Sep 11 00:30:40.332929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:30:40.344119 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:30:40.397911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 11 00:30:40.398297 systemd[1]: Reloading finished in 181 ms. Sep 11 00:30:40.415736 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:30:40.416208 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:30:40.422302 systemd[1]: Starting ensure-sysext.service... Sep 11 00:30:40.424992 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:30:40.431585 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:30:40.440579 systemd[1]: Reload requested from client PID 1406 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:30:40.440591 systemd[1]: Reloading... Sep 11 00:30:40.455672 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:30:40.456509 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:30:40.456730 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 11 00:30:40.456893 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:30:40.457377 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:30:40.459575 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Sep 11 00:30:40.459613 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Sep 11 00:30:40.468190 systemd-udevd[1408]: Using default interface naming scheme 'v255'. Sep 11 00:30:40.473857 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:30:40.473924 systemd-tmpfiles[1407]: Skipping /boot Sep 11 00:30:40.479989 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:30:40.480056 systemd-tmpfiles[1407]: Skipping /boot Sep 11 00:30:40.481470 zram_generator::config[1435]: No configuration found. Sep 11 00:30:40.553488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:30:40.561402 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:30:40.617013 systemd[1]: Reloading finished in 176 ms. Sep 11 00:30:40.623328 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:30:40.630149 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:30:40.638661 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:30:40.647520 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 11 00:30:40.649600 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:30:40.654114 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:30:40.657298 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:30:40.659481 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:30:40.667656 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:30:40.668692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:40.672080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:30:40.673244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:30:40.691768 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:30:40.692154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:30:40.692237 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:30:40.692300 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:40.693993 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:30:40.704205 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:40.704339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 11 00:30:40.704428 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:30:40.705590 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:40.707489 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:30:40.707976 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:30:40.708085 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:30:40.712087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:30:40.712794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:30:40.717170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:40.719666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:30:40.721575 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:30:40.723713 ldconfig[1281]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 11 00:30:40.725318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:30:40.725545 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:30:40.725621 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 11 00:30:40.725715 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:30:40.731117 systemd[1]: Finished ensure-sysext.service. Sep 11 00:30:40.731832 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 11 00:30:40.736624 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:30:40.738448 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:30:40.743998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:30:40.748615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:30:40.749115 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:30:40.751679 augenrules[1558]: No rules Sep 11 00:30:40.752791 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:30:40.753088 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:30:40.759153 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:30:40.759296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:30:40.759892 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:30:40.760095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:30:40.760440 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:30:40.760617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:30:40.761204 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:30:40.765712 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 11 00:30:40.775557 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:30:40.776078 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:30:40.784588 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:30:40.861812 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:30:40.885786 systemd-networkd[1520]: lo: Link UP Sep 11 00:30:40.885792 systemd-networkd[1520]: lo: Gained carrier Sep 11 00:30:40.886267 systemd-networkd[1520]: Enumeration completed Sep 11 00:30:40.886328 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:30:40.887849 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:30:40.889225 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:30:40.899005 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 00:30:40.899169 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:30:40.902472 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 11 00:30:40.901103 systemd-resolved[1521]: Positive Trust Anchors: Sep 11 00:30:40.901111 systemd-resolved[1521]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:30:40.901133 systemd-resolved[1521]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:30:40.905471 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:30:40.905736 systemd-resolved[1521]: Defaulting to hostname 'linux'. Sep 11 00:30:40.906752 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:30:40.906959 systemd[1]: Reached target network.target - Network. Sep 11 00:30:40.907117 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:30:40.907308 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:30:40.907654 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:30:40.907825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:30:40.907986 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:30:40.908215 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:30:40.908403 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:30:40.908572 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 11 00:30:40.908729 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 00:30:40.908784 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:30:40.908884 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:30:40.909407 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:30:40.910767 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:30:40.913022 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:30:40.913239 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:30:40.913500 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:30:40.915065 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:30:40.915271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:30:40.915804 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:30:40.915804 systemd-networkd[1520]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Sep 11 00:30:40.916907 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:30:40.917196 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:30:40.917706 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 11 00:30:40.918599 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 11 00:30:40.919374 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:30:40.919745 systemd[1]: Reached target basic.target - Basic System. 
Sep 11 00:30:40.919869 systemd-networkd[1520]: ens192: Link UP Sep 11 00:30:40.919888 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:30:40.919904 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:30:40.920508 systemd-networkd[1520]: ens192: Gained carrier Sep 11 00:30:40.921580 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:30:40.923592 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 00:30:40.924507 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:30:40.925253 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:30:40.925685 systemd-timesyncd[1554]: Network configuration changed, trying to establish connection. Sep 11 00:30:40.929599 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:30:40.929738 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:30:40.936350 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:30:40.939642 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:30:40.944581 jq[1595]: false Sep 11 00:30:40.946165 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:30:40.949226 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:30:40.950720 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:30:40.958886 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:30:40.959491 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Sep 11 00:30:40.960433 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 00:30:40.961071 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:30:40.962887 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:30:40.965393 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Sep 11 00:30:40.970295 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing passwd entry cache Sep 11 00:30:40.972708 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:30:40.972982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:30:40.973104 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:30:40.984482 oslogin_cache_refresh[1598]: Refreshing passwd entry cache Sep 11 00:30:40.989481 jq[1605]: true Sep 11 00:30:40.992843 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting users, quitting Sep 11 00:30:40.992843 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:30:40.992843 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Refreshing group entry cache Sep 11 00:30:40.992392 oslogin_cache_refresh[1598]: Failure getting users, quitting Sep 11 00:30:40.992403 oslogin_cache_refresh[1598]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:30:40.992429 oslogin_cache_refresh[1598]: Refreshing group entry cache Sep 11 00:30:40.993342 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:30:40.993564 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 11 00:30:40.995418 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Failure getting groups, quitting Sep 11 00:30:40.995456 oslogin_cache_refresh[1598]: Failure getting groups, quitting Sep 11 00:30:40.995544 google_oslogin_nss_cache[1598]: oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:30:40.995570 oslogin_cache_refresh[1598]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:30:41.006380 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:30:41.006546 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:30:41.010915 (ntainerd)[1623]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:30:41.021478 extend-filesystems[1597]: Found /dev/sda6 Sep 11 00:30:41.020512 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Sep 11 00:30:41.023162 jq[1619]: true Sep 11 00:30:41.023927 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Sep 11 00:30:41.024258 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:30:41.025514 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:30:41.026380 update_engine[1604]: I20250911 00:30:41.026336 1604 main.cc:92] Flatcar Update Engine starting Sep 11 00:30:41.035872 tar[1610]: linux-amd64/LICENSE Sep 11 00:30:41.037466 tar[1610]: linux-amd64/helm Sep 11 00:30:41.041633 extend-filesystems[1597]: Found /dev/sda9 Sep 11 00:30:41.052767 extend-filesystems[1597]: Checking size of /dev/sda9 Sep 11 00:30:41.059194 dbus-daemon[1593]: [system] SELinux support is enabled Sep 11 00:30:41.059316 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 11 00:30:41.061029 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:30:41.061047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:30:41.061185 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:30:41.061195 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:30:41.064533 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:30:41.065706 update_engine[1604]: I20250911 00:30:41.065304 1604 update_check_scheduler.cc:74] Next update check in 3m33s Sep 11 00:30:41.074107 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:30:41.105864 bash[1659]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:30:41.105956 extend-filesystems[1597]: Old size kept for /dev/sda9 Sep 11 00:30:41.103946 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:30:41.104375 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 00:30:41.106231 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:30:41.106359 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:30:41.120993 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Sep 11 00:30:41.126197 unknown[1634]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Sep 11 00:30:41.129367 systemd-logind[1603]: New seat seat0. Sep 11 00:30:41.129950 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 11 00:30:41.132300 unknown[1634]: Core dump limit set to -1 Sep 11 00:30:41.173564 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 11 00:30:41.177520 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:30:41.231482 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Sep 11 00:30:41.236805 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:30:41.283581 sshd_keygen[1633]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:30:41.352093 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:30:41.356224 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:30:41.367771 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:30:41.384589 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:30:41.385436 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:30:41.388811 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:30:41.432030 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:30:41.433663 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:30:41.436368 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:30:41.436676 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 11 00:30:41.441535 containerd[1623]: time="2025-09-11T00:30:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:30:41.441535 containerd[1623]: time="2025-09-11T00:30:41.438691173Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:30:41.439738 systemd-logind[1603]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:30:41.449087 (udev-worker)[1503]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 11 00:30:41.454240 containerd[1623]: time="2025-09-11T00:30:41.454065900Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.157µs" Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456697178Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456720168Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456804658Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456814779Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456829810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456864651Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456872102Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456982561Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456991082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.456996908Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.457001410Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457568 containerd[1623]: time="2025-09-11T00:30:41.457045945Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457752 containerd[1623]: time="2025-09-11T00:30:41.457151686Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457752 containerd[1623]: time="2025-09-11T00:30:41.457167960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:30:41.457752 containerd[1623]: time="2025-09-11T00:30:41.457173641Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 Sep 11 00:30:41.457752 containerd[1623]: time="2025-09-11T00:30:41.457192396Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:30:41.457752 containerd[1623]: time="2025-09-11T00:30:41.457340260Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:30:41.457752 containerd[1623]: time="2025-09-11T00:30:41.457376980Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:30:41.465527 containerd[1623]: time="2025-09-11T00:30:41.465493579Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465599780Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465615189Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465622864Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465629596Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465635633Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465643241Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465685240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 
00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465692475Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465698010Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465704019Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465711519Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465773832Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465786051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:30:41.466488 containerd[1623]: time="2025-09-11T00:30:41.465795011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:30:41.465927 systemd-logind[1603]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465800747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465807122Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465813074Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465822102Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465828089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465834442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465842747Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465852831Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465887975Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465896493Z" level=info msg="Start snapshots syncer" Sep 11 00:30:41.466732 containerd[1623]: time="2025-09-11T00:30:41.465912365Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:30:41.466876 containerd[1623]: time="2025-09-11T00:30:41.466054408Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:30:41.466876 containerd[1623]: time="2025-09-11T00:30:41.466083771Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466131642Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466183261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466201096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466210472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466216898Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466223544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466229309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466235038Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466247584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466253671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466259104Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466273037Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466281321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:30:41.466949 containerd[1623]: time="2025-09-11T00:30:41.466286257Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466292102Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466296309Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466306552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466312484Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466321734Z" level=info msg="runtime interface created" Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466324657Z" level=info msg="created NRI interface" Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466329134Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466334437Z" level=info msg="Connect containerd service" Sep 11 00:30:41.467127 containerd[1623]: time="2025-09-11T00:30:41.466348279Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:30:41.470487 
containerd[1623]: time="2025-09-11T00:30:41.469526322Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:30:41.488703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:30:41.650049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664707401Z" level=info msg="Start subscribing containerd event" Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664746326Z" level=info msg="Start recovering state" Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664797253Z" level=info msg="Start event monitor" Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664805911Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664811968Z" level=info msg="Start streaming server" Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664819394Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664823585Z" level=info msg="runtime interface starting up..." Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664826486Z" level=info msg="starting plugins..." Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664834560Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.664940897Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:30:41.665012 containerd[1623]: time="2025-09-11T00:30:41.665010887Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 11 00:30:41.665508 containerd[1623]: time="2025-09-11T00:30:41.665267520Z" level=info msg="containerd successfully booted in 0.227631s" Sep 11 00:30:41.665337 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:30:41.698974 tar[1610]: linux-amd64/README.md Sep 11 00:30:41.711547 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:30:42.056546 systemd-networkd[1520]: ens192: Gained IPv6LL Sep 11 00:30:42.057613 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:30:42.058737 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:30:42.060156 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Sep 11 00:30:42.063213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:30:42.071781 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:30:42.092143 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:30:42.103255 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 00:30:42.103384 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 11 00:30:42.103910 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 00:30:42.863435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:30:42.863793 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:30:42.864732 systemd[1]: Startup finished in 2.649s (kernel) + 10.829s (initrd) + 4.475s (userspace) = 17.955s. 
Sep 11 00:30:42.875737 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:30:42.937575 login[1730]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:30:42.938989 login[1732]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 11 00:30:42.943857 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:30:42.945074 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:30:42.952078 systemd-logind[1603]: New session 2 of user core. Sep 11 00:30:42.958229 systemd-logind[1603]: New session 1 of user core. Sep 11 00:30:42.961929 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:30:42.963910 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:30:42.978009 (systemd)[1826]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:30:42.980151 systemd-logind[1603]: New session c1 of user core. Sep 11 00:30:43.071820 systemd[1826]: Queued start job for default target default.target. Sep 11 00:30:43.079330 systemd[1826]: Created slice app.slice - User Application Slice. Sep 11 00:30:43.079431 systemd[1826]: Reached target paths.target - Paths. Sep 11 00:30:43.079575 systemd[1826]: Reached target timers.target - Timers. Sep 11 00:30:43.080333 systemd[1826]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:30:43.089227 systemd[1826]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:30:43.089313 systemd[1826]: Reached target sockets.target - Sockets. Sep 11 00:30:43.089436 systemd[1826]: Reached target basic.target - Basic System. Sep 11 00:30:43.089549 systemd[1]: Started user@500.service - User Manager for UID 500. 
Sep 11 00:30:43.090479 systemd[1826]: Reached target default.target - Main User Target. Sep 11 00:30:43.090575 systemd[1826]: Startup finished in 106ms. Sep 11 00:30:43.090681 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:30:43.091410 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:30:43.397041 kubelet[1819]: E0911 00:30:43.397004 1819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:30:43.398555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:30:43.398705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:30:43.399087 systemd[1]: kubelet.service: Consumed 611ms CPU time, 263M memory peak. Sep 11 00:30:51.192787 systemd-timesyncd[1554]: Timed out waiting for reply from 23.186.168.128:123 (0.flatcar.pool.ntp.org). Sep 11 00:32:24.199449 systemd-resolved[1521]: Clock change detected. Flushing caches. Sep 11 00:32:24.199558 systemd-timesyncd[1554]: Contacted time server 208.88.126.235:123 (0.flatcar.pool.ntp.org). Sep 11 00:32:24.200315 systemd-timesyncd[1554]: Initial clock synchronization to Thu 2025-09-11 00:32:24.199365 UTC. Sep 11 00:32:26.583925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:32:26.585469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:26.880928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:32:26.886495 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:32:26.947996 kubelet[1870]: E0911 00:32:26.947961 1870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:32:26.950426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:32:26.950510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:32:26.950714 systemd[1]: kubelet.service: Consumed 99ms CPU time, 109.1M memory peak. Sep 11 00:32:37.113695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 11 00:32:37.115185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:37.476929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:37.487508 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:32:37.533833 kubelet[1886]: E0911 00:32:37.533784 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:32:37.535365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:32:37.535513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:32:37.535860 systemd[1]: kubelet.service: Consumed 102ms CPU time, 110.5M memory peak. 
Sep 11 00:32:44.272845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:32:44.275507 systemd[1]: Started sshd@0-139.178.70.106:22-139.178.89.65:57154.service - OpenSSH per-connection server daemon (139.178.89.65:57154). Sep 11 00:32:44.320589 sshd[1893]: Accepted publickey for core from 139.178.89.65 port 57154 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:32:44.321453 sshd-session[1893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:44.324703 systemd-logind[1603]: New session 3 of user core. Sep 11 00:32:44.336440 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:32:44.390627 systemd[1]: Started sshd@1-139.178.70.106:22-139.178.89.65:57158.service - OpenSSH per-connection server daemon (139.178.89.65:57158). Sep 11 00:32:44.430102 sshd[1898]: Accepted publickey for core from 139.178.89.65 port 57158 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:32:44.430752 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:44.433798 systemd-logind[1603]: New session 4 of user core. Sep 11 00:32:44.443372 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 11 00:32:44.491547 sshd[1900]: Connection closed by 139.178.89.65 port 57158 Sep 11 00:32:44.492260 sshd-session[1898]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:44.506446 systemd[1]: sshd@1-139.178.70.106:22-139.178.89.65:57158.service: Deactivated successfully. Sep 11 00:32:44.507397 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:32:44.507863 systemd-logind[1603]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:32:44.509276 systemd[1]: Started sshd@2-139.178.70.106:22-139.178.89.65:57164.service - OpenSSH per-connection server daemon (139.178.89.65:57164). Sep 11 00:32:44.510484 systemd-logind[1603]: Removed session 4. 
Sep 11 00:32:44.540089 sshd[1906]: Accepted publickey for core from 139.178.89.65 port 57164 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:32:44.540607 sshd-session[1906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:44.542992 systemd-logind[1603]: New session 5 of user core. Sep 11 00:32:44.550504 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:32:44.595954 sshd[1908]: Connection closed by 139.178.89.65 port 57164 Sep 11 00:32:44.596847 sshd-session[1906]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:44.606065 systemd[1]: sshd@2-139.178.70.106:22-139.178.89.65:57164.service: Deactivated successfully. Sep 11 00:32:44.607061 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:32:44.607701 systemd-logind[1603]: Session 5 logged out. Waiting for processes to exit. Sep 11 00:32:44.609101 systemd[1]: Started sshd@3-139.178.70.106:22-139.178.89.65:57168.service - OpenSSH per-connection server daemon (139.178.89.65:57168). Sep 11 00:32:44.610463 systemd-logind[1603]: Removed session 5. Sep 11 00:32:44.643236 sshd[1914]: Accepted publickey for core from 139.178.89.65 port 57168 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:32:44.644075 sshd-session[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:44.648137 systemd-logind[1603]: New session 6 of user core. Sep 11 00:32:44.659509 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:32:44.708795 sshd[1916]: Connection closed by 139.178.89.65 port 57168 Sep 11 00:32:44.709140 sshd-session[1914]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:44.721458 systemd[1]: sshd@3-139.178.70.106:22-139.178.89.65:57168.service: Deactivated successfully. Sep 11 00:32:44.723285 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:32:44.724506 systemd-logind[1603]: Session 6 logged out. 
Waiting for processes to exit. Sep 11 00:32:44.726465 systemd[1]: Started sshd@4-139.178.70.106:22-139.178.89.65:57172.service - OpenSSH per-connection server daemon (139.178.89.65:57172). Sep 11 00:32:44.727497 systemd-logind[1603]: Removed session 6. Sep 11 00:32:44.764253 sshd[1922]: Accepted publickey for core from 139.178.89.65 port 57172 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:32:44.764982 sshd-session[1922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:44.767617 systemd-logind[1603]: New session 7 of user core. Sep 11 00:32:44.778436 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:32:44.880188 sudo[1925]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:32:44.880407 sudo[1925]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:32:44.902023 sudo[1925]: pam_unix(sudo:session): session closed for user root Sep 11 00:32:44.903043 sshd[1924]: Connection closed by 139.178.89.65 port 57172 Sep 11 00:32:44.904039 sshd-session[1922]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:44.913998 systemd[1]: sshd@4-139.178.70.106:22-139.178.89.65:57172.service: Deactivated successfully. Sep 11 00:32:44.915145 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:32:44.915865 systemd-logind[1603]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:32:44.917655 systemd[1]: Started sshd@5-139.178.70.106:22-139.178.89.65:57178.service - OpenSSH per-connection server daemon (139.178.89.65:57178). Sep 11 00:32:44.919444 systemd-logind[1603]: Removed session 7. 
Sep 11 00:32:44.953435 sshd[1931]: Accepted publickey for core from 139.178.89.65 port 57178 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:32:44.954203 sshd-session[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:44.956903 systemd-logind[1603]: New session 8 of user core. Sep 11 00:32:44.967499 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:32:45.016855 sudo[1935]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:32:45.017057 sudo[1935]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:32:45.023473 sudo[1935]: pam_unix(sudo:session): session closed for user root Sep 11 00:32:45.027110 sudo[1934]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:32:45.027489 sudo[1934]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:32:45.034647 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:32:45.064557 augenrules[1957]: No rules Sep 11 00:32:45.065206 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:32:45.065394 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:32:45.066138 sudo[1934]: pam_unix(sudo:session): session closed for user root Sep 11 00:32:45.067045 sshd[1933]: Connection closed by 139.178.89.65 port 57178 Sep 11 00:32:45.067345 sshd-session[1931]: pam_unix(sshd:session): session closed for user core Sep 11 00:32:45.074325 systemd[1]: sshd@5-139.178.70.106:22-139.178.89.65:57178.service: Deactivated successfully. Sep 11 00:32:45.075144 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:32:45.075624 systemd-logind[1603]: Session 8 logged out. Waiting for processes to exit. 
Sep 11 00:32:45.077080 systemd[1]: Started sshd@6-139.178.70.106:22-139.178.89.65:57194.service - OpenSSH per-connection server daemon (139.178.89.65:57194). Sep 11 00:32:45.079504 systemd-logind[1603]: Removed session 8. Sep 11 00:32:45.111256 sshd[1967]: Accepted publickey for core from 139.178.89.65 port 57194 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE Sep 11 00:32:45.111943 sshd-session[1967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:32:45.115300 systemd-logind[1603]: New session 9 of user core. Sep 11 00:32:45.120379 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:32:45.168717 sudo[1970]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:32:45.169328 sudo[1970]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:32:45.556545 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:32:45.567526 (dockerd)[1988]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:32:45.772787 dockerd[1988]: time="2025-09-11T00:32:45.772743975Z" level=info msg="Starting up" Sep 11 00:32:45.773694 dockerd[1988]: time="2025-09-11T00:32:45.773675484Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:32:45.803915 dockerd[1988]: time="2025-09-11T00:32:45.803893010Z" level=info msg="Loading containers: start." Sep 11 00:32:45.811306 kernel: Initializing XFRM netlink socket Sep 11 00:32:45.961522 systemd-networkd[1520]: docker0: Link UP Sep 11 00:32:45.962731 dockerd[1988]: time="2025-09-11T00:32:45.962711286Z" level=info msg="Loading containers: done." 
Sep 11 00:32:45.972312 dockerd[1988]: time="2025-09-11T00:32:45.972245063Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:32:45.972389 dockerd[1988]: time="2025-09-11T00:32:45.972307396Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:32:45.972389 dockerd[1988]: time="2025-09-11T00:32:45.972371708Z" level=info msg="Initializing buildkit" Sep 11 00:32:45.983156 dockerd[1988]: time="2025-09-11T00:32:45.983131537Z" level=info msg="Completed buildkit initialization" Sep 11 00:32:45.988046 dockerd[1988]: time="2025-09-11T00:32:45.988018369Z" level=info msg="Daemon has completed initialization" Sep 11 00:32:45.988160 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:32:45.988539 dockerd[1988]: time="2025-09-11T00:32:45.988514433Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:32:46.788557 containerd[1623]: time="2025-09-11T00:32:46.788465236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 11 00:32:47.397891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431804463.mount: Deactivated successfully. Sep 11 00:32:47.613470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 11 00:32:47.615402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:47.808688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:32:47.811111 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:32:47.847231 kubelet[2255]: E0911 00:32:47.847206 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:32:47.848606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:32:47.848692 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:32:47.849009 systemd[1]: kubelet.service: Consumed 92ms CPU time, 108.4M memory peak. Sep 11 00:32:48.494257 containerd[1623]: time="2025-09-11T00:32:48.494211366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:48.495145 containerd[1623]: time="2025-09-11T00:32:48.495134002Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 11 00:32:48.495535 containerd[1623]: time="2025-09-11T00:32:48.495523509Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:48.496913 containerd[1623]: time="2025-09-11T00:32:48.496901558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:48.497306 containerd[1623]: time="2025-09-11T00:32:48.497198969Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id 
\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.708687845s" Sep 11 00:32:48.497306 containerd[1623]: time="2025-09-11T00:32:48.497288144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 11 00:32:48.498033 containerd[1623]: time="2025-09-11T00:32:48.497991497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 11 00:32:49.963316 containerd[1623]: time="2025-09-11T00:32:49.963033926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:49.971562 containerd[1623]: time="2025-09-11T00:32:49.971542553Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 11 00:32:49.977874 containerd[1623]: time="2025-09-11T00:32:49.977675553Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:49.982777 containerd[1623]: time="2025-09-11T00:32:49.982764214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:49.983207 containerd[1623]: time="2025-09-11T00:32:49.983190901Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.485180735s" Sep 11 00:32:49.983235 containerd[1623]: time="2025-09-11T00:32:49.983207867Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 11 00:32:49.983479 containerd[1623]: time="2025-09-11T00:32:49.983457052Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 11 00:32:51.029768 containerd[1623]: time="2025-09-11T00:32:51.029728478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:51.030407 containerd[1623]: time="2025-09-11T00:32:51.030190390Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 11 00:32:51.030700 containerd[1623]: time="2025-09-11T00:32:51.030684210Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:51.032115 containerd[1623]: time="2025-09-11T00:32:51.032100328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:51.032934 containerd[1623]: time="2025-09-11T00:32:51.032917097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.049117998s" Sep 11 00:32:51.032968 
containerd[1623]: time="2025-09-11T00:32:51.032935949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 11 00:32:51.033942 containerd[1623]: time="2025-09-11T00:32:51.033926132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 11 00:32:51.963574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595236658.mount: Deactivated successfully. Sep 11 00:32:52.427859 containerd[1623]: time="2025-09-11T00:32:52.427521708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:52.431172 containerd[1623]: time="2025-09-11T00:32:52.431158401Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 11 00:32:52.439261 containerd[1623]: time="2025-09-11T00:32:52.439248130Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:52.442588 containerd[1623]: time="2025-09-11T00:32:52.442576253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:52.442774 containerd[1623]: time="2025-09-11T00:32:52.442756250Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.408814085s" Sep 11 00:32:52.442801 containerd[1623]: time="2025-09-11T00:32:52.442775713Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 11 00:32:52.443117 containerd[1623]: time="2025-09-11T00:32:52.443059976Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 11 00:32:52.937661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531929140.mount: Deactivated successfully. Sep 11 00:32:53.801337 containerd[1623]: time="2025-09-11T00:32:53.801308973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:53.814615 containerd[1623]: time="2025-09-11T00:32:53.814590081Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 11 00:32:53.857325 containerd[1623]: time="2025-09-11T00:32:53.857255623Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:53.871122 containerd[1623]: time="2025-09-11T00:32:53.871084740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:53.872361 containerd[1623]: time="2025-09-11T00:32:53.872086207Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.42900963s" Sep 11 00:32:53.872361 containerd[1623]: time="2025-09-11T00:32:53.872123463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 11 00:32:53.872528 containerd[1623]: time="2025-09-11T00:32:53.872439278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 11 00:32:54.391849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769546021.mount: Deactivated successfully. Sep 11 00:32:54.393606 containerd[1623]: time="2025-09-11T00:32:54.393572147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:54.394037 containerd[1623]: time="2025-09-11T00:32:54.393954485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 11 00:32:54.394089 containerd[1623]: time="2025-09-11T00:32:54.394079491Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:54.395049 containerd[1623]: time="2025-09-11T00:32:54.395037460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 11 00:32:54.395798 containerd[1623]: time="2025-09-11T00:32:54.395458969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 522.989581ms" Sep 11 00:32:54.395798 containerd[1623]: time="2025-09-11T00:32:54.395475995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns 
image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 11 00:32:54.395940 containerd[1623]: time="2025-09-11T00:32:54.395930465Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 11 00:32:54.909011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887406584.mount: Deactivated successfully. Sep 11 00:32:57.689326 containerd[1623]: time="2025-09-11T00:32:57.688722174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:57.690451 containerd[1623]: time="2025-09-11T00:32:57.690312868Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 11 00:32:57.691573 containerd[1623]: time="2025-09-11T00:32:57.691558008Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:57.693647 containerd[1623]: time="2025-09-11T00:32:57.693629294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:32:57.694475 containerd[1623]: time="2025-09-11T00:32:57.694453534Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.298425636s" Sep 11 00:32:57.694536 containerd[1623]: time="2025-09-11T00:32:57.694524207Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 11 00:32:57.863466 systemd[1]: kubelet.service: 
Scheduled restart job, restart counter is at 4. Sep 11 00:32:57.864897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:32:58.429473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:58.436570 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:32:58.471743 kubelet[2419]: E0911 00:32:58.471705 2419 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:32:58.473957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:32:58.474080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:32:58.474605 systemd[1]: kubelet.service: Consumed 103ms CPU time, 115.3M memory peak. Sep 11 00:32:58.857433 update_engine[1604]: I20250911 00:32:58.857335 1604 update_attempter.cc:509] Updating boot flags... Sep 11 00:32:59.975683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:32:59.975925 systemd[1]: kubelet.service: Consumed 103ms CPU time, 115.3M memory peak. Sep 11 00:32:59.978390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:00.002743 systemd[1]: Reload requested from client PID 2449 ('systemctl') (unit session-9.scope)... Sep 11 00:33:00.002765 systemd[1]: Reloading... Sep 11 00:33:00.072348 zram_generator::config[2492]: No configuration found. Sep 11 00:33:00.143969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
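The transient mount units in the pull phase above (e.g. `var-lib-containerd-tmpmounts-containerd\x2dmount1595236658.mount`) are named with systemd's path-escaping rules: the leading `/` is dropped, remaining `/` separators become `-`, and unsafe characters (including `-` itself) become `\xNN` hex escapes. A minimal re-implementation for illustration — not systemd's actual code, and the safe-character set here is a simplification of what `systemd-escape --path` accepts:

```python
def systemd_escape_path(path: str) -> str:
    """Escape a filesystem path into a systemd unit-name prefix.

    Approximates `systemd-escape --path`: strip the leading '/', turn the
    remaining '/' separators into '-', and hex-escape bytes outside a small
    safe set (so '-' itself becomes '\\x2d').
    """
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":  # simplified safe set
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# The containerd tmpmount path behind the unit name logged above:
print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1595236658"))
# var-lib-containerd-tmpmounts-containerd\x2dmount1595236658
```

systemd then appends the unit-type suffix (`.mount` here) to the escaped prefix.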
Sep 11 00:33:00.153066 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:33:00.221412 systemd[1]: Reloading finished in 218 ms. Sep 11 00:33:00.259410 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 11 00:33:00.259468 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 11 00:33:00.259655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:00.261459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:00.655917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:00.665610 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:33:00.748839 kubelet[2560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:33:00.748839 kubelet[2560]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:33:00.748839 kubelet[2560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
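The earlier crash loop comes from kubelet starting before `/var/lib/kubelet/config.yaml` exists (kubeadm writes that file during `kubeadm init`), and the deprecation warnings just above ask for flags such as `--container-runtime-endpoint` and `--volume-plugin-dir` to move into that same file. A minimal, illustrative `KubeletConfiguration` covering those flags — the field values are assumptions consistent with this log (systemd cgroup driver, containerd, the flexvolume path kubelet recreates below), not a dump of the real file:

```yaml
# /var/lib/kubelet/config.yaml -- normally generated by kubeadm init
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches the CRI-reported driver logged below
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```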
Sep 11 00:33:00.749074 kubelet[2560]: I0911 00:33:00.748883 2560 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:33:00.967759 kubelet[2560]: I0911 00:33:00.967669 2560 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:33:00.967759 kubelet[2560]: I0911 00:33:00.967694 2560 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:33:00.968117 kubelet[2560]: I0911 00:33:00.968099 2560 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:33:01.169051 kubelet[2560]: E0911 00:33:01.169020 2560 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:01.169928 kubelet[2560]: I0911 00:33:01.169840 2560 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:33:01.183786 kubelet[2560]: I0911 00:33:01.183659 2560 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:33:01.188536 kubelet[2560]: I0911 00:33:01.188328 2560 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:33:01.196935 kubelet[2560]: I0911 00:33:01.196890 2560 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:33:01.197154 kubelet[2560]: I0911 00:33:01.197028 2560 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:33:01.201121 kubelet[2560]: I0911 00:33:01.200955 2560 topology_manager.go:138] "Creating topology manager with none policy" 
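The `HardEvictionThresholds` in the NodeConfig above mix an absolute quantity (`memory.available` < 100Mi) with percentage thresholds (e.g. `nodefs.available` < 10%). A toy evaluation of that threshold semantics, with made-up node stats — an illustration only, not kubelet's eviction manager:

```python
# Hard eviction thresholds from the NodeConfig logged above.
THRESHOLDS = {
    "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """True if the signal has fallen below its hard eviction threshold."""
    t = THRESHOLDS[signal]
    limit = t["quantity"] if "quantity" in t else t["percentage"] * capacity
    return available < limit

# Hypothetical node: 4Gi RAM with only 80Mi free -> memory threshold breached.
print(breached("memory.available", 80 * 1024**2, 4 * 1024**3))    # True
# 30Gi free of 100Gi nodefs -> 30% available, well above the 10% floor.
print(breached("nodefs.available", 30 * 1024**3, 100 * 1024**3))  # False
```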
Sep 11 00:33:01.201121 kubelet[2560]: I0911 00:33:01.200976 2560 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:33:01.202071 kubelet[2560]: I0911 00:33:01.202060 2560 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:01.208335 kubelet[2560]: I0911 00:33:01.208323 2560 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:33:01.208406 kubelet[2560]: I0911 00:33:01.208400 2560 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:33:01.212013 kubelet[2560]: I0911 00:33:01.211999 2560 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:33:01.212263 kubelet[2560]: I0911 00:33:01.212074 2560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:33:01.215760 kubelet[2560]: W0911 00:33:01.215712 2560 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Sep 11 00:33:01.215804 kubelet[2560]: E0911 00:33:01.215769 2560 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:01.216789 kubelet[2560]: W0911 00:33:01.216704 2560 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Sep 11 00:33:01.216789 kubelet[2560]: E0911 00:33:01.216749 2560 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:01.218676 kubelet[2560]: I0911 00:33:01.218518 2560 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:33:01.222565 kubelet[2560]: I0911 00:33:01.222438 2560 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:33:01.228050 kubelet[2560]: W0911 00:33:01.228027 2560 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 11 00:33:01.231020 kubelet[2560]: I0911 00:33:01.230819 2560 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:33:01.231020 kubelet[2560]: I0911 00:33:01.230848 2560 server.go:1287] "Started kubelet" Sep 11 00:33:01.241868 kubelet[2560]: I0911 00:33:01.241312 2560 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:33:01.245590 kubelet[2560]: I0911 00:33:01.245163 2560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:33:01.245590 kubelet[2560]: I0911 00:33:01.245507 2560 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:33:01.251104 kubelet[2560]: I0911 00:33:01.250883 2560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:33:01.259415 kubelet[2560]: E0911 00:33:01.249383 2560 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864131fa5f9579f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:33:01.230831519 +0000 UTC m=+0.563094756,LastTimestamp:2025-09-11 00:33:01.230831519 +0000 UTC m=+0.563094756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 00:33:01.260391 kubelet[2560]: I0911 00:33:01.260359 2560 server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:33:01.261225 kubelet[2560]: I0911 00:33:01.261213 2560 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:33:01.262719 kubelet[2560]: I0911 00:33:01.262704 2560 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:33:01.262862 kubelet[2560]: E0911 00:33:01.262848 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:01.264327 kubelet[2560]: I0911 00:33:01.264069 2560 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:33:01.264327 kubelet[2560]: I0911 00:33:01.264169 2560 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:33:01.265875 kubelet[2560]: W0911 00:33:01.265130 2560 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Sep 11 00:33:01.265875 kubelet[2560]: E0911 00:33:01.265172 2560 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:01.265875 kubelet[2560]: E0911 00:33:01.265209 2560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" Sep 11 00:33:01.267213 kubelet[2560]: I0911 00:33:01.267201 2560 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:33:01.267356 kubelet[2560]: I0911 00:33:01.267343 2560 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:33:01.272099 kubelet[2560]: E0911 00:33:01.272080 2560 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:33:01.272318 kubelet[2560]: I0911 00:33:01.272284 2560 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:33:01.276996 kubelet[2560]: I0911 00:33:01.276005 2560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:33:01.276996 kubelet[2560]: I0911 00:33:01.276807 2560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 00:33:01.276996 kubelet[2560]: I0911 00:33:01.276819 2560 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:33:01.276996 kubelet[2560]: I0911 00:33:01.276832 2560 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 11 00:33:01.276996 kubelet[2560]: I0911 00:33:01.276836 2560 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:33:01.276996 kubelet[2560]: E0911 00:33:01.276864 2560 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:33:01.281409 kubelet[2560]: W0911 00:33:01.281376 2560 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Sep 11 00:33:01.281500 kubelet[2560]: E0911 00:33:01.281423 2560 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:01.297966 kubelet[2560]: I0911 00:33:01.297946 2560 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:33:01.297966 kubelet[2560]: I0911 00:33:01.297959 2560 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:33:01.298096 kubelet[2560]: I0911 00:33:01.297983 2560 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:01.299061 kubelet[2560]: I0911 00:33:01.299048 2560 policy_none.go:49] "None policy: Start" Sep 11 00:33:01.299061 kubelet[2560]: I0911 00:33:01.299061 2560 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:33:01.299121 kubelet[2560]: I0911 00:33:01.299066 2560 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:33:01.302706 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 11 00:33:01.310473 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:33:01.320680 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:33:01.321608 kubelet[2560]: I0911 00:33:01.321592 2560 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:33:01.322735 kubelet[2560]: I0911 00:33:01.322390 2560 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:33:01.322735 kubelet[2560]: I0911 00:33:01.322402 2560 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:33:01.322735 kubelet[2560]: I0911 00:33:01.322652 2560 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:33:01.323981 kubelet[2560]: E0911 00:33:01.323968 2560 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:33:01.324024 kubelet[2560]: E0911 00:33:01.324002 2560 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:33:01.394940 systemd[1]: Created slice kubepods-burstable-pod5ac5b60d7fc5da40ba6cd7eae2579d86.slice - libcontainer container kubepods-burstable-pod5ac5b60d7fc5da40ba6cd7eae2579d86.slice. Sep 11 00:33:01.414121 kubelet[2560]: E0911 00:33:01.414049 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:01.417153 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
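With the systemd cgroup driver, kubelet parents pod cgroups under per-QoS slices, which is what the `Created slice` lines above show: `kubepods.slice`, `kubepods-besteffort.slice`, `kubepods-burstable.slice`, and then one `kubepods-burstable-pod<uid>.slice` per static pod. A sketch of that naming scheme — my own helper, not kubelet code; the Guaranteed case (no intermediate QoS slice) is an assumption from kubelet's documented cgroup layout:

```python
def pod_slice(qos: str, pod_uid: str) -> str:
    """Build the systemd slice name kubelet uses for a pod's cgroup.

    Burstable and BestEffort pods get an intermediate per-QoS slice, as in
    the 'Created slice' entries above; Guaranteed pods are assumed to sit
    directly under kubepods.slice.
    """
    qos = qos.lower()
    if qos == "guaranteed":
        return f"kubepods-pod{pod_uid}.slice"
    if qos in ("burstable", "besteffort"):
        return f"kubepods-{qos}-pod{pod_uid}.slice"
    raise ValueError(f"unknown QoS class: {qos}")

# The kube-apiserver static pod from the log above (Burstable):
print(pod_slice("Burstable", "5ac5b60d7fc5da40ba6cd7eae2579d86"))
# kubepods-burstable-pod5ac5b60d7fc5da40ba6cd7eae2579d86.slice
```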
Sep 11 00:33:01.419038 kubelet[2560]: E0911 00:33:01.418995 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:01.420811 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 11 00:33:01.421916 kubelet[2560]: E0911 00:33:01.421898 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:01.423625 kubelet[2560]: I0911 00:33:01.423611 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:33:01.423829 kubelet[2560]: E0911 00:33:01.423814 2560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Sep 11 00:33:01.465191 kubelet[2560]: I0911 00:33:01.465139 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ac5b60d7fc5da40ba6cd7eae2579d86-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ac5b60d7fc5da40ba6cd7eae2579d86\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:01.465191 kubelet[2560]: I0911 00:33:01.465193 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:01.465311 kubelet[2560]: I0911 00:33:01.465206 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:01.465311 kubelet[2560]: I0911 00:33:01.465217 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:01.465311 kubelet[2560]: I0911 00:33:01.465227 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:01.465311 kubelet[2560]: I0911 00:33:01.465235 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ac5b60d7fc5da40ba6cd7eae2579d86-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ac5b60d7fc5da40ba6cd7eae2579d86\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:01.465311 kubelet[2560]: I0911 00:33:01.465254 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ac5b60d7fc5da40ba6cd7eae2579d86-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ac5b60d7fc5da40ba6cd7eae2579d86\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:01.465413 kubelet[2560]: I0911 00:33:01.465266 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:01.465413 kubelet[2560]: I0911 00:33:01.465274 2560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:33:01.465704 kubelet[2560]: E0911 00:33:01.465684 2560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" Sep 11 00:33:01.625270 kubelet[2560]: I0911 00:33:01.625247 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:33:01.625572 kubelet[2560]: E0911 00:33:01.625551 2560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Sep 11 00:33:01.715663 containerd[1623]: time="2025-09-11T00:33:01.715641143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ac5b60d7fc5da40ba6cd7eae2579d86,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:01.721340 containerd[1623]: time="2025-09-11T00:33:01.721073878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:01.722923 containerd[1623]: time="2025-09-11T00:33:01.722904418Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:01.803190 containerd[1623]: time="2025-09-11T00:33:01.803162983Z" level=info msg="connecting to shim 4d3c04b3b64fbbc4b49f2f1b2a9b12e2e02e407cd165ab2be5231bbc493cc098" address="unix:///run/containerd/s/9cbf9ebccbecf13f9eae62d94166ca3c57e5bf307dbe40be5dc13ff7d52772d6" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:01.804733 containerd[1623]: time="2025-09-11T00:33:01.804708105Z" level=info msg="connecting to shim 172ea450eafd7ceeb6f2de4280d7329f2d3ebd03e4f4156ba7ee0d7b2d2a33de" address="unix:///run/containerd/s/fa063d37521e58ef16c97b0c40c770f23f42dd5d0d775a92b0928bc0b91216be" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:01.805140 containerd[1623]: time="2025-09-11T00:33:01.805122445Z" level=info msg="connecting to shim 334f97bb326ce4ac131a7c57e6a0e01dcc07d2da313993c7cc5343b8a7dc05c5" address="unix:///run/containerd/s/71c22c0ed8fa0f4b754cac0ebd7968783f365105bec0108f3f517ea3d53de40d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:01.866274 kubelet[2560]: E0911 00:33:01.866241 2560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" Sep 11 00:33:01.881469 systemd[1]: Started cri-containerd-334f97bb326ce4ac131a7c57e6a0e01dcc07d2da313993c7cc5343b8a7dc05c5.scope - libcontainer container 334f97bb326ce4ac131a7c57e6a0e01dcc07d2da313993c7cc5343b8a7dc05c5. Sep 11 00:33:01.887454 systemd[1]: Started cri-containerd-172ea450eafd7ceeb6f2de4280d7329f2d3ebd03e4f4156ba7ee0d7b2d2a33de.scope - libcontainer container 172ea450eafd7ceeb6f2de4280d7329f2d3ebd03e4f4156ba7ee0d7b2d2a33de. 
Sep 11 00:33:01.889147 systemd[1]: Started cri-containerd-4d3c04b3b64fbbc4b49f2f1b2a9b12e2e02e407cd165ab2be5231bbc493cc098.scope - libcontainer container 4d3c04b3b64fbbc4b49f2f1b2a9b12e2e02e407cd165ab2be5231bbc493cc098. Sep 11 00:33:01.931664 containerd[1623]: time="2025-09-11T00:33:01.931581414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"334f97bb326ce4ac131a7c57e6a0e01dcc07d2da313993c7cc5343b8a7dc05c5\"" Sep 11 00:33:01.933782 containerd[1623]: time="2025-09-11T00:33:01.933630130Z" level=info msg="CreateContainer within sandbox \"334f97bb326ce4ac131a7c57e6a0e01dcc07d2da313993c7cc5343b8a7dc05c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:33:01.938653 containerd[1623]: time="2025-09-11T00:33:01.938631357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ac5b60d7fc5da40ba6cd7eae2579d86,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d3c04b3b64fbbc4b49f2f1b2a9b12e2e02e407cd165ab2be5231bbc493cc098\"" Sep 11 00:33:01.941548 containerd[1623]: time="2025-09-11T00:33:01.941525108Z" level=info msg="CreateContainer within sandbox \"4d3c04b3b64fbbc4b49f2f1b2a9b12e2e02e407cd165ab2be5231bbc493cc098\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:33:01.945629 containerd[1623]: time="2025-09-11T00:33:01.945608217Z" level=info msg="Container 9628b352455eea29ca409b58fff1769dcf6a31178df0afb824b323e557e4a0a7: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:01.948285 containerd[1623]: time="2025-09-11T00:33:01.948269093Z" level=info msg="Container bd7c3da91895cf282608d91d1c396fa2c0ffd7d12398e2fbee6ad3d9e4e576c8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:01.950933 containerd[1623]: time="2025-09-11T00:33:01.950881090Z" level=info msg="CreateContainer within sandbox 
\"334f97bb326ce4ac131a7c57e6a0e01dcc07d2da313993c7cc5343b8a7dc05c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9628b352455eea29ca409b58fff1769dcf6a31178df0afb824b323e557e4a0a7\"" Sep 11 00:33:01.952098 containerd[1623]: time="2025-09-11T00:33:01.951868615Z" level=info msg="StartContainer for \"9628b352455eea29ca409b58fff1769dcf6a31178df0afb824b323e557e4a0a7\"" Sep 11 00:33:01.952657 containerd[1623]: time="2025-09-11T00:33:01.952638655Z" level=info msg="CreateContainer within sandbox \"4d3c04b3b64fbbc4b49f2f1b2a9b12e2e02e407cd165ab2be5231bbc493cc098\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd7c3da91895cf282608d91d1c396fa2c0ffd7d12398e2fbee6ad3d9e4e576c8\"" Sep 11 00:33:01.953511 containerd[1623]: time="2025-09-11T00:33:01.953499317Z" level=info msg="StartContainer for \"bd7c3da91895cf282608d91d1c396fa2c0ffd7d12398e2fbee6ad3d9e4e576c8\"" Sep 11 00:33:01.956979 containerd[1623]: time="2025-09-11T00:33:01.956844550Z" level=info msg="connecting to shim bd7c3da91895cf282608d91d1c396fa2c0ffd7d12398e2fbee6ad3d9e4e576c8" address="unix:///run/containerd/s/9cbf9ebccbecf13f9eae62d94166ca3c57e5bf307dbe40be5dc13ff7d52772d6" protocol=ttrpc version=3 Sep 11 00:33:01.957691 containerd[1623]: time="2025-09-11T00:33:01.957546430Z" level=info msg="connecting to shim 9628b352455eea29ca409b58fff1769dcf6a31178df0afb824b323e557e4a0a7" address="unix:///run/containerd/s/71c22c0ed8fa0f4b754cac0ebd7968783f365105bec0108f3f517ea3d53de40d" protocol=ttrpc version=3 Sep 11 00:33:01.964823 containerd[1623]: time="2025-09-11T00:33:01.964794491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"172ea450eafd7ceeb6f2de4280d7329f2d3ebd03e4f4156ba7ee0d7b2d2a33de\"" Sep 11 00:33:01.970948 containerd[1623]: time="2025-09-11T00:33:01.969589233Z" level=info msg="CreateContainer within sandbox 
\"172ea450eafd7ceeb6f2de4280d7329f2d3ebd03e4f4156ba7ee0d7b2d2a33de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:33:01.974435 systemd[1]: Started cri-containerd-9628b352455eea29ca409b58fff1769dcf6a31178df0afb824b323e557e4a0a7.scope - libcontainer container 9628b352455eea29ca409b58fff1769dcf6a31178df0afb824b323e557e4a0a7. Sep 11 00:33:01.978105 systemd[1]: Started cri-containerd-bd7c3da91895cf282608d91d1c396fa2c0ffd7d12398e2fbee6ad3d9e4e576c8.scope - libcontainer container bd7c3da91895cf282608d91d1c396fa2c0ffd7d12398e2fbee6ad3d9e4e576c8. Sep 11 00:33:01.979908 containerd[1623]: time="2025-09-11T00:33:01.979890698Z" level=info msg="Container efb05b5ab8ab1a1b25f6a39e237380679af3f42692e9acb6e12a35eadc603ab8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:01.984088 containerd[1623]: time="2025-09-11T00:33:01.983981009Z" level=info msg="CreateContainer within sandbox \"172ea450eafd7ceeb6f2de4280d7329f2d3ebd03e4f4156ba7ee0d7b2d2a33de\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"efb05b5ab8ab1a1b25f6a39e237380679af3f42692e9acb6e12a35eadc603ab8\"" Sep 11 00:33:01.984666 containerd[1623]: time="2025-09-11T00:33:01.984601666Z" level=info msg="StartContainer for \"efb05b5ab8ab1a1b25f6a39e237380679af3f42692e9acb6e12a35eadc603ab8\"" Sep 11 00:33:01.987270 containerd[1623]: time="2025-09-11T00:33:01.987253640Z" level=info msg="connecting to shim efb05b5ab8ab1a1b25f6a39e237380679af3f42692e9acb6e12a35eadc603ab8" address="unix:///run/containerd/s/fa063d37521e58ef16c97b0c40c770f23f42dd5d0d775a92b0928bc0b91216be" protocol=ttrpc version=3 Sep 11 00:33:02.013433 systemd[1]: Started cri-containerd-efb05b5ab8ab1a1b25f6a39e237380679af3f42692e9acb6e12a35eadc603ab8.scope - libcontainer container efb05b5ab8ab1a1b25f6a39e237380679af3f42692e9acb6e12a35eadc603ab8. 
Sep 11 00:33:02.029548 kubelet[2560]: I0911 00:33:02.029027 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:33:02.029548 kubelet[2560]: E0911 00:33:02.029243 2560 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" Sep 11 00:33:02.037506 containerd[1623]: time="2025-09-11T00:33:02.037480675Z" level=info msg="StartContainer for \"9628b352455eea29ca409b58fff1769dcf6a31178df0afb824b323e557e4a0a7\" returns successfully" Sep 11 00:33:02.047810 containerd[1623]: time="2025-09-11T00:33:02.047786392Z" level=info msg="StartContainer for \"bd7c3da91895cf282608d91d1c396fa2c0ffd7d12398e2fbee6ad3d9e4e576c8\" returns successfully" Sep 11 00:33:02.080447 containerd[1623]: time="2025-09-11T00:33:02.080426550Z" level=info msg="StartContainer for \"efb05b5ab8ab1a1b25f6a39e237380679af3f42692e9acb6e12a35eadc603ab8\" returns successfully" Sep 11 00:33:02.187509 kubelet[2560]: W0911 00:33:02.187385 2560 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Sep 11 00:33:02.187509 kubelet[2560]: E0911 00:33:02.187440 2560 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:02.298407 kubelet[2560]: E0911 00:33:02.298205 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:02.300449 kubelet[2560]: 
W0911 00:33:02.300402 2560 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Sep 11 00:33:02.300449 kubelet[2560]: E0911 00:33:02.300437 2560 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:02.301722 kubelet[2560]: E0911 00:33:02.301710 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:02.303268 kubelet[2560]: E0911 00:33:02.303248 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:02.518638 kubelet[2560]: W0911 00:33:02.518539 2560 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused Sep 11 00:33:02.518638 kubelet[2560]: E0911 00:33:02.518581 2560 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" Sep 11 00:33:02.831012 kubelet[2560]: I0911 00:33:02.830739 2560 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:33:03.305791 
kubelet[2560]: E0911 00:33:03.305755 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:03.306011 kubelet[2560]: E0911 00:33:03.305903 2560 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:33:03.366237 kubelet[2560]: E0911 00:33:03.366205 2560 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:33:03.430776 kubelet[2560]: I0911 00:33:03.430748 2560 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:33:03.430776 kubelet[2560]: E0911 00:33:03.430776 2560 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 00:33:03.438797 kubelet[2560]: E0911 00:33:03.438775 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:03.539402 kubelet[2560]: E0911 00:33:03.539369 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:03.639866 kubelet[2560]: E0911 00:33:03.639824 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:03.740174 kubelet[2560]: E0911 00:33:03.740142 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:03.841154 kubelet[2560]: E0911 00:33:03.841130 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:03.942157 kubelet[2560]: E0911 00:33:03.941913 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 
00:33:04.042407 kubelet[2560]: E0911 00:33:04.042376 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:04.143110 kubelet[2560]: E0911 00:33:04.143080 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:04.243930 kubelet[2560]: E0911 00:33:04.243633 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:04.344195 kubelet[2560]: E0911 00:33:04.344169 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:04.444953 kubelet[2560]: E0911 00:33:04.444889 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:04.545681 kubelet[2560]: E0911 00:33:04.545412 2560 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:04.664709 kubelet[2560]: I0911 00:33:04.664553 2560 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:33:04.673724 kubelet[2560]: I0911 00:33:04.673686 2560 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:33:04.679004 kubelet[2560]: I0911 00:33:04.678865 2560 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:33:05.217933 kubelet[2560]: I0911 00:33:05.217745 2560 apiserver.go:52] "Watching apiserver" Sep 11 00:33:05.264329 kubelet[2560]: I0911 00:33:05.264314 2560 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:33:05.273932 systemd[1]: Reload requested from client PID 2830 ('systemctl') (unit session-9.scope)... Sep 11 00:33:05.274147 systemd[1]: Reloading... 
Sep 11 00:33:05.350315 zram_generator::config[2878]: No configuration found. Sep 11 00:33:05.424700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 11 00:33:05.433471 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 11 00:33:05.517794 systemd[1]: Reloading finished in 243 ms. Sep 11 00:33:05.547314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:05.557987 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:33:05.558144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:05.558180 systemd[1]: kubelet.service: Consumed 484ms CPU time, 130.5M memory peak. Sep 11 00:33:05.561566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:33:06.201192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:33:06.211776 (kubelet)[2941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:33:06.276264 kubelet[2941]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:33:06.276481 kubelet[2941]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:33:06.276512 kubelet[2941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:33:06.276599 kubelet[2941]: I0911 00:33:06.276582 2941 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:33:06.280486 kubelet[2941]: I0911 00:33:06.280473 2941 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 00:33:06.280546 kubelet[2941]: I0911 00:33:06.280540 2941 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:33:06.280721 kubelet[2941]: I0911 00:33:06.280713 2941 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 00:33:06.281882 kubelet[2941]: I0911 00:33:06.281473 2941 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 11 00:33:06.287199 kubelet[2941]: I0911 00:33:06.287180 2941 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:33:06.290991 kubelet[2941]: I0911 00:33:06.290981 2941 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:33:06.293093 kubelet[2941]: I0911 00:33:06.293084 2941 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:33:06.321204 kubelet[2941]: I0911 00:33:06.321169 2941 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:33:06.321445 kubelet[2941]: I0911 00:33:06.321307 2941 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:33:06.321553 kubelet[2941]: I0911 00:33:06.321545 2941 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 11 00:33:06.321600 kubelet[2941]: I0911 00:33:06.321594 2941 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 00:33:06.321677 kubelet[2941]: I0911 00:33:06.321670 2941 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:06.321885 kubelet[2941]: I0911 00:33:06.321876 2941 kubelet.go:446] "Attempting to sync node with API server" Sep 11 00:33:06.321947 kubelet[2941]: I0911 00:33:06.321941 2941 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:33:06.322003 kubelet[2941]: I0911 00:33:06.321997 2941 kubelet.go:352] "Adding apiserver pod source" Sep 11 00:33:06.322048 kubelet[2941]: I0911 00:33:06.322042 2941 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:33:06.331270 kubelet[2941]: I0911 00:33:06.331226 2941 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:33:06.331668 kubelet[2941]: I0911 00:33:06.331646 2941 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 00:33:06.331972 kubelet[2941]: I0911 00:33:06.331957 2941 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:33:06.332009 kubelet[2941]: I0911 00:33:06.331982 2941 server.go:1287] "Started kubelet" Sep 11 00:33:06.341589 kubelet[2941]: I0911 00:33:06.341566 2941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:33:06.351273 kubelet[2941]: I0911 00:33:06.351226 2941 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:33:06.352236 kubelet[2941]: I0911 00:33:06.352221 2941 server.go:479] "Adding debug handlers to kubelet server" Sep 11 00:33:06.355535 kubelet[2941]: I0911 00:33:06.355281 2941 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:33:06.355749 kubelet[2941]: I0911 00:33:06.355739 2941 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:33:06.356218 kubelet[2941]: I0911 00:33:06.356161 2941 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:33:06.358868 kubelet[2941]: I0911 00:33:06.358858 2941 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:33:06.359151 kubelet[2941]: E0911 00:33:06.359125 2941 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:33:06.359672 kubelet[2941]: I0911 00:33:06.359661 2941 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 00:33:06.359803 kubelet[2941]: I0911 00:33:06.359796 2941 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:33:06.364533 kubelet[2941]: I0911 00:33:06.364520 2941 factory.go:221] Registration of the systemd container factory successfully Sep 11 00:33:06.364754 kubelet[2941]: I0911 00:33:06.364729 2941 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:33:06.384567 kubelet[2941]: I0911 00:33:06.384530 2941 factory.go:221] Registration of the containerd container factory successfully Sep 11 00:33:06.389510 kubelet[2941]: E0911 00:33:06.389483 2941 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:33:06.390820 kubelet[2941]: I0911 00:33:06.390796 2941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 00:33:06.391598 kubelet[2941]: I0911 00:33:06.391584 2941 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:33:06.391637 kubelet[2941]: I0911 00:33:06.391603 2941 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 00:33:06.391637 kubelet[2941]: I0911 00:33:06.391615 2941 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 00:33:06.391637 kubelet[2941]: I0911 00:33:06.391620 2941 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 00:33:06.391689 kubelet[2941]: E0911 00:33:06.391643 2941 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.423899 2941 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.423911 2941 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.423925 2941 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.424078 2941 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.424092 2941 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.424109 2941 policy_none.go:49] "None policy: Start" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.424116 2941 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.424122 2941 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:33:06.424599 kubelet[2941]: I0911 00:33:06.424226 2941 state_mem.go:75] "Updated machine memory state" Sep 11 00:33:06.444031 kubelet[2941]: I0911 00:33:06.444017 2941 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 00:33:06.444928 kubelet[2941]: I0911 
00:33:06.444920 2941 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 11 00:33:06.445460 kubelet[2941]: I0911 00:33:06.445440 2941 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 11 00:33:06.445771 kubelet[2941]: I0911 00:33:06.445747 2941 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 00:33:06.447069 kubelet[2941]: E0911 00:33:06.447055 2941 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 11 00:33:06.492963 kubelet[2941]: I0911 00:33:06.492901 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:33:06.493890 kubelet[2941]: I0911 00:33:06.492929 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:33:06.494038 kubelet[2941]: I0911 00:33:06.492982 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 11 00:33:06.514799 kubelet[2941]: E0911 00:33:06.514776 2941 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:33:06.514964 kubelet[2941]: E0911 00:33:06.514788 2941 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 11 00:33:06.515010 kubelet[2941]: E0911 00:33:06.514804 2941 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:33:06.547091 kubelet[2941]: I0911 00:33:06.547066 2941 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 11 00:33:06.560168 kubelet[2941]: I0911 00:33:06.560138 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ac5b60d7fc5da40ba6cd7eae2579d86-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ac5b60d7fc5da40ba6cd7eae2579d86\") " pod="kube-system/kube-apiserver-localhost"
Sep 11 00:33:06.560168 kubelet[2941]: I0911 00:33:06.560167 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ac5b60d7fc5da40ba6cd7eae2579d86-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ac5b60d7fc5da40ba6cd7eae2579d86\") " pod="kube-system/kube-apiserver-localhost"
Sep 11 00:33:06.560288 kubelet[2941]: I0911 00:33:06.560214 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:33:06.560288 kubelet[2941]: I0911 00:33:06.560232 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:33:06.560288 kubelet[2941]: I0911 00:33:06.560249 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ac5b60d7fc5da40ba6cd7eae2579d86-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ac5b60d7fc5da40ba6cd7eae2579d86\") " pod="kube-system/kube-apiserver-localhost"
Sep 11 00:33:06.560358 kubelet[2941]: I0911 00:33:06.560291 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:33:06.560358 kubelet[2941]: I0911 00:33:06.560325 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:33:06.560358 kubelet[2941]: I0911 00:33:06.560336 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 11 00:33:06.560358 kubelet[2941]: I0911 00:33:06.560344 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 11 00:33:06.562577 kubelet[2941]: I0911 00:33:06.562560 2941 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 11 00:33:06.562706 kubelet[2941]: I0911 00:33:06.562605 2941 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 11 00:33:07.323326 kubelet[2941]: I0911 00:33:07.323266 2941 apiserver.go:52] "Watching apiserver"
Sep 11 00:33:07.360136 kubelet[2941]: I0911 00:33:07.360100 2941 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:33:07.407702 kubelet[2941]: I0911 00:33:07.407680 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:33:07.420906 kubelet[2941]: E0911 00:33:07.420882 2941 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 11 00:33:07.456621 kubelet[2941]: I0911 00:33:07.456584 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.456569951 podStartE2EDuration="3.456569951s" podCreationTimestamp="2025-09-11 00:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:07.456427727 +0000 UTC m=+1.220601967" watchObservedRunningTime="2025-09-11 00:33:07.456569951 +0000 UTC m=+1.220744185"
Sep 11 00:33:07.489895 kubelet[2941]: I0911 00:33:07.489849 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.489832719 podStartE2EDuration="3.489832719s" podCreationTimestamp="2025-09-11 00:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:07.471109117 +0000 UTC m=+1.235283351" watchObservedRunningTime="2025-09-11 00:33:07.489832719 +0000 UTC m=+1.254006954"
Sep 11 00:33:07.505164 kubelet[2941]: I0911 00:33:07.504667 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.504655519 podStartE2EDuration="3.504655519s" podCreationTimestamp="2025-09-11 00:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:07.490001114 +0000 UTC m=+1.254175354" watchObservedRunningTime="2025-09-11 00:33:07.504655519 +0000 UTC m=+1.268829750"
Sep 11 00:33:11.583864 kubelet[2941]: I0911 00:33:11.583838 2941 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 11 00:33:11.584132 kubelet[2941]: I0911 00:33:11.584124 2941 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 11 00:33:11.584177 containerd[1623]: time="2025-09-11T00:33:11.584022633Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 11 00:33:12.216998 systemd[1]: Created slice kubepods-besteffort-podd7d49e5a_9d7b_4b16_98ec_7e5e57268687.slice - libcontainer container kubepods-besteffort-podd7d49e5a_9d7b_4b16_98ec_7e5e57268687.slice.
Sep 11 00:33:12.299544 kubelet[2941]: I0911 00:33:12.299516 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7d49e5a-9d7b-4b16-98ec-7e5e57268687-lib-modules\") pod \"kube-proxy-c62dz\" (UID: \"d7d49e5a-9d7b-4b16-98ec-7e5e57268687\") " pod="kube-system/kube-proxy-c62dz"
Sep 11 00:33:12.299697 kubelet[2941]: I0911 00:33:12.299684 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d7d49e5a-9d7b-4b16-98ec-7e5e57268687-kube-proxy\") pod \"kube-proxy-c62dz\" (UID: \"d7d49e5a-9d7b-4b16-98ec-7e5e57268687\") " pod="kube-system/kube-proxy-c62dz"
Sep 11 00:33:12.299805 kubelet[2941]: I0911 00:33:12.299757 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7d49e5a-9d7b-4b16-98ec-7e5e57268687-xtables-lock\") pod \"kube-proxy-c62dz\" (UID: \"d7d49e5a-9d7b-4b16-98ec-7e5e57268687\") " pod="kube-system/kube-proxy-c62dz"
Sep 11 00:33:12.299805 kubelet[2941]: I0911 00:33:12.299774 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwbs\" (UniqueName: \"kubernetes.io/projected/d7d49e5a-9d7b-4b16-98ec-7e5e57268687-kube-api-access-sjwbs\") pod \"kube-proxy-c62dz\" (UID: \"d7d49e5a-9d7b-4b16-98ec-7e5e57268687\") " pod="kube-system/kube-proxy-c62dz"
Sep 11 00:33:12.526465 containerd[1623]: time="2025-09-11T00:33:12.526112975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c62dz,Uid:d7d49e5a-9d7b-4b16-98ec-7e5e57268687,Namespace:kube-system,Attempt:0,}"
Sep 11 00:33:12.572657 containerd[1623]: time="2025-09-11T00:33:12.572626748Z" level=info msg="connecting to shim 59e945688367bb77435d07118f7f88d591ebebec317a7bab1a563ed3a1ca96fe" address="unix:///run/containerd/s/4ff758e55d07ff4b977d8512d8cf82cd88994ebc0b6e5e94b965aae3ff6a4aed" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:33:12.590420 systemd[1]: Started cri-containerd-59e945688367bb77435d07118f7f88d591ebebec317a7bab1a563ed3a1ca96fe.scope - libcontainer container 59e945688367bb77435d07118f7f88d591ebebec317a7bab1a563ed3a1ca96fe.
Sep 11 00:33:12.612053 containerd[1623]: time="2025-09-11T00:33:12.612027598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c62dz,Uid:d7d49e5a-9d7b-4b16-98ec-7e5e57268687,Namespace:kube-system,Attempt:0,} returns sandbox id \"59e945688367bb77435d07118f7f88d591ebebec317a7bab1a563ed3a1ca96fe\""
Sep 11 00:33:12.622983 containerd[1623]: time="2025-09-11T00:33:12.619026918Z" level=info msg="CreateContainer within sandbox \"59e945688367bb77435d07118f7f88d591ebebec317a7bab1a563ed3a1ca96fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 11 00:33:12.643431 systemd[1]: Created slice kubepods-besteffort-podaf528bbd_65ac_4bb8_81d4_1e22b781f9b8.slice - libcontainer container kubepods-besteffort-podaf528bbd_65ac_4bb8_81d4_1e22b781f9b8.slice.
Sep 11 00:33:12.650786 containerd[1623]: time="2025-09-11T00:33:12.650749265Z" level=info msg="Container 12134729bf576cc226e185fb992c5771e6f06d12a71f12bf7e7d5baa5eefa751: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:12.671533 containerd[1623]: time="2025-09-11T00:33:12.671510896Z" level=info msg="CreateContainer within sandbox \"59e945688367bb77435d07118f7f88d591ebebec317a7bab1a563ed3a1ca96fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12134729bf576cc226e185fb992c5771e6f06d12a71f12bf7e7d5baa5eefa751\""
Sep 11 00:33:12.672349 containerd[1623]: time="2025-09-11T00:33:12.672289551Z" level=info msg="StartContainer for \"12134729bf576cc226e185fb992c5771e6f06d12a71f12bf7e7d5baa5eefa751\""
Sep 11 00:33:12.673529 containerd[1623]: time="2025-09-11T00:33:12.673512056Z" level=info msg="connecting to shim 12134729bf576cc226e185fb992c5771e6f06d12a71f12bf7e7d5baa5eefa751" address="unix:///run/containerd/s/4ff758e55d07ff4b977d8512d8cf82cd88994ebc0b6e5e94b965aae3ff6a4aed" protocol=ttrpc version=3
Sep 11 00:33:12.691568 systemd[1]: Started cri-containerd-12134729bf576cc226e185fb992c5771e6f06d12a71f12bf7e7d5baa5eefa751.scope - libcontainer container 12134729bf576cc226e185fb992c5771e6f06d12a71f12bf7e7d5baa5eefa751.
Sep 11 00:33:12.701395 kubelet[2941]: I0911 00:33:12.701370 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af528bbd-65ac-4bb8-81d4-1e22b781f9b8-var-lib-calico\") pod \"tigera-operator-755d956888-trvf8\" (UID: \"af528bbd-65ac-4bb8-81d4-1e22b781f9b8\") " pod="tigera-operator/tigera-operator-755d956888-trvf8"
Sep 11 00:33:12.701658 kubelet[2941]: I0911 00:33:12.701647 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgkq6\" (UniqueName: \"kubernetes.io/projected/af528bbd-65ac-4bb8-81d4-1e22b781f9b8-kube-api-access-fgkq6\") pod \"tigera-operator-755d956888-trvf8\" (UID: \"af528bbd-65ac-4bb8-81d4-1e22b781f9b8\") " pod="tigera-operator/tigera-operator-755d956888-trvf8"
Sep 11 00:33:12.722651 containerd[1623]: time="2025-09-11T00:33:12.722625531Z" level=info msg="StartContainer for \"12134729bf576cc226e185fb992c5771e6f06d12a71f12bf7e7d5baa5eefa751\" returns successfully"
Sep 11 00:33:12.951389 containerd[1623]: time="2025-09-11T00:33:12.951286929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-trvf8,Uid:af528bbd-65ac-4bb8-81d4-1e22b781f9b8,Namespace:tigera-operator,Attempt:0,}"
Sep 11 00:33:12.961011 containerd[1623]: time="2025-09-11T00:33:12.960978980Z" level=info msg="connecting to shim 14db42cabfa496ef775924e4933032f00da05fff1b4ffa37fa57a7b00fe68cd4" address="unix:///run/containerd/s/9804d2d2b779ca47328ece9b1d2d1bb3606855cbe1e205478f0e85ce85089b99" namespace=k8s.io protocol=ttrpc version=3
Sep 11 00:33:12.980407 systemd[1]: Started cri-containerd-14db42cabfa496ef775924e4933032f00da05fff1b4ffa37fa57a7b00fe68cd4.scope - libcontainer container 14db42cabfa496ef775924e4933032f00da05fff1b4ffa37fa57a7b00fe68cd4.
Sep 11 00:33:13.014098 containerd[1623]: time="2025-09-11T00:33:13.014076815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-trvf8,Uid:af528bbd-65ac-4bb8-81d4-1e22b781f9b8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"14db42cabfa496ef775924e4933032f00da05fff1b4ffa37fa57a7b00fe68cd4\""
Sep 11 00:33:13.015807 containerd[1623]: time="2025-09-11T00:33:13.015247342Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 11 00:33:13.417740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345655740.mount: Deactivated successfully.
Sep 11 00:33:14.526292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541547661.mount: Deactivated successfully.
Sep 11 00:33:14.832605 kubelet[2941]: I0911 00:33:14.832511 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c62dz" podStartSLOduration=2.832500233 podStartE2EDuration="2.832500233s" podCreationTimestamp="2025-09-11 00:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:13.435323575 +0000 UTC m=+7.199497815" watchObservedRunningTime="2025-09-11 00:33:14.832500233 +0000 UTC m=+8.596674472"
Sep 11 00:33:15.206662 containerd[1623]: time="2025-09-11T00:33:15.206638495Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:15.207027 containerd[1623]: time="2025-09-11T00:33:15.207011307Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 11 00:33:15.207366 containerd[1623]: time="2025-09-11T00:33:15.207351258Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:15.208414 containerd[1623]: time="2025-09-11T00:33:15.208402940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:15.208782 containerd[1623]: time="2025-09-11T00:33:15.208761643Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.192975566s"
Sep 11 00:33:15.208782 containerd[1623]: time="2025-09-11T00:33:15.208779119Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 11 00:33:15.211688 containerd[1623]: time="2025-09-11T00:33:15.211673001Z" level=info msg="CreateContainer within sandbox \"14db42cabfa496ef775924e4933032f00da05fff1b4ffa37fa57a7b00fe68cd4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 11 00:33:15.217496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount705639960.mount: Deactivated successfully.
Sep 11 00:33:15.218720 containerd[1623]: time="2025-09-11T00:33:15.217746618Z" level=info msg="Container e7f9d19f906a480b0a2ba783fb649db1d04fba2fd75e597525140e792835ba4e: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:15.229593 containerd[1623]: time="2025-09-11T00:33:15.229569224Z" level=info msg="CreateContainer within sandbox \"14db42cabfa496ef775924e4933032f00da05fff1b4ffa37fa57a7b00fe68cd4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e7f9d19f906a480b0a2ba783fb649db1d04fba2fd75e597525140e792835ba4e\""
Sep 11 00:33:15.230427 containerd[1623]: time="2025-09-11T00:33:15.230130956Z" level=info msg="StartContainer for \"e7f9d19f906a480b0a2ba783fb649db1d04fba2fd75e597525140e792835ba4e\""
Sep 11 00:33:15.230741 containerd[1623]: time="2025-09-11T00:33:15.230723025Z" level=info msg="connecting to shim e7f9d19f906a480b0a2ba783fb649db1d04fba2fd75e597525140e792835ba4e" address="unix:///run/containerd/s/9804d2d2b779ca47328ece9b1d2d1bb3606855cbe1e205478f0e85ce85089b99" protocol=ttrpc version=3
Sep 11 00:33:15.259419 systemd[1]: Started cri-containerd-e7f9d19f906a480b0a2ba783fb649db1d04fba2fd75e597525140e792835ba4e.scope - libcontainer container e7f9d19f906a480b0a2ba783fb649db1d04fba2fd75e597525140e792835ba4e.
Sep 11 00:33:15.280377 containerd[1623]: time="2025-09-11T00:33:15.280173282Z" level=info msg="StartContainer for \"e7f9d19f906a480b0a2ba783fb649db1d04fba2fd75e597525140e792835ba4e\" returns successfully"
Sep 11 00:33:15.459148 kubelet[2941]: I0911 00:33:15.459039 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-trvf8" podStartSLOduration=1.264715539 podStartE2EDuration="3.459025709s" podCreationTimestamp="2025-09-11 00:33:12 +0000 UTC" firstStartedPulling="2025-09-11 00:33:13.014992166 +0000 UTC m=+6.779166397" lastFinishedPulling="2025-09-11 00:33:15.209302334 +0000 UTC m=+8.973476567" observedRunningTime="2025-09-11 00:33:15.458825721 +0000 UTC m=+9.222999964" watchObservedRunningTime="2025-09-11 00:33:15.459025709 +0000 UTC m=+9.223199944"
Sep 11 00:33:21.486916 sudo[1970]: pam_unix(sudo:session): session closed for user root
Sep 11 00:33:21.488543 sshd[1969]: Connection closed by 139.178.89.65 port 57194
Sep 11 00:33:21.489547 sshd-session[1967]: pam_unix(sshd:session): session closed for user core
Sep 11 00:33:21.491610 systemd[1]: sshd@6-139.178.70.106:22-139.178.89.65:57194.service: Deactivated successfully.
Sep 11 00:33:21.493099 systemd[1]: session-9.scope: Deactivated successfully.
Sep 11 00:33:21.494779 systemd[1]: session-9.scope: Consumed 3.089s CPU time, 151.9M memory peak.
Sep 11 00:33:21.496307 systemd-logind[1603]: Session 9 logged out. Waiting for processes to exit.
Sep 11 00:33:21.497607 systemd-logind[1603]: Removed session 9.
Sep 11 00:33:23.344346 kubelet[2941]: W0911 00:33:23.344251 2941 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Sep 11 00:33:23.344346 kubelet[2941]: E0911 00:33:23.344280 2941 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 11 00:33:23.344723 systemd[1]: Created slice kubepods-besteffort-podca749a37_3934_4338_a0b4_09af90e7bf1e.slice - libcontainer container kubepods-besteffort-podca749a37_3934_4338_a0b4_09af90e7bf1e.slice.
Sep 11 00:33:23.345555 kubelet[2941]: I0911 00:33:23.345408 2941 status_manager.go:890] "Failed to get status for pod" podUID="ca749a37-3934-4338-a0b4-09af90e7bf1e" pod="calico-system/calico-typha-7bf658f584-6g7kg" err="pods \"calico-typha-7bf658f584-6g7kg\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object"
Sep 11 00:33:23.345555 kubelet[2941]: W0911 00:33:23.345450 2941 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Sep 11 00:33:23.345555 kubelet[2941]: E0911 00:33:23.345463 2941 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 11 00:33:23.347182 kubelet[2941]: W0911 00:33:23.347039 2941 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Sep 11 00:33:23.347182 kubelet[2941]: E0911 00:33:23.347133 2941 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Sep 11 00:33:23.369356 kubelet[2941]: I0911 00:33:23.369327 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtcr5\" (UniqueName: \"kubernetes.io/projected/ca749a37-3934-4338-a0b4-09af90e7bf1e-kube-api-access-wtcr5\") pod \"calico-typha-7bf658f584-6g7kg\" (UID: \"ca749a37-3934-4338-a0b4-09af90e7bf1e\") " pod="calico-system/calico-typha-7bf658f584-6g7kg"
Sep 11 00:33:23.369356 kubelet[2941]: I0911 00:33:23.369354 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca749a37-3934-4338-a0b4-09af90e7bf1e-typha-certs\") pod \"calico-typha-7bf658f584-6g7kg\" (UID: \"ca749a37-3934-4338-a0b4-09af90e7bf1e\") " pod="calico-system/calico-typha-7bf658f584-6g7kg"
Sep 11 00:33:23.369542 kubelet[2941]: I0911 00:33:23.369365 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca749a37-3934-4338-a0b4-09af90e7bf1e-tigera-ca-bundle\") pod \"calico-typha-7bf658f584-6g7kg\" (UID: \"ca749a37-3934-4338-a0b4-09af90e7bf1e\") " pod="calico-system/calico-typha-7bf658f584-6g7kg"
Sep 11 00:33:23.775143 systemd[1]: Created slice kubepods-besteffort-pod134e502c_a8c1_4393_84c9_665a93059d9b.slice - libcontainer container kubepods-besteffort-pod134e502c_a8c1_4393_84c9_665a93059d9b.slice.
Sep 11 00:33:23.873896 kubelet[2941]: I0911 00:33:23.872736 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-cni-bin-dir\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.873896 kubelet[2941]: I0911 00:33:23.872766 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-cni-log-dir\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.873896 kubelet[2941]: I0911 00:33:23.872780 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/134e502c-a8c1-4393-84c9-665a93059d9b-node-certs\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.873896 kubelet[2941]: I0911 00:33:23.872791 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-var-run-calico\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.873896 kubelet[2941]: I0911 00:33:23.872802 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-var-lib-calico\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.874084 kubelet[2941]: I0911 00:33:23.872815 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/134e502c-a8c1-4393-84c9-665a93059d9b-tigera-ca-bundle\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.874084 kubelet[2941]: I0911 00:33:23.872837 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9n4\" (UniqueName: \"kubernetes.io/projected/134e502c-a8c1-4393-84c9-665a93059d9b-kube-api-access-xg9n4\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.874084 kubelet[2941]: I0911 00:33:23.872849 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-policysync\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.874084 kubelet[2941]: I0911 00:33:23.872860 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-flexvol-driver-host\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.874084 kubelet[2941]: I0911 00:33:23.872879 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-xtables-lock\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.874179 kubelet[2941]: I0911 00:33:23.872888 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-lib-modules\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.874179 kubelet[2941]: I0911 00:33:23.872897 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/134e502c-a8c1-4393-84c9-665a93059d9b-cni-net-dir\") pod \"calico-node-vv6pf\" (UID: \"134e502c-a8c1-4393-84c9-665a93059d9b\") " pod="calico-system/calico-node-vv6pf"
Sep 11 00:33:23.986642 kubelet[2941]: E0911 00:33:23.986609 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:23.986642 kubelet[2941]: W0911 00:33:23.986635 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:23.986792 kubelet[2941]: E0911 00:33:23.986665 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.048454 kubelet[2941]: E0911 00:33:24.048362 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zknv" podUID="9e09e6c7-d57e-4653-9635-85c8bb6db9f7"
Sep 11 00:33:24.058410 kubelet[2941]: E0911 00:33:24.058384 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.058410 kubelet[2941]: W0911 00:33:24.058404 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.058525 kubelet[2941]: E0911 00:33:24.058420 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.058633 kubelet[2941]: E0911 00:33:24.058620 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.058660 kubelet[2941]: W0911 00:33:24.058631 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.058679 kubelet[2941]: E0911 00:33:24.058661 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.058868 kubelet[2941]: E0911 00:33:24.058854 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.058868 kubelet[2941]: W0911 00:33:24.058866 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.059918 kubelet[2941]: E0911 00:33:24.058873 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.059918 kubelet[2941]: E0911 00:33:24.059159 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.059918 kubelet[2941]: W0911 00:33:24.059168 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.059918 kubelet[2941]: E0911 00:33:24.059175 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.059918 kubelet[2941]: E0911 00:33:24.059733 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.059918 kubelet[2941]: W0911 00:33:24.059743 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.059918 kubelet[2941]: E0911 00:33:24.059752 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.060168 kubelet[2941]: E0911 00:33:24.060152 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.060168 kubelet[2941]: W0911 00:33:24.060165 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.060217 kubelet[2941]: E0911 00:33:24.060176 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.060541 kubelet[2941]: E0911 00:33:24.060526 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.060541 kubelet[2941]: W0911 00:33:24.060537 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.060643 kubelet[2941]: E0911 00:33:24.060547 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.061446 kubelet[2941]: E0911 00:33:24.061358 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.061446 kubelet[2941]: W0911 00:33:24.061371 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.061446 kubelet[2941]: E0911 00:33:24.061383 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.061573 kubelet[2941]: E0911 00:33:24.061565 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.061726 kubelet[2941]: W0911 00:33:24.061620 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.061726 kubelet[2941]: E0911 00:33:24.061630 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.061874 kubelet[2941]: E0911 00:33:24.061857 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.061981 kubelet[2941]: W0911 00:33:24.061911 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.062133 kubelet[2941]: E0911 00:33:24.061921 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.062318 kubelet[2941]: E0911 00:33:24.062265 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.062318 kubelet[2941]: W0911 00:33:24.062272 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.062318 kubelet[2941]: E0911 00:33:24.062278 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 11 00:33:24.062601 kubelet[2941]: E0911 00:33:24.062560 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 11 00:33:24.062601 kubelet[2941]: W0911 00:33:24.062568 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 11 00:33:24.062801 kubelet[2941]: E0911 00:33:24.062575 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 11 00:33:24.062900 kubelet[2941]: E0911 00:33:24.062894 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.063006 kubelet[2941]: W0911 00:33:24.062935 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.063006 kubelet[2941]: E0911 00:33:24.062947 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.063104 kubelet[2941]: E0911 00:33:24.063097 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.063199 kubelet[2941]: W0911 00:33:24.063136 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.063199 kubelet[2941]: E0911 00:33:24.063145 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.063391 kubelet[2941]: E0911 00:33:24.063293 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.063391 kubelet[2941]: W0911 00:33:24.063334 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.063391 kubelet[2941]: E0911 00:33:24.063343 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.063508 kubelet[2941]: E0911 00:33:24.063500 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.063549 kubelet[2941]: W0911 00:33:24.063543 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.063661 kubelet[2941]: E0911 00:33:24.063589 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.063737 kubelet[2941]: E0911 00:33:24.063730 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.063828 kubelet[2941]: W0911 00:33:24.063766 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.063828 kubelet[2941]: E0911 00:33:24.063774 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.063948 kubelet[2941]: E0911 00:33:24.063940 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.064041 kubelet[2941]: W0911 00:33:24.064034 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.064079 kubelet[2941]: E0911 00:33:24.064073 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.064260 kubelet[2941]: E0911 00:33:24.064225 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.064260 kubelet[2941]: W0911 00:33:24.064232 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.064260 kubelet[2941]: E0911 00:33:24.064238 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.064491 kubelet[2941]: E0911 00:33:24.064485 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.064533 kubelet[2941]: W0911 00:33:24.064525 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.064574 kubelet[2941]: E0911 00:33:24.064567 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.073993 kubelet[2941]: E0911 00:33:24.073973 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.073993 kubelet[2941]: W0911 00:33:24.073988 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.074179 kubelet[2941]: E0911 00:33:24.074001 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.074179 kubelet[2941]: I0911 00:33:24.074021 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcrsw\" (UniqueName: \"kubernetes.io/projected/9e09e6c7-d57e-4653-9635-85c8bb6db9f7-kube-api-access-xcrsw\") pod \"csi-node-driver-5zknv\" (UID: \"9e09e6c7-d57e-4653-9635-85c8bb6db9f7\") " pod="calico-system/csi-node-driver-5zknv" Sep 11 00:33:24.074179 kubelet[2941]: E0911 00:33:24.074133 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.074179 kubelet[2941]: W0911 00:33:24.074140 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.074179 kubelet[2941]: E0911 00:33:24.074151 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.074179 kubelet[2941]: I0911 00:33:24.074169 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9e09e6c7-d57e-4653-9635-85c8bb6db9f7-registration-dir\") pod \"csi-node-driver-5zknv\" (UID: \"9e09e6c7-d57e-4653-9635-85c8bb6db9f7\") " pod="calico-system/csi-node-driver-5zknv" Sep 11 00:33:24.074427 kubelet[2941]: E0911 00:33:24.074266 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.074427 kubelet[2941]: W0911 00:33:24.074274 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.074427 kubelet[2941]: E0911 00:33:24.074285 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.074427 kubelet[2941]: I0911 00:33:24.074306 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9e09e6c7-d57e-4653-9635-85c8bb6db9f7-socket-dir\") pod \"csi-node-driver-5zknv\" (UID: \"9e09e6c7-d57e-4653-9635-85c8bb6db9f7\") " pod="calico-system/csi-node-driver-5zknv" Sep 11 00:33:24.074635 kubelet[2941]: E0911 00:33:24.074605 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.074635 kubelet[2941]: W0911 00:33:24.074613 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.074635 kubelet[2941]: E0911 00:33:24.074625 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.074757 kubelet[2941]: E0911 00:33:24.074711 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.074757 kubelet[2941]: W0911 00:33:24.074717 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.074757 kubelet[2941]: E0911 00:33:24.074727 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.074847 kubelet[2941]: E0911 00:33:24.074818 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.074847 kubelet[2941]: W0911 00:33:24.074823 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.074847 kubelet[2941]: E0911 00:33:24.074828 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.074847 kubelet[2941]: I0911 00:33:24.074837 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e09e6c7-d57e-4653-9635-85c8bb6db9f7-kubelet-dir\") pod \"csi-node-driver-5zknv\" (UID: \"9e09e6c7-d57e-4653-9635-85c8bb6db9f7\") " pod="calico-system/csi-node-driver-5zknv" Sep 11 00:33:24.075015 kubelet[2941]: E0911 00:33:24.074918 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.075015 kubelet[2941]: W0911 00:33:24.074924 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.075015 kubelet[2941]: E0911 00:33:24.074933 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.075079 kubelet[2941]: E0911 00:33:24.075021 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.075079 kubelet[2941]: W0911 00:33:24.075026 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.075079 kubelet[2941]: E0911 00:33:24.075033 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.075235 kubelet[2941]: E0911 00:33:24.075138 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.075235 kubelet[2941]: W0911 00:33:24.075143 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.075235 kubelet[2941]: E0911 00:33:24.075150 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.075400 kubelet[2941]: E0911 00:33:24.075343 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.075400 kubelet[2941]: W0911 00:33:24.075353 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.075400 kubelet[2941]: E0911 00:33:24.075362 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.075548 kubelet[2941]: E0911 00:33:24.075543 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.075650 kubelet[2941]: W0911 00:33:24.075579 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.075650 kubelet[2941]: E0911 00:33:24.075591 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.075650 kubelet[2941]: I0911 00:33:24.075603 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9e09e6c7-d57e-4653-9635-85c8bb6db9f7-varrun\") pod \"csi-node-driver-5zknv\" (UID: \"9e09e6c7-d57e-4653-9635-85c8bb6db9f7\") " pod="calico-system/csi-node-driver-5zknv" Sep 11 00:33:24.075767 kubelet[2941]: E0911 00:33:24.075759 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.075802 kubelet[2941]: W0911 00:33:24.075796 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.075854 kubelet[2941]: E0911 00:33:24.075830 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.075950 kubelet[2941]: E0911 00:33:24.075939 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.075950 kubelet[2941]: W0911 00:33:24.075947 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.076050 kubelet[2941]: E0911 00:33:24.075956 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.076050 kubelet[2941]: E0911 00:33:24.076047 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.076101 kubelet[2941]: W0911 00:33:24.076054 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.076101 kubelet[2941]: E0911 00:33:24.076059 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.076187 kubelet[2941]: E0911 00:33:24.076178 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.076187 kubelet[2941]: W0911 00:33:24.076182 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.076231 kubelet[2941]: E0911 00:33:24.076188 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.176628 kubelet[2941]: E0911 00:33:24.176607 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.176628 kubelet[2941]: W0911 00:33:24.176624 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.176831 kubelet[2941]: E0911 00:33:24.176639 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.176831 kubelet[2941]: E0911 00:33:24.176741 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.176831 kubelet[2941]: W0911 00:33:24.176747 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.176831 kubelet[2941]: E0911 00:33:24.176755 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.176981 kubelet[2941]: E0911 00:33:24.176936 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.176981 kubelet[2941]: W0911 00:33:24.176945 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.176981 kubelet[2941]: E0911 00:33:24.176956 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.177148 kubelet[2941]: E0911 00:33:24.177112 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.177148 kubelet[2941]: W0911 00:33:24.177119 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.177148 kubelet[2941]: E0911 00:33:24.177129 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.177387 kubelet[2941]: E0911 00:33:24.177322 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.177387 kubelet[2941]: W0911 00:33:24.177329 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.177387 kubelet[2941]: E0911 00:33:24.177338 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.177549 kubelet[2941]: E0911 00:33:24.177543 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.177640 kubelet[2941]: W0911 00:33:24.177592 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.177640 kubelet[2941]: E0911 00:33:24.177609 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.177710 kubelet[2941]: E0911 00:33:24.177700 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.177710 kubelet[2941]: W0911 00:33:24.177708 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.177755 kubelet[2941]: E0911 00:33:24.177714 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.177821 kubelet[2941]: E0911 00:33:24.177817 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.177844 kubelet[2941]: W0911 00:33:24.177822 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.177844 kubelet[2941]: E0911 00:33:24.177827 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.177918 kubelet[2941]: E0911 00:33:24.177907 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.177918 kubelet[2941]: W0911 00:33:24.177915 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.177975 kubelet[2941]: E0911 00:33:24.177922 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.178088 kubelet[2941]: E0911 00:33:24.178014 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.178088 kubelet[2941]: W0911 00:33:24.178021 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.178088 kubelet[2941]: E0911 00:33:24.178026 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.178245 kubelet[2941]: E0911 00:33:24.178235 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.178276 kubelet[2941]: W0911 00:33:24.178245 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.178276 kubelet[2941]: E0911 00:33:24.178257 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.178447 kubelet[2941]: E0911 00:33:24.178436 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.178447 kubelet[2941]: W0911 00:33:24.178445 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.178600 kubelet[2941]: E0911 00:33:24.178456 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.178600 kubelet[2941]: E0911 00:33:24.178577 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.178600 kubelet[2941]: W0911 00:33:24.178582 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.178600 kubelet[2941]: E0911 00:33:24.178588 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.178732 kubelet[2941]: E0911 00:33:24.178714 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.178732 kubelet[2941]: W0911 00:33:24.178722 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.178732 kubelet[2941]: E0911 00:33:24.178727 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.179372 kubelet[2941]: E0911 00:33:24.178822 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.179372 kubelet[2941]: W0911 00:33:24.178828 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.179372 kubelet[2941]: E0911 00:33:24.178874 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.179372 kubelet[2941]: E0911 00:33:24.178931 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.179372 kubelet[2941]: W0911 00:33:24.178936 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.179372 kubelet[2941]: E0911 00:33:24.179022 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.179372 kubelet[2941]: W0911 00:33:24.179029 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.179372 kubelet[2941]: E0911 00:33:24.179035 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.179372 kubelet[2941]: E0911 00:33:24.179132 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.179372 kubelet[2941]: W0911 00:33:24.179138 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.179706 kubelet[2941]: E0911 00:33:24.179162 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.179706 kubelet[2941]: E0911 00:33:24.179249 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.179706 kubelet[2941]: W0911 00:33:24.179256 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.179706 kubelet[2941]: E0911 00:33:24.179262 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.179706 kubelet[2941]: E0911 00:33:24.178951 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.179706 kubelet[2941]: E0911 00:33:24.179386 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.179706 kubelet[2941]: W0911 00:33:24.179391 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.179706 kubelet[2941]: E0911 00:33:24.179398 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.179706 kubelet[2941]: E0911 00:33:24.179511 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.179706 kubelet[2941]: W0911 00:33:24.179518 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.180479 kubelet[2941]: E0911 00:33:24.179877 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.180479 kubelet[2941]: E0911 00:33:24.180256 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.180479 kubelet[2941]: W0911 00:33:24.180262 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.180479 kubelet[2941]: E0911 00:33:24.180271 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.180875 kubelet[2941]: E0911 00:33:24.180692 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.180875 kubelet[2941]: W0911 00:33:24.180699 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.180875 kubelet[2941]: E0911 00:33:24.180706 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.180875 kubelet[2941]: E0911 00:33:24.180850 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.180875 kubelet[2941]: W0911 00:33:24.180855 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.181229 kubelet[2941]: E0911 00:33:24.181004 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.181382 kubelet[2941]: E0911 00:33:24.181357 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.181382 kubelet[2941]: W0911 00:33:24.181364 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.181382 kubelet[2941]: E0911 00:33:24.181370 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.314033 kubelet[2941]: E0911 00:33:24.313442 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.314033 kubelet[2941]: W0911 00:33:24.313457 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.314033 kubelet[2941]: E0911 00:33:24.313469 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.470915 kubelet[2941]: E0911 00:33:24.470688 2941 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 11 00:33:24.470915 kubelet[2941]: E0911 00:33:24.470751 2941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca749a37-3934-4338-a0b4-09af90e7bf1e-tigera-ca-bundle podName:ca749a37-3934-4338-a0b4-09af90e7bf1e nodeName:}" failed. No retries permitted until 2025-09-11 00:33:24.970735817 +0000 UTC m=+18.734910048 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/ca749a37-3934-4338-a0b4-09af90e7bf1e-tigera-ca-bundle") pod "calico-typha-7bf658f584-6g7kg" (UID: "ca749a37-3934-4338-a0b4-09af90e7bf1e") : failed to sync configmap cache: timed out waiting for the condition Sep 11 00:33:24.479032 kubelet[2941]: E0911 00:33:24.478937 2941 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 11 00:33:24.479032 kubelet[2941]: E0911 00:33:24.478962 2941 projected.go:194] Error preparing data for projected volume kube-api-access-wtcr5 for pod calico-system/calico-typha-7bf658f584-6g7kg: failed to sync configmap cache: timed out waiting for the condition Sep 11 00:33:24.479032 kubelet[2941]: E0911 00:33:24.479010 2941 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca749a37-3934-4338-a0b4-09af90e7bf1e-kube-api-access-wtcr5 podName:ca749a37-3934-4338-a0b4-09af90e7bf1e nodeName:}" failed. No retries permitted until 2025-09-11 00:33:24.978999907 +0000 UTC m=+18.743174138 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wtcr5" (UniqueName: "kubernetes.io/projected/ca749a37-3934-4338-a0b4-09af90e7bf1e-kube-api-access-wtcr5") pod "calico-typha-7bf658f584-6g7kg" (UID: "ca749a37-3934-4338-a0b4-09af90e7bf1e") : failed to sync configmap cache: timed out waiting for the condition Sep 11 00:33:24.479370 kubelet[2941]: E0911 00:33:24.479356 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.479370 kubelet[2941]: W0911 00:33:24.479367 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.479426 kubelet[2941]: E0911 00:33:24.479380 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.479602 kubelet[2941]: E0911 00:33:24.479504 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.479602 kubelet[2941]: W0911 00:33:24.479600 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.479647 kubelet[2941]: E0911 00:33:24.479607 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.580261 kubelet[2941]: E0911 00:33:24.580176 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.580261 kubelet[2941]: W0911 00:33:24.580192 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.580261 kubelet[2941]: E0911 00:33:24.580209 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.580666 kubelet[2941]: E0911 00:33:24.580327 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.580666 kubelet[2941]: W0911 00:33:24.580332 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.580666 kubelet[2941]: E0911 00:33:24.580337 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.588464 kubelet[2941]: E0911 00:33:24.588395 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.588464 kubelet[2941]: W0911 00:33:24.588410 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.588464 kubelet[2941]: E0911 00:33:24.588425 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.589543 kubelet[2941]: E0911 00:33:24.589527 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.589543 kubelet[2941]: W0911 00:33:24.589538 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.589632 kubelet[2941]: E0911 00:33:24.589548 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.637752 kubelet[2941]: E0911 00:33:24.637726 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.637752 kubelet[2941]: W0911 00:33:24.637746 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.637902 kubelet[2941]: E0911 00:33:24.637766 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.678882 containerd[1623]: time="2025-09-11T00:33:24.678848377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vv6pf,Uid:134e502c-a8c1-4393-84c9-665a93059d9b,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:24.681571 kubelet[2941]: E0911 00:33:24.681553 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.681571 kubelet[2941]: W0911 00:33:24.681569 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.681672 kubelet[2941]: E0911 00:33:24.681585 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.681723 kubelet[2941]: E0911 00:33:24.681712 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.681723 kubelet[2941]: W0911 00:33:24.681721 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.681778 kubelet[2941]: E0911 00:33:24.681729 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.744389 containerd[1623]: time="2025-09-11T00:33:24.744345818Z" level=info msg="connecting to shim b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0" address="unix:///run/containerd/s/e912685169d3c9ab099f0486fce8cc9aaa032ae84a458e08add5d3dbb3c59535" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:24.764395 systemd[1]: Started cri-containerd-b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0.scope - libcontainer container b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0. Sep 11 00:33:24.782733 kubelet[2941]: E0911 00:33:24.782681 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.782921 kubelet[2941]: W0911 00:33:24.782839 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.782921 kubelet[2941]: E0911 00:33:24.782872 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.783377 kubelet[2941]: E0911 00:33:24.783370 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.783507 kubelet[2941]: W0911 00:33:24.783441 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.783658 kubelet[2941]: E0911 00:33:24.783551 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.799584 containerd[1623]: time="2025-09-11T00:33:24.799392231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vv6pf,Uid:134e502c-a8c1-4393-84c9-665a93059d9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0\"" Sep 11 00:33:24.802991 containerd[1623]: time="2025-09-11T00:33:24.802889109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:33:24.884711 kubelet[2941]: E0911 00:33:24.884609 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.884711 kubelet[2941]: W0911 00:33:24.884625 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.884711 kubelet[2941]: E0911 00:33:24.884640 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.884887 kubelet[2941]: E0911 00:33:24.884780 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.884887 kubelet[2941]: W0911 00:33:24.884787 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.884887 kubelet[2941]: E0911 00:33:24.884794 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.986078 kubelet[2941]: E0911 00:33:24.986056 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.986078 kubelet[2941]: W0911 00:33:24.986073 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.986209 kubelet[2941]: E0911 00:33:24.986087 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.986209 kubelet[2941]: E0911 00:33:24.986203 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.986209 kubelet[2941]: W0911 00:33:24.986208 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.986338 kubelet[2941]: E0911 00:33:24.986213 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.986338 kubelet[2941]: E0911 00:33:24.986321 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.986338 kubelet[2941]: W0911 00:33:24.986326 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.986428 kubelet[2941]: E0911 00:33:24.986341 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.986514 kubelet[2941]: E0911 00:33:24.986437 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.986514 kubelet[2941]: W0911 00:33:24.986442 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.986514 kubelet[2941]: E0911 00:33:24.986450 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.986689 kubelet[2941]: E0911 00:33:24.986562 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.986689 kubelet[2941]: W0911 00:33:24.986570 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.986689 kubelet[2941]: E0911 00:33:24.986581 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.986834 kubelet[2941]: E0911 00:33:24.986804 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.986834 kubelet[2941]: W0911 00:33:24.986813 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.986834 kubelet[2941]: E0911 00:33:24.986825 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.986915 kubelet[2941]: E0911 00:33:24.986901 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.986915 kubelet[2941]: W0911 00:33:24.986911 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.986995 kubelet[2941]: E0911 00:33:24.986919 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.987044 kubelet[2941]: E0911 00:33:24.987034 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.987044 kubelet[2941]: W0911 00:33:24.987043 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.987091 kubelet[2941]: E0911 00:33:24.987056 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.988307 kubelet[2941]: E0911 00:33:24.987344 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.988307 kubelet[2941]: W0911 00:33:24.987352 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.988307 kubelet[2941]: E0911 00:33:24.987357 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.988307 kubelet[2941]: E0911 00:33:24.987468 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.988307 kubelet[2941]: W0911 00:33:24.987473 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.988307 kubelet[2941]: E0911 00:33:24.987479 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:24.988307 kubelet[2941]: E0911 00:33:24.987912 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.988307 kubelet[2941]: W0911 00:33:24.987917 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.988307 kubelet[2941]: E0911 00:33:24.987923 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:33:24.992772 kubelet[2941]: E0911 00:33:24.992751 2941 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:33:24.992772 kubelet[2941]: W0911 00:33:24.992764 2941 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:33:24.992863 kubelet[2941]: E0911 00:33:24.992776 2941 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:33:25.149263 containerd[1623]: time="2025-09-11T00:33:25.149177096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bf658f584-6g7kg,Uid:ca749a37-3934-4338-a0b4-09af90e7bf1e,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:25.214789 containerd[1623]: time="2025-09-11T00:33:25.214762116Z" level=info msg="connecting to shim 7cd1b5a803e7483693ce9582db05a64141bce2724c40a050b89ba8d04494460a" address="unix:///run/containerd/s/ffc7f2fcfbe28ba37246b77fed5964e49f1e89d7d5f47ba5fd905ff27c47d40b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:25.234397 systemd[1]: Started cri-containerd-7cd1b5a803e7483693ce9582db05a64141bce2724c40a050b89ba8d04494460a.scope - libcontainer container 7cd1b5a803e7483693ce9582db05a64141bce2724c40a050b89ba8d04494460a. 
Sep 11 00:33:25.275145 containerd[1623]: time="2025-09-11T00:33:25.275114522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bf658f584-6g7kg,Uid:ca749a37-3934-4338-a0b4-09af90e7bf1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7cd1b5a803e7483693ce9582db05a64141bce2724c40a050b89ba8d04494460a\"" Sep 11 00:33:25.392337 kubelet[2941]: E0911 00:33:25.392014 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zknv" podUID="9e09e6c7-d57e-4653-9635-85c8bb6db9f7" Sep 11 00:33:26.356165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570681444.mount: Deactivated successfully. Sep 11 00:33:26.419976 containerd[1623]: time="2025-09-11T00:33:26.419940401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:26.420611 containerd[1623]: time="2025-09-11T00:33:26.420590020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 11 00:33:26.420956 containerd[1623]: time="2025-09-11T00:33:26.420901663Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:26.422041 containerd[1623]: time="2025-09-11T00:33:26.422024614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:26.422455 containerd[1623]: time="2025-09-11T00:33:26.422435790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with 
image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.619492429s" Sep 11 00:33:26.422507 containerd[1623]: time="2025-09-11T00:33:26.422454440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:33:26.423286 containerd[1623]: time="2025-09-11T00:33:26.423263251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:33:26.425948 containerd[1623]: time="2025-09-11T00:33:26.425912049Z" level=info msg="CreateContainer within sandbox \"b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:33:26.433690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972417427.mount: Deactivated successfully. 
Sep 11 00:33:26.435548 containerd[1623]: time="2025-09-11T00:33:26.435522996Z" level=info msg="Container 2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:26.465340 containerd[1623]: time="2025-09-11T00:33:26.465205346Z" level=info msg="CreateContainer within sandbox \"b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015\"" Sep 11 00:33:26.466046 containerd[1623]: time="2025-09-11T00:33:26.465992909Z" level=info msg="StartContainer for \"2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015\"" Sep 11 00:33:26.467216 containerd[1623]: time="2025-09-11T00:33:26.467192223Z" level=info msg="connecting to shim 2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015" address="unix:///run/containerd/s/e912685169d3c9ab099f0486fce8cc9aaa032ae84a458e08add5d3dbb3c59535" protocol=ttrpc version=3 Sep 11 00:33:26.482411 systemd[1]: Started cri-containerd-2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015.scope - libcontainer container 2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015. Sep 11 00:33:26.515787 containerd[1623]: time="2025-09-11T00:33:26.515760429Z" level=info msg="StartContainer for \"2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015\" returns successfully" Sep 11 00:33:26.522444 systemd[1]: cri-containerd-2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015.scope: Deactivated successfully. 
Sep 11 00:33:26.558728 containerd[1623]: time="2025-09-11T00:33:26.558282655Z" level=info msg="received exit event container_id:\"2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015\" id:\"2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015\" pid:3539 exited_at:{seconds:1757550806 nanos:523879549}"
Sep 11 00:33:26.558951 containerd[1623]: time="2025-09-11T00:33:26.558324060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015\" id:\"2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015\" pid:3539 exited_at:{seconds:1757550806 nanos:523879549}"
Sep 11 00:33:27.330764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d3c7fc70709f5a1b355291d101bdecc42165abe563d8bf733fa739d3ed87015-rootfs.mount: Deactivated successfully.
Sep 11 00:33:27.392864 kubelet[2941]: E0911 00:33:27.392830 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zknv" podUID="9e09e6c7-d57e-4653-9635-85c8bb6db9f7"
Sep 11 00:33:28.577880 containerd[1623]: time="2025-09-11T00:33:28.577843204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:28.587745 containerd[1623]: time="2025-09-11T00:33:28.587708321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548"
Sep 11 00:33:28.593744 containerd[1623]: time="2025-09-11T00:33:28.593716393Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:28.600420 containerd[1623]: time="2025-09-11T00:33:28.600385096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:28.600848 containerd[1623]: time="2025-09-11T00:33:28.600635368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.177266725s"
Sep 11 00:33:28.600848 containerd[1623]: time="2025-09-11T00:33:28.600654156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 11 00:33:28.601688 containerd[1623]: time="2025-09-11T00:33:28.601441970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 11 00:33:28.616519 containerd[1623]: time="2025-09-11T00:33:28.616499506Z" level=info msg="CreateContainer within sandbox \"7cd1b5a803e7483693ce9582db05a64141bce2724c40a050b89ba8d04494460a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 11 00:33:28.657687 containerd[1623]: time="2025-09-11T00:33:28.657660450Z" level=info msg="Container 93c8cecea57f339e84a40b7b7b9a461ebe1f203f2b15680678197d05656cc489: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:28.912219 containerd[1623]: time="2025-09-11T00:33:28.912188209Z" level=info msg="CreateContainer within sandbox \"7cd1b5a803e7483693ce9582db05a64141bce2724c40a050b89ba8d04494460a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"93c8cecea57f339e84a40b7b7b9a461ebe1f203f2b15680678197d05656cc489\""
Sep 11 00:33:28.913399 containerd[1623]: time="2025-09-11T00:33:28.913030377Z" level=info msg="StartContainer for \"93c8cecea57f339e84a40b7b7b9a461ebe1f203f2b15680678197d05656cc489\""
Sep 11 00:33:28.914250 containerd[1623]: time="2025-09-11T00:33:28.914229774Z" level=info msg="connecting to shim 93c8cecea57f339e84a40b7b7b9a461ebe1f203f2b15680678197d05656cc489" address="unix:///run/containerd/s/ffc7f2fcfbe28ba37246b77fed5964e49f1e89d7d5f47ba5fd905ff27c47d40b" protocol=ttrpc version=3
Sep 11 00:33:28.930447 systemd[1]: Started cri-containerd-93c8cecea57f339e84a40b7b7b9a461ebe1f203f2b15680678197d05656cc489.scope - libcontainer container 93c8cecea57f339e84a40b7b7b9a461ebe1f203f2b15680678197d05656cc489.
Sep 11 00:33:28.977977 containerd[1623]: time="2025-09-11T00:33:28.977950838Z" level=info msg="StartContainer for \"93c8cecea57f339e84a40b7b7b9a461ebe1f203f2b15680678197d05656cc489\" returns successfully"
Sep 11 00:33:29.392826 kubelet[2941]: E0911 00:33:29.392789 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zknv" podUID="9e09e6c7-d57e-4653-9635-85c8bb6db9f7"
Sep 11 00:33:30.492640 kubelet[2941]: I0911 00:33:30.492608 2941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:33:30.794510 kubelet[2941]: I0911 00:33:30.793709 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bf658f584-6g7kg" podStartSLOduration=4.467125949 podStartE2EDuration="7.792492045s" podCreationTimestamp="2025-09-11 00:33:23 +0000 UTC" firstStartedPulling="2025-09-11 00:33:25.27600966 +0000 UTC m=+19.040183893" lastFinishedPulling="2025-09-11 00:33:28.601375755 +0000 UTC m=+22.365549989" observedRunningTime="2025-09-11 00:33:29.478181565 +0000 UTC m=+23.242355804" watchObservedRunningTime="2025-09-11 00:33:30.792492045 +0000 UTC m=+24.556666284"
Sep 11 00:33:31.408474 kubelet[2941]: E0911 00:33:31.408440 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zknv" podUID="9e09e6c7-d57e-4653-9635-85c8bb6db9f7"
Sep 11 00:33:32.222632 containerd[1623]: time="2025-09-11T00:33:32.222544068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:32.228306 containerd[1623]: time="2025-09-11T00:33:32.228273731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 11 00:33:32.231916 containerd[1623]: time="2025-09-11T00:33:32.231883106Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:32.237016 containerd[1623]: time="2025-09-11T00:33:32.236968012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:33:32.237513 containerd[1623]: time="2025-09-11T00:33:32.237428236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.635968881s"
Sep 11 00:33:32.237513 containerd[1623]: time="2025-09-11T00:33:32.237453090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 11 00:33:32.240854 containerd[1623]: time="2025-09-11T00:33:32.239583738Z" level=info msg="CreateContainer within sandbox \"b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 11 00:33:32.289742 containerd[1623]: time="2025-09-11T00:33:32.289709443Z" level=info msg="Container 17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:33:32.307130 containerd[1623]: time="2025-09-11T00:33:32.307097741Z" level=info msg="CreateContainer within sandbox \"b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e\""
Sep 11 00:33:32.307964 containerd[1623]: time="2025-09-11T00:33:32.307857793Z" level=info msg="StartContainer for \"17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e\""
Sep 11 00:33:32.309269 containerd[1623]: time="2025-09-11T00:33:32.309249110Z" level=info msg="connecting to shim 17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e" address="unix:///run/containerd/s/e912685169d3c9ab099f0486fce8cc9aaa032ae84a458e08add5d3dbb3c59535" protocol=ttrpc version=3
Sep 11 00:33:32.331466 systemd[1]: Started cri-containerd-17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e.scope - libcontainer container 17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e.
Sep 11 00:33:32.388682 containerd[1623]: time="2025-09-11T00:33:32.388619408Z" level=info msg="StartContainer for \"17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e\" returns successfully"
Sep 11 00:33:33.392353 kubelet[2941]: E0911 00:33:33.392320 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5zknv" podUID="9e09e6c7-d57e-4653-9635-85c8bb6db9f7"
Sep 11 00:33:34.524857 systemd[1]: cri-containerd-17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e.scope: Deactivated successfully.
Sep 11 00:33:34.525473 systemd[1]: cri-containerd-17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e.scope: Consumed 338ms CPU time, 161M memory peak, 1.1M read from disk, 171.3M written to disk.
Sep 11 00:33:34.648842 kubelet[2941]: I0911 00:33:34.648817 2941 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 11 00:33:34.683851 containerd[1623]: time="2025-09-11T00:33:34.683769679Z" level=info msg="received exit event container_id:\"17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e\" id:\"17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e\" pid:3644 exited_at:{seconds:1757550814 nanos:674504625}"
Sep 11 00:33:34.684342 containerd[1623]: time="2025-09-11T00:33:34.683853269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e\" id:\"17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e\" pid:3644 exited_at:{seconds:1757550814 nanos:674504625}"
Sep 11 00:33:34.763012 systemd[1]: Created slice kubepods-burstable-pod282222e7_7607_4bf7_8990_39b458addfca.slice - libcontainer container kubepods-burstable-pod282222e7_7607_4bf7_8990_39b458addfca.slice.
Sep 11 00:33:34.782535 systemd[1]: Created slice kubepods-burstable-podb497c000_3842_4419_a5a7_413b5b0d4274.slice - libcontainer container kubepods-burstable-podb497c000_3842_4419_a5a7_413b5b0d4274.slice.
Sep 11 00:33:34.790015 systemd[1]: Created slice kubepods-besteffort-podb859708f_7aeb_4315_9af2_3e57c1933a45.slice - libcontainer container kubepods-besteffort-podb859708f_7aeb_4315_9af2_3e57c1933a45.slice.
Sep 11 00:33:34.805441 systemd[1]: Created slice kubepods-besteffort-podda6f64ef_8d82_4d3d_b358_cbc72485920a.slice - libcontainer container kubepods-besteffort-podda6f64ef_8d82_4d3d_b358_cbc72485920a.slice.
Sep 11 00:33:34.813760 systemd[1]: Created slice kubepods-besteffort-pod775b899a_158c_4a61_bc04_563957a6f436.slice - libcontainer container kubepods-besteffort-pod775b899a_158c_4a61_bc04_563957a6f436.slice.
Sep 11 00:33:34.821149 systemd[1]: Created slice kubepods-besteffort-podab50939c_42a6_4b91_8b1f_2906b7091d42.slice - libcontainer container kubepods-besteffort-podab50939c_42a6_4b91_8b1f_2906b7091d42.slice.
Sep 11 00:33:34.824346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17af3d9016a1bd662aed30ded2f3e0f4c595a07fb27963f72c85b98d3d87743e-rootfs.mount: Deactivated successfully.
Sep 11 00:33:34.827834 systemd[1]: Created slice kubepods-besteffort-pod1d784e97_b879_49d9_bd94_fd0284ab6cc2.slice - libcontainer container kubepods-besteffort-pod1d784e97_b879_49d9_bd94_fd0284ab6cc2.slice.
Sep 11 00:33:34.856173 kubelet[2941]: I0911 00:33:34.856147 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1d784e97-b879-49d9-bd94-fd0284ab6cc2-calico-apiserver-certs\") pod \"calico-apiserver-6b5fbfb79d-cz76d\" (UID: \"1d784e97-b879-49d9-bd94-fd0284ab6cc2\") " pod="calico-apiserver/calico-apiserver-6b5fbfb79d-cz76d"
Sep 11 00:33:34.856480 kubelet[2941]: I0911 00:33:34.856185 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-backend-key-pair\") pod \"whisker-fc5bd97bc-x8nb9\" (UID: \"da6f64ef-8d82-4d3d-b358-cbc72485920a\") " pod="calico-system/whisker-fc5bd97bc-x8nb9"
Sep 11 00:33:34.856480 kubelet[2941]: I0911 00:33:34.856198 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4bx\" (UniqueName: \"kubernetes.io/projected/da6f64ef-8d82-4d3d-b358-cbc72485920a-kube-api-access-lc4bx\") pod \"whisker-fc5bd97bc-x8nb9\" (UID: \"da6f64ef-8d82-4d3d-b358-cbc72485920a\") " pod="calico-system/whisker-fc5bd97bc-x8nb9"
Sep 11 00:33:34.856480 kubelet[2941]: I0911 00:33:34.856208 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-ca-bundle\") pod \"whisker-fc5bd97bc-x8nb9\" (UID: \"da6f64ef-8d82-4d3d-b358-cbc72485920a\") " pod="calico-system/whisker-fc5bd97bc-x8nb9"
Sep 11 00:33:34.856480 kubelet[2941]: I0911 00:33:34.856222 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b497c000-3842-4419-a5a7-413b5b0d4274-config-volume\") pod \"coredns-668d6bf9bc-q5r45\" (UID: \"b497c000-3842-4419-a5a7-413b5b0d4274\") " pod="kube-system/coredns-668d6bf9bc-q5r45"
Sep 11 00:33:34.856480 kubelet[2941]: I0911 00:33:34.856240 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab50939c-42a6-4b91-8b1f-2906b7091d42-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-52kp4\" (UID: \"ab50939c-42a6-4b91-8b1f-2906b7091d42\") " pod="calico-system/goldmane-54d579b49d-52kp4"
Sep 11 00:33:34.856603 kubelet[2941]: I0911 00:33:34.856265 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dxh\" (UniqueName: \"kubernetes.io/projected/ab50939c-42a6-4b91-8b1f-2906b7091d42-kube-api-access-69dxh\") pod \"goldmane-54d579b49d-52kp4\" (UID: \"ab50939c-42a6-4b91-8b1f-2906b7091d42\") " pod="calico-system/goldmane-54d579b49d-52kp4"
Sep 11 00:33:34.856603 kubelet[2941]: I0911 00:33:34.856278 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzj9\" (UniqueName: \"kubernetes.io/projected/1d784e97-b879-49d9-bd94-fd0284ab6cc2-kube-api-access-plzj9\") pod \"calico-apiserver-6b5fbfb79d-cz76d\" (UID: \"1d784e97-b879-49d9-bd94-fd0284ab6cc2\") " pod="calico-apiserver/calico-apiserver-6b5fbfb79d-cz76d"
Sep 11 00:33:34.856603 kubelet[2941]: I0911 00:33:34.856350 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ab50939c-42a6-4b91-8b1f-2906b7091d42-goldmane-key-pair\") pod \"goldmane-54d579b49d-52kp4\" (UID: \"ab50939c-42a6-4b91-8b1f-2906b7091d42\") " pod="calico-system/goldmane-54d579b49d-52kp4"
Sep 11 00:33:34.856603 kubelet[2941]: I0911 00:33:34.856377 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/775b899a-158c-4a61-bc04-563957a6f436-calico-apiserver-certs\") pod \"calico-apiserver-6b5fbfb79d-qgw7r\" (UID: \"775b899a-158c-4a61-bc04-563957a6f436\") " pod="calico-apiserver/calico-apiserver-6b5fbfb79d-qgw7r"
Sep 11 00:33:34.856603 kubelet[2941]: I0911 00:33:34.856413 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9g75\" (UniqueName: \"kubernetes.io/projected/775b899a-158c-4a61-bc04-563957a6f436-kube-api-access-p9g75\") pod \"calico-apiserver-6b5fbfb79d-qgw7r\" (UID: \"775b899a-158c-4a61-bc04-563957a6f436\") " pod="calico-apiserver/calico-apiserver-6b5fbfb79d-qgw7r"
Sep 11 00:33:34.856690 kubelet[2941]: I0911 00:33:34.856441 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6vr9\" (UniqueName: \"kubernetes.io/projected/b497c000-3842-4419-a5a7-413b5b0d4274-kube-api-access-s6vr9\") pod \"coredns-668d6bf9bc-q5r45\" (UID: \"b497c000-3842-4419-a5a7-413b5b0d4274\") " pod="kube-system/coredns-668d6bf9bc-q5r45"
Sep 11 00:33:34.856783 kubelet[2941]: I0911 00:33:34.856720 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5pv\" (UniqueName: \"kubernetes.io/projected/282222e7-7607-4bf7-8990-39b458addfca-kube-api-access-6m5pv\") pod \"coredns-668d6bf9bc-rpfgw\" (UID: \"282222e7-7607-4bf7-8990-39b458addfca\") " pod="kube-system/coredns-668d6bf9bc-rpfgw"
Sep 11 00:33:34.856783 kubelet[2941]: I0911 00:33:34.856742 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab50939c-42a6-4b91-8b1f-2906b7091d42-config\") pod \"goldmane-54d579b49d-52kp4\" (UID: \"ab50939c-42a6-4b91-8b1f-2906b7091d42\") " pod="calico-system/goldmane-54d579b49d-52kp4"
Sep 11 00:33:34.856783 kubelet[2941]: I0911 00:33:34.856753 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b859708f-7aeb-4315-9af2-3e57c1933a45-tigera-ca-bundle\") pod \"calico-kube-controllers-6f4d847988-7r26x\" (UID: \"b859708f-7aeb-4315-9af2-3e57c1933a45\") " pod="calico-system/calico-kube-controllers-6f4d847988-7r26x"
Sep 11 00:33:34.856783 kubelet[2941]: I0911 00:33:34.856765 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf2wp\" (UniqueName: \"kubernetes.io/projected/b859708f-7aeb-4315-9af2-3e57c1933a45-kube-api-access-rf2wp\") pod \"calico-kube-controllers-6f4d847988-7r26x\" (UID: \"b859708f-7aeb-4315-9af2-3e57c1933a45\") " pod="calico-system/calico-kube-controllers-6f4d847988-7r26x"
Sep 11 00:33:34.856783 kubelet[2941]: I0911 00:33:34.856774 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/282222e7-7607-4bf7-8990-39b458addfca-config-volume\") pod \"coredns-668d6bf9bc-rpfgw\" (UID: \"282222e7-7607-4bf7-8990-39b458addfca\") " pod="kube-system/coredns-668d6bf9bc-rpfgw"
Sep 11 00:33:35.085095 containerd[1623]: time="2025-09-11T00:33:35.084408667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rpfgw,Uid:282222e7-7607-4bf7-8990-39b458addfca,Namespace:kube-system,Attempt:0,}"
Sep 11 00:33:35.087688 containerd[1623]: time="2025-09-11T00:33:35.087662074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5r45,Uid:b497c000-3842-4419-a5a7-413b5b0d4274,Namespace:kube-system,Attempt:0,}"
Sep 11 00:33:35.100944 containerd[1623]: time="2025-09-11T00:33:35.100907182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4d847988-7r26x,Uid:b859708f-7aeb-4315-9af2-3e57c1933a45,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:35.109681 containerd[1623]: time="2025-09-11T00:33:35.109657550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc5bd97bc-x8nb9,Uid:da6f64ef-8d82-4d3d-b358-cbc72485920a,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:35.118113 containerd[1623]: time="2025-09-11T00:33:35.118095740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-qgw7r,Uid:775b899a-158c-4a61-bc04-563957a6f436,Namespace:calico-apiserver,Attempt:0,}"
Sep 11 00:33:35.125596 containerd[1623]: time="2025-09-11T00:33:35.125578546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-52kp4,Uid:ab50939c-42a6-4b91-8b1f-2906b7091d42,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:35.131011 containerd[1623]: time="2025-09-11T00:33:35.130995381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-cz76d,Uid:1d784e97-b879-49d9-bd94-fd0284ab6cc2,Namespace:calico-apiserver,Attempt:0,}"
Sep 11 00:33:35.403705 systemd[1]: Created slice kubepods-besteffort-pod9e09e6c7_d57e_4653_9635_85c8bb6db9f7.slice - libcontainer container kubepods-besteffort-pod9e09e6c7_d57e_4653_9635_85c8bb6db9f7.slice.
Sep 11 00:33:35.406599 containerd[1623]: time="2025-09-11T00:33:35.406566829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zknv,Uid:9e09e6c7-d57e-4653-9635-85c8bb6db9f7,Namespace:calico-system,Attempt:0,}"
Sep 11 00:33:35.531765 containerd[1623]: time="2025-09-11T00:33:35.531730762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 11 00:33:35.883742 containerd[1623]: time="2025-09-11T00:33:35.883708692Z" level=error msg="Failed to destroy network for sandbox \"10c9e0a6017b9c7ca1640b4a5a184ad0a62f482e4cd89b58718c969ab874bc38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.885525 containerd[1623]: time="2025-09-11T00:33:35.884468478Z" level=error msg="Failed to destroy network for sandbox \"7037242ccc7902bcd830cfaac7acf0e29d847b058e8c6f073b195e3e624b7a23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.886237 systemd[1]: run-netns-cni\x2de571ce04\x2d546c\x2d3b8e\x2d7f14\x2d20c4c59e0618.mount: Deactivated successfully.
Sep 11 00:33:35.888379 containerd[1623]: time="2025-09-11T00:33:35.888178268Z" level=error msg="Failed to destroy network for sandbox \"a027c5fbd62589a706844a9ab60213b9b2f3f901d27ebf66b50e92d34dc3428d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.890380 systemd[1]: run-netns-cni\x2d06dcf2e7\x2d746e\x2dc59d\x2dfacf\x2d42e19d0b8013.mount: Deactivated successfully.
Sep 11 00:33:35.894278 systemd[1]: run-netns-cni\x2d686d32dc\x2d0fad\x2da124\x2db128\x2d8d356540feaf.mount: Deactivated successfully.
Sep 11 00:33:35.894663 containerd[1623]: time="2025-09-11T00:33:35.890998244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5r45,Uid:b497c000-3842-4419-a5a7-413b5b0d4274,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c9e0a6017b9c7ca1640b4a5a184ad0a62f482e4cd89b58718c969ab874bc38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.898699 containerd[1623]: time="2025-09-11T00:33:35.896823582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc5bd97bc-x8nb9,Uid:da6f64ef-8d82-4d3d-b358-cbc72485920a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7037242ccc7902bcd830cfaac7acf0e29d847b058e8c6f073b195e3e624b7a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.898699 containerd[1623]: time="2025-09-11T00:33:35.898620160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-qgw7r,Uid:775b899a-158c-4a61-bc04-563957a6f436,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a027c5fbd62589a706844a9ab60213b9b2f3f901d27ebf66b50e92d34dc3428d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.902114 kubelet[2941]: E0911 00:33:35.902070 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a027c5fbd62589a706844a9ab60213b9b2f3f901d27ebf66b50e92d34dc3428d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.902591 kubelet[2941]: E0911 00:33:35.902278 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c9e0a6017b9c7ca1640b4a5a184ad0a62f482e4cd89b58718c969ab874bc38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.905884 kubelet[2941]: E0911 00:33:35.905834 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7037242ccc7902bcd830cfaac7acf0e29d847b058e8c6f073b195e3e624b7a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.906064 kubelet[2941]: E0911 00:33:35.905895 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7037242ccc7902bcd830cfaac7acf0e29d847b058e8c6f073b195e3e624b7a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fc5bd97bc-x8nb9"
Sep 11 00:33:35.906100 kubelet[2941]: E0911 00:33:35.906071 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c9e0a6017b9c7ca1640b4a5a184ad0a62f482e4cd89b58718c969ab874bc38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q5r45"
Sep 11 00:33:35.909377 kubelet[2941]: E0911 00:33:35.909335 2941 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10c9e0a6017b9c7ca1640b4a5a184ad0a62f482e4cd89b58718c969ab874bc38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q5r45"
Sep 11 00:33:35.910277 kubelet[2941]: E0911 00:33:35.909462 2941 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7037242ccc7902bcd830cfaac7acf0e29d847b058e8c6f073b195e3e624b7a23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fc5bd97bc-x8nb9"
Sep 11 00:33:35.912956 kubelet[2941]: E0911 00:33:35.912748 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fc5bd97bc-x8nb9_calico-system(da6f64ef-8d82-4d3d-b358-cbc72485920a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fc5bd97bc-x8nb9_calico-system(da6f64ef-8d82-4d3d-b358-cbc72485920a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7037242ccc7902bcd830cfaac7acf0e29d847b058e8c6f073b195e3e624b7a23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fc5bd97bc-x8nb9" podUID="da6f64ef-8d82-4d3d-b358-cbc72485920a"
Sep 11 00:33:35.913085 kubelet[2941]: E0911 00:33:35.912836 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a027c5fbd62589a706844a9ab60213b9b2f3f901d27ebf66b50e92d34dc3428d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-qgw7r"
Sep 11 00:33:35.913085 kubelet[2941]: E0911 00:33:35.913039 2941 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a027c5fbd62589a706844a9ab60213b9b2f3f901d27ebf66b50e92d34dc3428d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-qgw7r"
Sep 11 00:33:35.913085 kubelet[2941]: E0911 00:33:35.913074 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5fbfb79d-qgw7r_calico-apiserver(775b899a-158c-4a61-bc04-563957a6f436)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5fbfb79d-qgw7r_calico-apiserver(775b899a-158c-4a61-bc04-563957a6f436)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a027c5fbd62589a706844a9ab60213b9b2f3f901d27ebf66b50e92d34dc3428d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-qgw7r" podUID="775b899a-158c-4a61-bc04-563957a6f436"
Sep 11 00:33:35.913307 kubelet[2941]: E0911 00:33:35.913280 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q5r45_kube-system(b497c000-3842-4419-a5a7-413b5b0d4274)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q5r45_kube-system(b497c000-3842-4419-a5a7-413b5b0d4274)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10c9e0a6017b9c7ca1640b4a5a184ad0a62f482e4cd89b58718c969ab874bc38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q5r45" podUID="b497c000-3842-4419-a5a7-413b5b0d4274"
Sep 11 00:33:35.924718 containerd[1623]: time="2025-09-11T00:33:35.924683610Z" level=error msg="Failed to destroy network for sandbox \"9be6827c4bc27c45b3eef75cef352c2bc04cbfac480a49555391050daadcb9f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.926682 systemd[1]: run-netns-cni\x2d1328888c\x2d33c4\x2d9432\x2dd6e2\x2d1cbc63b2c3fb.mount: Deactivated successfully.
Sep 11 00:33:35.931437 containerd[1623]: time="2025-09-11T00:33:35.931047090Z" level=error msg="Failed to destroy network for sandbox \"010c1e9b5034661efe5cc0469ff9fc18feaf3347089bad18ede0f3aebb86db9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.932540 containerd[1623]: time="2025-09-11T00:33:35.932244744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zknv,Uid:9e09e6c7-d57e-4653-9635-85c8bb6db9f7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9be6827c4bc27c45b3eef75cef352c2bc04cbfac480a49555391050daadcb9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.932853 kubelet[2941]: E0911 00:33:35.932627 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9be6827c4bc27c45b3eef75cef352c2bc04cbfac480a49555391050daadcb9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.932853 kubelet[2941]: E0911 00:33:35.932670 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9be6827c4bc27c45b3eef75cef352c2bc04cbfac480a49555391050daadcb9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5zknv"
Sep 11 00:33:35.932853 kubelet[2941]: E0911 00:33:35.932687 2941 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9be6827c4bc27c45b3eef75cef352c2bc04cbfac480a49555391050daadcb9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5zknv"
Sep 11 00:33:35.932971 containerd[1623]: time="2025-09-11T00:33:35.932780901Z" level=error msg="Failed to destroy network for sandbox \"1189de5b849ff856d02e74a6adcaa18e39eedf273ecc1407758d49bb2ff599f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.935378 kubelet[2941]: E0911 00:33:35.932714 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5zknv_calico-system(9e09e6c7-d57e-4653-9635-85c8bb6db9f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5zknv_calico-system(9e09e6c7-d57e-4653-9635-85c8bb6db9f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9be6827c4bc27c45b3eef75cef352c2bc04cbfac480a49555391050daadcb9f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5zknv" podUID="9e09e6c7-d57e-4653-9635-85c8bb6db9f7"
Sep 11 00:33:35.936074 containerd[1623]: time="2025-09-11T00:33:35.935455154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-52kp4,Uid:ab50939c-42a6-4b91-8b1f-2906b7091d42,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"010c1e9b5034661efe5cc0469ff9fc18feaf3347089bad18ede0f3aebb86db9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.936150 containerd[1623]: time="2025-09-11T00:33:35.936085865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rpfgw,Uid:282222e7-7607-4bf7-8990-39b458addfca,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1189de5b849ff856d02e74a6adcaa18e39eedf273ecc1407758d49bb2ff599f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.936202 kubelet[2941]: E0911 00:33:35.936167 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010c1e9b5034661efe5cc0469ff9fc18feaf3347089bad18ede0f3aebb86db9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 11 00:33:35.936232 kubelet[2941]: E0911 00:33:35.936211 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010c1e9b5034661efe5cc0469ff9fc18feaf3347089bad18ede0f3aebb86db9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-52kp4"
Sep 11 00:33:35.936254 kubelet[2941]: E0911 00:33:35.936231 2941 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"010c1e9b5034661efe5cc0469ff9fc18feaf3347089bad18ede0f3aebb86db9c\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-52kp4" Sep 11 00:33:35.936254 kubelet[2941]: E0911 00:33:35.936263 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-52kp4_calico-system(ab50939c-42a6-4b91-8b1f-2906b7091d42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-52kp4_calico-system(ab50939c-42a6-4b91-8b1f-2906b7091d42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"010c1e9b5034661efe5cc0469ff9fc18feaf3347089bad18ede0f3aebb86db9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-52kp4" podUID="ab50939c-42a6-4b91-8b1f-2906b7091d42" Sep 11 00:33:35.936475 kubelet[2941]: E0911 00:33:35.936384 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1189de5b849ff856d02e74a6adcaa18e39eedf273ecc1407758d49bb2ff599f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:35.936475 kubelet[2941]: E0911 00:33:35.936408 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1189de5b849ff856d02e74a6adcaa18e39eedf273ecc1407758d49bb2ff599f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rpfgw" Sep 11 00:33:35.936475 kubelet[2941]: E0911 00:33:35.936426 2941 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1189de5b849ff856d02e74a6adcaa18e39eedf273ecc1407758d49bb2ff599f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rpfgw" Sep 11 00:33:35.937758 kubelet[2941]: E0911 00:33:35.936450 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rpfgw_kube-system(282222e7-7607-4bf7-8990-39b458addfca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rpfgw_kube-system(282222e7-7607-4bf7-8990-39b458addfca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1189de5b849ff856d02e74a6adcaa18e39eedf273ecc1407758d49bb2ff599f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rpfgw" podUID="282222e7-7607-4bf7-8990-39b458addfca" Sep 11 00:33:35.940423 containerd[1623]: time="2025-09-11T00:33:35.940377140Z" level=error msg="Failed to destroy network for sandbox \"12d331e2ed3082dbdcccfce4db94ca08c6bf96410262d267bc4b0229ff5c6351\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:35.940955 containerd[1623]: time="2025-09-11T00:33:35.940928742Z" level=error msg="Failed to destroy network for sandbox \"c7bb9104ea35822cb4836aebea77828cc0e86f30527d11380d2d846fa1335a00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 11 00:33:35.942304 containerd[1623]: time="2025-09-11T00:33:35.942251157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-cz76d,Uid:1d784e97-b879-49d9-bd94-fd0284ab6cc2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d331e2ed3082dbdcccfce4db94ca08c6bf96410262d267bc4b0229ff5c6351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:35.942492 kubelet[2941]: E0911 00:33:35.942468 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d331e2ed3082dbdcccfce4db94ca08c6bf96410262d267bc4b0229ff5c6351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:35.949163 kubelet[2941]: E0911 00:33:35.942506 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d331e2ed3082dbdcccfce4db94ca08c6bf96410262d267bc4b0229ff5c6351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-cz76d" Sep 11 00:33:35.949163 kubelet[2941]: E0911 00:33:35.942521 2941 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12d331e2ed3082dbdcccfce4db94ca08c6bf96410262d267bc4b0229ff5c6351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-cz76d" Sep 11 00:33:35.949163 kubelet[2941]: E0911 00:33:35.942548 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5fbfb79d-cz76d_calico-apiserver(1d784e97-b879-49d9-bd94-fd0284ab6cc2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5fbfb79d-cz76d_calico-apiserver(1d784e97-b879-49d9-bd94-fd0284ab6cc2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12d331e2ed3082dbdcccfce4db94ca08c6bf96410262d267bc4b0229ff5c6351\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-cz76d" podUID="1d784e97-b879-49d9-bd94-fd0284ab6cc2" Sep 11 00:33:35.949271 containerd[1623]: time="2025-09-11T00:33:35.944826494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4d847988-7r26x,Uid:b859708f-7aeb-4315-9af2-3e57c1933a45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bb9104ea35822cb4836aebea77828cc0e86f30527d11380d2d846fa1335a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:35.949329 kubelet[2941]: E0911 00:33:35.944927 2941 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bb9104ea35822cb4836aebea77828cc0e86f30527d11380d2d846fa1335a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:33:35.949329 kubelet[2941]: E0911 
00:33:35.944982 2941 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bb9104ea35822cb4836aebea77828cc0e86f30527d11380d2d846fa1335a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f4d847988-7r26x" Sep 11 00:33:35.949329 kubelet[2941]: E0911 00:33:35.945141 2941 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7bb9104ea35822cb4836aebea77828cc0e86f30527d11380d2d846fa1335a00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f4d847988-7r26x" Sep 11 00:33:35.949396 kubelet[2941]: E0911 00:33:35.945192 2941 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f4d847988-7r26x_calico-system(b859708f-7aeb-4315-9af2-3e57c1933a45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f4d847988-7r26x_calico-system(b859708f-7aeb-4315-9af2-3e57c1933a45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7bb9104ea35822cb4836aebea77828cc0e86f30527d11380d2d846fa1335a00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f4d847988-7r26x" podUID="b859708f-7aeb-4315-9af2-3e57c1933a45" Sep 11 00:33:36.824500 systemd[1]: run-netns-cni\x2d7e33c2c3\x2da2b7\x2df9c1\x2d231e\x2d98a6ea241f49.mount: Deactivated successfully. 
Sep 11 00:33:36.824559 systemd[1]: run-netns-cni\x2d020628fc\x2d5f88\x2d144f\x2db719\x2d1bf50b987041.mount: Deactivated successfully. Sep 11 00:33:36.824595 systemd[1]: run-netns-cni\x2dbd6d5477\x2d633c\x2d1db6\x2d1103\x2d062199d16bc0.mount: Deactivated successfully. Sep 11 00:33:36.824628 systemd[1]: run-netns-cni\x2d6f93cfdd\x2da3ab\x2d55d6\x2d0fcb\x2d2122b86a5696.mount: Deactivated successfully. Sep 11 00:33:40.858641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055427330.mount: Deactivated successfully. Sep 11 00:33:41.101947 containerd[1623]: time="2025-09-11T00:33:41.085675774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:41.129486 containerd[1623]: time="2025-09-11T00:33:41.129439629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:33:41.135407 containerd[1623]: time="2025-09-11T00:33:41.135358928Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:41.186414 containerd[1623]: time="2025-09-11T00:33:41.186259177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:41.201059 containerd[1623]: time="2025-09-11T00:33:41.187533741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.653779863s" Sep 11 00:33:41.201059 containerd[1623]: time="2025-09-11T00:33:41.187558423Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:33:41.273202 containerd[1623]: time="2025-09-11T00:33:41.273169329Z" level=info msg="CreateContainer within sandbox \"b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:33:42.456640 containerd[1623]: time="2025-09-11T00:33:42.456429437Z" level=info msg="Container 2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:42.456447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102156864.mount: Deactivated successfully. Sep 11 00:33:42.576209 containerd[1623]: time="2025-09-11T00:33:42.576106730Z" level=info msg="CreateContainer within sandbox \"b0bf51018d381f9796d6356b221c411b0822f92ba5c5aa3be358f1258be644e0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\"" Sep 11 00:33:42.576622 containerd[1623]: time="2025-09-11T00:33:42.576524898Z" level=info msg="StartContainer for \"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\"" Sep 11 00:33:42.580666 containerd[1623]: time="2025-09-11T00:33:42.580632453Z" level=info msg="connecting to shim 2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121" address="unix:///run/containerd/s/e912685169d3c9ab099f0486fce8cc9aaa032ae84a458e08add5d3dbb3c59535" protocol=ttrpc version=3 Sep 11 00:33:42.774468 systemd[1]: Started cri-containerd-2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121.scope - libcontainer container 2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121. 
Sep 11 00:33:42.848746 containerd[1623]: time="2025-09-11T00:33:42.848680007Z" level=info msg="StartContainer for \"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\" returns successfully" Sep 11 00:33:43.569641 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:33:43.582152 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 11 00:33:43.759473 containerd[1623]: time="2025-09-11T00:33:43.759443427Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\" id:\"d4236da43eb4323af251db7621b8cb4c0a12212b8628bb0811668d6d499f5d3c\" pid:3969 exit_status:1 exited_at:{seconds:1757550823 nanos:758567663}" Sep 11 00:33:44.054445 kubelet[2941]: I0911 00:33:44.054401 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vv6pf" podStartSLOduration=4.668420125 podStartE2EDuration="21.054387259s" podCreationTimestamp="2025-09-11 00:33:23 +0000 UTC" firstStartedPulling="2025-09-11 00:33:24.802554327 +0000 UTC m=+18.566728557" lastFinishedPulling="2025-09-11 00:33:41.188521456 +0000 UTC m=+34.952695691" observedRunningTime="2025-09-11 00:33:43.610627931 +0000 UTC m=+37.374802171" watchObservedRunningTime="2025-09-11 00:33:44.054387259 +0000 UTC m=+37.818561488" Sep 11 00:33:44.126044 kubelet[2941]: I0911 00:33:44.125932 2941 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-ca-bundle\") pod \"da6f64ef-8d82-4d3d-b358-cbc72485920a\" (UID: \"da6f64ef-8d82-4d3d-b358-cbc72485920a\") " Sep 11 00:33:44.126044 kubelet[2941]: I0911 00:33:44.125979 2941 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc4bx\" (UniqueName: 
\"kubernetes.io/projected/da6f64ef-8d82-4d3d-b358-cbc72485920a-kube-api-access-lc4bx\") pod \"da6f64ef-8d82-4d3d-b358-cbc72485920a\" (UID: \"da6f64ef-8d82-4d3d-b358-cbc72485920a\") " Sep 11 00:33:44.126044 kubelet[2941]: I0911 00:33:44.126000 2941 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-backend-key-pair\") pod \"da6f64ef-8d82-4d3d-b358-cbc72485920a\" (UID: \"da6f64ef-8d82-4d3d-b358-cbc72485920a\") " Sep 11 00:33:44.140349 kubelet[2941]: I0911 00:33:44.138838 2941 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "da6f64ef-8d82-4d3d-b358-cbc72485920a" (UID: "da6f64ef-8d82-4d3d-b358-cbc72485920a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:33:44.146672 systemd[1]: var-lib-kubelet-pods-da6f64ef\x2d8d82\x2d4d3d\x2db358\x2dcbc72485920a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlc4bx.mount: Deactivated successfully. Sep 11 00:33:44.146740 systemd[1]: var-lib-kubelet-pods-da6f64ef\x2d8d82\x2d4d3d\x2db358\x2dcbc72485920a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 00:33:44.149119 kubelet[2941]: I0911 00:33:44.149094 2941 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "da6f64ef-8d82-4d3d-b358-cbc72485920a" (UID: "da6f64ef-8d82-4d3d-b358-cbc72485920a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:33:44.149933 kubelet[2941]: I0911 00:33:44.149905 2941 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6f64ef-8d82-4d3d-b358-cbc72485920a-kube-api-access-lc4bx" (OuterVolumeSpecName: "kube-api-access-lc4bx") pod "da6f64ef-8d82-4d3d-b358-cbc72485920a" (UID: "da6f64ef-8d82-4d3d-b358-cbc72485920a"). InnerVolumeSpecName "kube-api-access-lc4bx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:33:44.226944 kubelet[2941]: I0911 00:33:44.226903 2941 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:44.226944 kubelet[2941]: I0911 00:33:44.226924 2941 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da6f64ef-8d82-4d3d-b358-cbc72485920a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:44.226944 kubelet[2941]: I0911 00:33:44.226930 2941 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lc4bx\" (UniqueName: \"kubernetes.io/projected/da6f64ef-8d82-4d3d-b358-cbc72485920a-kube-api-access-lc4bx\") on node \"localhost\" DevicePath \"\"" Sep 11 00:33:44.396816 systemd[1]: Removed slice kubepods-besteffort-podda6f64ef_8d82_4d3d_b358_cbc72485920a.slice - libcontainer container kubepods-besteffort-podda6f64ef_8d82_4d3d_b358_cbc72485920a.slice. 
Sep 11 00:33:44.697739 containerd[1623]: time="2025-09-11T00:33:44.697662009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\" id:\"26a4540e2cb00955b61e9e462b4921d360c2581a1a632d15ab97dd1e12cb425b\" pid:4013 exit_status:1 exited_at:{seconds:1757550824 nanos:697479083}" Sep 11 00:33:44.763899 systemd[1]: Created slice kubepods-besteffort-pod0f4bbbd3_9a1d_42a5_9b97_1abfc18ce4b4.slice - libcontainer container kubepods-besteffort-pod0f4bbbd3_9a1d_42a5_9b97_1abfc18ce4b4.slice. Sep 11 00:33:44.830679 kubelet[2941]: I0911 00:33:44.830644 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4-whisker-ca-bundle\") pod \"whisker-86c864fb6-ztr4j\" (UID: \"0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4\") " pod="calico-system/whisker-86c864fb6-ztr4j" Sep 11 00:33:44.831076 kubelet[2941]: I0911 00:33:44.830799 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhd8l\" (UniqueName: \"kubernetes.io/projected/0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4-kube-api-access-mhd8l\") pod \"whisker-86c864fb6-ztr4j\" (UID: \"0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4\") " pod="calico-system/whisker-86c864fb6-ztr4j" Sep 11 00:33:44.831076 kubelet[2941]: I0911 00:33:44.830825 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4-whisker-backend-key-pair\") pod \"whisker-86c864fb6-ztr4j\" (UID: \"0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4\") " pod="calico-system/whisker-86c864fb6-ztr4j" Sep 11 00:33:45.067611 containerd[1623]: time="2025-09-11T00:33:45.067505744Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-86c864fb6-ztr4j,Uid:0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:45.799247 systemd-networkd[1520]: vxlan.calico: Link UP Sep 11 00:33:45.799896 systemd-networkd[1520]: vxlan.calico: Gained carrier Sep 11 00:33:45.832957 systemd-networkd[1520]: cali62f400abe80: Link UP Sep 11 00:33:45.833653 systemd-networkd[1520]: cali62f400abe80: Gained carrier Sep 11 00:33:45.869788 containerd[1623]: 2025-09-11 00:33:45.117 [INFO][4032] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:33:45.869788 containerd[1623]: 2025-09-11 00:33:45.189 [INFO][4032] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86c864fb6--ztr4j-eth0 whisker-86c864fb6- calico-system 0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4 872 0 2025-09-11 00:33:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86c864fb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86c864fb6-ztr4j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali62f400abe80 [] [] }} ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-" Sep 11 00:33:45.869788 containerd[1623]: 2025-09-11 00:33:45.189 [INFO][4032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" Sep 11 00:33:45.869788 containerd[1623]: 2025-09-11 00:33:45.590 [INFO][4125] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" 
HandleID="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Workload="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.593 [INFO][4125] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" HandleID="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Workload="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86c864fb6-ztr4j", "timestamp":"2025-09-11 00:33:45.590564688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.593 [INFO][4125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.594 [INFO][4125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.595 [INFO][4125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.665 [INFO][4125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" host="localhost" Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.757 [INFO][4125] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.760 [INFO][4125] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.762 [INFO][4125] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.763 [INFO][4125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:45.869980 containerd[1623]: 2025-09-11 00:33:45.763 [INFO][4125] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" host="localhost" Sep 11 00:33:45.870569 containerd[1623]: 2025-09-11 00:33:45.764 [INFO][4125] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3 Sep 11 00:33:45.870569 containerd[1623]: 2025-09-11 00:33:45.775 [INFO][4125] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" host="localhost" Sep 11 00:33:45.870569 containerd[1623]: 2025-09-11 00:33:45.781 [INFO][4125] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" host="localhost" Sep 11 00:33:45.870569 containerd[1623]: 2025-09-11 00:33:45.781 [INFO][4125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" host="localhost" Sep 11 00:33:45.870569 containerd[1623]: 2025-09-11 00:33:45.781 [INFO][4125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:45.870569 containerd[1623]: 2025-09-11 00:33:45.781 [INFO][4125] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" HandleID="k8s-pod-network.70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Workload="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" Sep 11 00:33:45.870677 containerd[1623]: 2025-09-11 00:33:45.785 [INFO][4032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86c864fb6--ztr4j-eth0", GenerateName:"whisker-86c864fb6-", Namespace:"calico-system", SelfLink:"", UID:"0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c864fb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86c864fb6-ztr4j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62f400abe80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:45.870677 containerd[1623]: 2025-09-11 00:33:45.786 [INFO][4032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" Sep 11 00:33:45.870736 containerd[1623]: 2025-09-11 00:33:45.786 [INFO][4032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62f400abe80 ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" Sep 11 00:33:45.870736 containerd[1623]: 2025-09-11 00:33:45.843 [INFO][4032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" Sep 11 00:33:45.870771 containerd[1623]: 2025-09-11 00:33:45.844 [INFO][4032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" 
WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86c864fb6--ztr4j-eth0", GenerateName:"whisker-86c864fb6-", Namespace:"calico-system", SelfLink:"", UID:"0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c864fb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3", Pod:"whisker-86c864fb6-ztr4j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali62f400abe80", MAC:"12:84:1f:be:4d:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:45.870813 containerd[1623]: 2025-09-11 00:33:45.864 [INFO][4032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" Namespace="calico-system" Pod="whisker-86c864fb6-ztr4j" WorkloadEndpoint="localhost-k8s-whisker--86c864fb6--ztr4j-eth0" Sep 11 00:33:46.156628 containerd[1623]: time="2025-09-11T00:33:46.156594382Z" level=info msg="connecting to shim 
70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3" address="unix:///run/containerd/s/98deacd55aba7e2dd4648055775007884c4d34e2ed3f5da792b66f6cfb453aa3" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:46.191416 systemd[1]: Started cri-containerd-70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3.scope - libcontainer container 70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3. Sep 11 00:33:46.205347 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:46.235464 containerd[1623]: time="2025-09-11T00:33:46.235426145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c864fb6-ztr4j,Uid:0f4bbbd3-9a1d-42a5-9b97-1abfc18ce4b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3\"" Sep 11 00:33:46.237355 containerd[1623]: time="2025-09-11T00:33:46.237341606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:33:46.397728 containerd[1623]: time="2025-09-11T00:33:46.397429891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-qgw7r,Uid:775b899a-158c-4a61-bc04-563957a6f436,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:33:46.398106 kubelet[2941]: I0911 00:33:46.397563 2941 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6f64ef-8d82-4d3d-b358-cbc72485920a" path="/var/lib/kubelet/pods/da6f64ef-8d82-4d3d-b358-cbc72485920a/volumes" Sep 11 00:33:46.475067 systemd-networkd[1520]: caliad7477814d1: Link UP Sep 11 00:33:46.475712 systemd-networkd[1520]: caliad7477814d1: Gained carrier Sep 11 00:33:46.487139 containerd[1623]: 2025-09-11 00:33:46.424 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0 calico-apiserver-6b5fbfb79d- calico-apiserver 
775b899a-158c-4a61-bc04-563957a6f436 798 0 2025-09-11 00:33:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5fbfb79d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5fbfb79d-qgw7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad7477814d1 [] [] }} ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-" Sep 11 00:33:46.487139 containerd[1623]: 2025-09-11 00:33:46.424 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" Sep 11 00:33:46.487139 containerd[1623]: 2025-09-11 00:33:46.448 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" HandleID="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Workload="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.448 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" HandleID="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Workload="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-6b5fbfb79d-qgw7r", "timestamp":"2025-09-11 00:33:46.448093965 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.448 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.448 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.448 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.454 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" host="localhost" Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.457 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.460 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.461 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.463 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:46.487353 containerd[1623]: 2025-09-11 00:33:46.463 [INFO][4314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" host="localhost" Sep 11 00:33:46.487812 containerd[1623]: 2025-09-11 00:33:46.464 [INFO][4314] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4 Sep 11 00:33:46.487812 containerd[1623]: 2025-09-11 00:33:46.466 [INFO][4314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" host="localhost" Sep 11 00:33:46.487812 containerd[1623]: 2025-09-11 00:33:46.470 [INFO][4314] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" host="localhost" Sep 11 00:33:46.487812 containerd[1623]: 2025-09-11 00:33:46.470 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" host="localhost" Sep 11 00:33:46.487812 containerd[1623]: 2025-09-11 00:33:46.470 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:33:46.487812 containerd[1623]: 2025-09-11 00:33:46.470 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" HandleID="k8s-pod-network.f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Workload="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" Sep 11 00:33:46.487955 containerd[1623]: 2025-09-11 00:33:46.472 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0", GenerateName:"calico-apiserver-6b5fbfb79d-", Namespace:"calico-apiserver", SelfLink:"", UID:"775b899a-158c-4a61-bc04-563957a6f436", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5fbfb79d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5fbfb79d-qgw7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad7477814d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:46.488013 containerd[1623]: 2025-09-11 00:33:46.472 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" Sep 11 00:33:46.488013 containerd[1623]: 2025-09-11 00:33:46.472 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad7477814d1 ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" Sep 11 00:33:46.488013 containerd[1623]: 2025-09-11 00:33:46.476 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" Sep 11 00:33:46.488079 containerd[1623]: 2025-09-11 00:33:46.476 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0", 
GenerateName:"calico-apiserver-6b5fbfb79d-", Namespace:"calico-apiserver", SelfLink:"", UID:"775b899a-158c-4a61-bc04-563957a6f436", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5fbfb79d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4", Pod:"calico-apiserver-6b5fbfb79d-qgw7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad7477814d1", MAC:"da:f8:2b:f9:12:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:46.488136 containerd[1623]: 2025-09-11 00:33:46.484 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-qgw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--qgw7r-eth0" Sep 11 00:33:46.508912 containerd[1623]: time="2025-09-11T00:33:46.508883038Z" level=info msg="connecting to shim f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4" 
address="unix:///run/containerd/s/68c6c02fa72f6f30e5592f1cfbc137adb768ed4ae9a92163d94b7e57fdb24645" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:46.535406 systemd[1]: Started cri-containerd-f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4.scope - libcontainer container f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4. Sep 11 00:33:46.545028 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:46.571584 containerd[1623]: time="2025-09-11T00:33:46.571556940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-qgw7r,Uid:775b899a-158c-4a61-bc04-563957a6f436,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4\"" Sep 11 00:33:47.407473 systemd-networkd[1520]: vxlan.calico: Gained IPv6LL Sep 11 00:33:47.599639 systemd-networkd[1520]: cali62f400abe80: Gained IPv6LL Sep 11 00:33:47.659882 containerd[1623]: time="2025-09-11T00:33:47.659769499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:47.661379 containerd[1623]: time="2025-09-11T00:33:47.660153629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:33:47.661379 containerd[1623]: time="2025-09-11T00:33:47.660607263Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:47.662002 containerd[1623]: time="2025-09-11T00:33:47.661990734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:47.662769 containerd[1623]: 
time="2025-09-11T00:33:47.662756287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.42539711s" Sep 11 00:33:47.662837 containerd[1623]: time="2025-09-11T00:33:47.662828036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:33:47.663917 containerd[1623]: time="2025-09-11T00:33:47.663908373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:33:47.667070 containerd[1623]: time="2025-09-11T00:33:47.667053221Z" level=info msg="CreateContainer within sandbox \"70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:33:47.676383 containerd[1623]: time="2025-09-11T00:33:47.676362482Z" level=info msg="Container 7ccac1b206afc99a1e44c85cecc020626e0840aa71b79de04ecc06ed67576bbd: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:47.688106 containerd[1623]: time="2025-09-11T00:33:47.688084130Z" level=info msg="CreateContainer within sandbox \"70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7ccac1b206afc99a1e44c85cecc020626e0840aa71b79de04ecc06ed67576bbd\"" Sep 11 00:33:47.688666 containerd[1623]: time="2025-09-11T00:33:47.688651816Z" level=info msg="StartContainer for \"7ccac1b206afc99a1e44c85cecc020626e0840aa71b79de04ecc06ed67576bbd\"" Sep 11 00:33:47.689396 containerd[1623]: time="2025-09-11T00:33:47.689380256Z" level=info msg="connecting to shim 7ccac1b206afc99a1e44c85cecc020626e0840aa71b79de04ecc06ed67576bbd" 
address="unix:///run/containerd/s/98deacd55aba7e2dd4648055775007884c4d34e2ed3f5da792b66f6cfb453aa3" protocol=ttrpc version=3 Sep 11 00:33:47.706380 systemd[1]: Started cri-containerd-7ccac1b206afc99a1e44c85cecc020626e0840aa71b79de04ecc06ed67576bbd.scope - libcontainer container 7ccac1b206afc99a1e44c85cecc020626e0840aa71b79de04ecc06ed67576bbd. Sep 11 00:33:47.742953 containerd[1623]: time="2025-09-11T00:33:47.742869457Z" level=info msg="StartContainer for \"7ccac1b206afc99a1e44c85cecc020626e0840aa71b79de04ecc06ed67576bbd\" returns successfully" Sep 11 00:33:47.919486 systemd-networkd[1520]: caliad7477814d1: Gained IPv6LL Sep 11 00:33:48.393105 containerd[1623]: time="2025-09-11T00:33:48.393042496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5r45,Uid:b497c000-3842-4419-a5a7-413b5b0d4274,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:48.470095 systemd-networkd[1520]: cali0cfbc61429c: Link UP Sep 11 00:33:48.471055 systemd-networkd[1520]: cali0cfbc61429c: Gained carrier Sep 11 00:33:48.487358 containerd[1623]: 2025-09-11 00:33:48.427 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--q5r45-eth0 coredns-668d6bf9bc- kube-system b497c000-3842-4419-a5a7-413b5b0d4274 793 0 2025-09-11 00:33:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-q5r45 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0cfbc61429c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-" Sep 11 00:33:48.487358 containerd[1623]: 2025-09-11 00:33:48.428 [INFO][4419] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" Sep 11 00:33:48.487358 containerd[1623]: 2025-09-11 00:33:48.449 [INFO][4431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" HandleID="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Workload="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.449 [INFO][4431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" HandleID="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Workload="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-q5r45", "timestamp":"2025-09-11 00:33:48.449164114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.449 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.449 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.449 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.452 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" host="localhost" Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.455 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.457 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.458 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.459 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:48.487538 containerd[1623]: 2025-09-11 00:33:48.459 [INFO][4431] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" host="localhost" Sep 11 00:33:48.488278 containerd[1623]: 2025-09-11 00:33:48.459 [INFO][4431] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8 Sep 11 00:33:48.488278 containerd[1623]: 2025-09-11 00:33:48.463 [INFO][4431] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" host="localhost" Sep 11 00:33:48.488278 containerd[1623]: 2025-09-11 00:33:48.466 [INFO][4431] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" host="localhost" Sep 11 00:33:48.488278 containerd[1623]: 2025-09-11 00:33:48.466 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" host="localhost" Sep 11 00:33:48.488278 containerd[1623]: 2025-09-11 00:33:48.466 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:48.488278 containerd[1623]: 2025-09-11 00:33:48.466 [INFO][4431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" HandleID="k8s-pod-network.f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Workload="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" Sep 11 00:33:48.488438 containerd[1623]: 2025-09-11 00:33:48.467 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q5r45-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b497c000-3842-4419-a5a7-413b5b0d4274", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-q5r45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cfbc61429c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:48.488604 containerd[1623]: 2025-09-11 00:33:48.468 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" Sep 11 00:33:48.488604 containerd[1623]: 2025-09-11 00:33:48.468 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cfbc61429c ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" Sep 11 00:33:48.488604 containerd[1623]: 2025-09-11 00:33:48.470 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" Sep 11 00:33:48.488675 containerd[1623]: 2025-09-11 00:33:48.474 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--q5r45-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b497c000-3842-4419-a5a7-413b5b0d4274", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8", Pod:"coredns-668d6bf9bc-q5r45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cfbc61429c", MAC:"8a:1f:ce:d7:25:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:48.488675 containerd[1623]: 2025-09-11 00:33:48.484 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-q5r45" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--q5r45-eth0" Sep 11 00:33:48.502696 containerd[1623]: time="2025-09-11T00:33:48.502666877Z" level=info msg="connecting to shim f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8" address="unix:///run/containerd/s/e5de08d7662275418955b5dfe9a8b9a8503cdfb18b36073a91555b686f8a06ee" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:48.524438 systemd[1]: Started cri-containerd-f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8.scope - libcontainer container f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8. 
Sep 11 00:33:48.532997 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:48.562670 containerd[1623]: time="2025-09-11T00:33:48.562644560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q5r45,Uid:b497c000-3842-4419-a5a7-413b5b0d4274,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8\"" Sep 11 00:33:48.582410 containerd[1623]: time="2025-09-11T00:33:48.581441124Z" level=info msg="CreateContainer within sandbox \"f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:33:48.595930 containerd[1623]: time="2025-09-11T00:33:48.595903929Z" level=info msg="Container 98e1f00bb9ed2c5a87ddda188190f26bc0e6406c24f07b86b4e774df32cdd95c: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:48.598013 containerd[1623]: time="2025-09-11T00:33:48.597995575Z" level=info msg="CreateContainer within sandbox \"f8657741d55ad0c35b6a4d509d7f9c505b284bd50f398c95a2b4b20242c9d6e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98e1f00bb9ed2c5a87ddda188190f26bc0e6406c24f07b86b4e774df32cdd95c\"" Sep 11 00:33:48.598403 containerd[1623]: time="2025-09-11T00:33:48.598392815Z" level=info msg="StartContainer for \"98e1f00bb9ed2c5a87ddda188190f26bc0e6406c24f07b86b4e774df32cdd95c\"" Sep 11 00:33:48.598941 containerd[1623]: time="2025-09-11T00:33:48.598929571Z" level=info msg="connecting to shim 98e1f00bb9ed2c5a87ddda188190f26bc0e6406c24f07b86b4e774df32cdd95c" address="unix:///run/containerd/s/e5de08d7662275418955b5dfe9a8b9a8503cdfb18b36073a91555b686f8a06ee" protocol=ttrpc version=3 Sep 11 00:33:48.612384 systemd[1]: Started cri-containerd-98e1f00bb9ed2c5a87ddda188190f26bc0e6406c24f07b86b4e774df32cdd95c.scope - libcontainer container 98e1f00bb9ed2c5a87ddda188190f26bc0e6406c24f07b86b4e774df32cdd95c. 
Sep 11 00:33:48.633953 containerd[1623]: time="2025-09-11T00:33:48.633928307Z" level=info msg="StartContainer for \"98e1f00bb9ed2c5a87ddda188190f26bc0e6406c24f07b86b4e774df32cdd95c\" returns successfully" Sep 11 00:33:49.393685 containerd[1623]: time="2025-09-11T00:33:49.393657179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-52kp4,Uid:ab50939c-42a6-4b91-8b1f-2906b7091d42,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:49.394524 containerd[1623]: time="2025-09-11T00:33:49.393666852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zknv,Uid:9e09e6c7-d57e-4653-9635-85c8bb6db9f7,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:49.576564 systemd-networkd[1520]: calia9e3e6f6bed: Link UP Sep 11 00:33:49.577068 systemd-networkd[1520]: calia9e3e6f6bed: Gained carrier Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.466 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--52kp4-eth0 goldmane-54d579b49d- calico-system ab50939c-42a6-4b91-8b1f-2906b7091d42 800 0 2025-09-11 00:33:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-52kp4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia9e3e6f6bed [] [] }} ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.467 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.501 [INFO][4551] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" HandleID="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Workload="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.502 [INFO][4551] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" HandleID="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Workload="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-52kp4", "timestamp":"2025-09-11 00:33:49.501108709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.502 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.502 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.502 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.510 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.520 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.532 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.537 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.540 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.540 [INFO][4551] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.543 [INFO][4551] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.551 [INFO][4551] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.560 [INFO][4551] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.561 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" host="localhost" Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.562 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:49.589287 containerd[1623]: 2025-09-11 00:33:49.562 [INFO][4551] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" HandleID="k8s-pod-network.b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Workload="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" Sep 11 00:33:49.595416 containerd[1623]: 2025-09-11 00:33:49.567 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--52kp4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ab50939c-42a6-4b91-8b1f-2906b7091d42", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-52kp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9e3e6f6bed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:49.595416 containerd[1623]: 2025-09-11 00:33:49.567 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" Sep 11 00:33:49.595416 containerd[1623]: 2025-09-11 00:33:49.568 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9e3e6f6bed ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" Sep 11 00:33:49.595416 containerd[1623]: 2025-09-11 00:33:49.576 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" Sep 11 00:33:49.595416 containerd[1623]: 2025-09-11 00:33:49.577 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--52kp4-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ab50939c-42a6-4b91-8b1f-2906b7091d42", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba", Pod:"goldmane-54d579b49d-52kp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9e3e6f6bed", MAC:"66:c3:de:26:9b:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:49.595416 containerd[1623]: 2025-09-11 00:33:49.586 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" Namespace="calico-system" Pod="goldmane-54d579b49d-52kp4" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--52kp4-eth0" Sep 11 00:33:49.621248 containerd[1623]: time="2025-09-11T00:33:49.621219289Z" level=info msg="connecting to shim 
b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba" address="unix:///run/containerd/s/111fc7056028fb8b3528a1408d8eaed71e0bde4074c19104cfdbb594b9347f8c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:49.663623 systemd[1]: Started cri-containerd-b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba.scope - libcontainer container b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba. Sep 11 00:33:49.676432 systemd-networkd[1520]: cali312b2f079f8: Link UP Sep 11 00:33:49.677571 systemd-networkd[1520]: cali312b2f079f8: Gained carrier Sep 11 00:33:49.690025 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.461 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5zknv-eth0 csi-node-driver- calico-system 9e09e6c7-d57e-4653-9635-85c8bb6db9f7 672 0 2025-09-11 00:33:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5zknv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali312b2f079f8 [] [] }} ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Namespace="calico-system" Pod="csi-node-driver-5zknv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.462 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Namespace="calico-system" Pod="csi-node-driver-5zknv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-eth0" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.507 [INFO][4553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" HandleID="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Workload="localhost-k8s-csi--node--driver--5zknv-eth0" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.508 [INFO][4553] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" HandleID="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Workload="localhost-k8s-csi--node--driver--5zknv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5zknv", "timestamp":"2025-09-11 00:33:49.507979805 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.508 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.561 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.562 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.611 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.623 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.642 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.644 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.647 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.647 [INFO][4553] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.649 [INFO][4553] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.656 [INFO][4553] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.666 [INFO][4553] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.666 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" host="localhost" Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.666 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:49.704314 containerd[1623]: 2025-09-11 00:33:49.666 [INFO][4553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" HandleID="k8s-pod-network.17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Workload="localhost-k8s-csi--node--driver--5zknv-eth0" Sep 11 00:33:49.704740 containerd[1623]: 2025-09-11 00:33:49.669 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Namespace="calico-system" Pod="csi-node-driver-5zknv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5zknv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e09e6c7-d57e-4653-9635-85c8bb6db9f7", ResourceVersion:"672", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5zknv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali312b2f079f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:49.704740 containerd[1623]: 2025-09-11 00:33:49.669 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Namespace="calico-system" Pod="csi-node-driver-5zknv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-eth0" Sep 11 00:33:49.704740 containerd[1623]: 2025-09-11 00:33:49.669 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali312b2f079f8 ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Namespace="calico-system" Pod="csi-node-driver-5zknv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-eth0" Sep 11 00:33:49.704740 containerd[1623]: 2025-09-11 00:33:49.676 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Namespace="calico-system" Pod="csi-node-driver-5zknv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-eth0" Sep 11 00:33:49.704740 containerd[1623]: 2025-09-11 00:33:49.679 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" 
Namespace="calico-system" Pod="csi-node-driver-5zknv" WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5zknv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9e09e6c7-d57e-4653-9635-85c8bb6db9f7", ResourceVersion:"672", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b", Pod:"csi-node-driver-5zknv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali312b2f079f8", MAC:"92:bb:cb:6b:b0:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:49.704740 containerd[1623]: 2025-09-11 00:33:49.697 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" Namespace="calico-system" Pod="csi-node-driver-5zknv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5zknv-eth0" Sep 11 00:33:49.749102 containerd[1623]: time="2025-09-11T00:33:49.749033866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-52kp4,Uid:ab50939c-42a6-4b91-8b1f-2906b7091d42,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba\"" Sep 11 00:33:49.786557 containerd[1623]: time="2025-09-11T00:33:49.786437961Z" level=info msg="connecting to shim 17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b" address="unix:///run/containerd/s/49d7fb3e7523640f2e9b193c519998453de3d3b54403a78274cd3584e0162868" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:49.814539 systemd[1]: Started cri-containerd-17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b.scope - libcontainer container 17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b. Sep 11 00:33:49.839110 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:49.890450 containerd[1623]: time="2025-09-11T00:33:49.890419184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5zknv,Uid:9e09e6c7-d57e-4653-9635-85c8bb6db9f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b\"" Sep 11 00:33:50.100230 kubelet[2941]: I0911 00:33:50.052362 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q5r45" podStartSLOduration=37.871606927 podStartE2EDuration="37.871606927s" podCreationTimestamp="2025-09-11 00:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:49.845948969 +0000 UTC m=+43.610123202" watchObservedRunningTime="2025-09-11 00:33:49.871606927 +0000 UTC m=+43.635781162" Sep 11 00:33:50.159408 
systemd-networkd[1520]: cali0cfbc61429c: Gained IPv6LL Sep 11 00:33:50.393604 containerd[1623]: time="2025-09-11T00:33:50.393442349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-cz76d,Uid:1d784e97-b879-49d9-bd94-fd0284ab6cc2,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:33:50.393604 containerd[1623]: time="2025-09-11T00:33:50.393551281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rpfgw,Uid:282222e7-7607-4bf7-8990-39b458addfca,Namespace:kube-system,Attempt:0,}" Sep 11 00:33:50.863375 systemd-networkd[1520]: calia9e3e6f6bed: Gained IPv6LL Sep 11 00:33:50.927384 systemd-networkd[1520]: cali312b2f079f8: Gained IPv6LL Sep 11 00:33:51.392272 containerd[1623]: time="2025-09-11T00:33:51.392245110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4d847988-7r26x,Uid:b859708f-7aeb-4315-9af2-3e57c1933a45,Namespace:calico-system,Attempt:0,}" Sep 11 00:33:51.439744 containerd[1623]: time="2025-09-11T00:33:51.439551534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:51.441489 containerd[1623]: time="2025-09-11T00:33:51.441472298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 11 00:33:51.442253 containerd[1623]: time="2025-09-11T00:33:51.441794566Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:51.444382 containerd[1623]: time="2025-09-11T00:33:51.444366742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:51.446325 containerd[1623]: time="2025-09-11T00:33:51.446290713Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.782146223s" Sep 11 00:33:51.447174 containerd[1623]: time="2025-09-11T00:33:51.447149024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:33:51.450390 containerd[1623]: time="2025-09-11T00:33:51.450282971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:33:51.453463 containerd[1623]: time="2025-09-11T00:33:51.453441537Z" level=info msg="CreateContainer within sandbox \"f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:33:51.466633 containerd[1623]: time="2025-09-11T00:33:51.466605815Z" level=info msg="Container 6bfc5c4bc30ba4cb43fc718b1cdfd22c4489fa90712215a3c68f3dc7c9984738: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:51.492116 containerd[1623]: time="2025-09-11T00:33:51.492085348Z" level=info msg="CreateContainer within sandbox \"f5ee2a789cbd37358dca00ef8259413a31c940329b4cc122861e2c9aafc41dd4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6bfc5c4bc30ba4cb43fc718b1cdfd22c4489fa90712215a3c68f3dc7c9984738\"" Sep 11 00:33:51.492649 containerd[1623]: time="2025-09-11T00:33:51.492631170Z" level=info msg="StartContainer for \"6bfc5c4bc30ba4cb43fc718b1cdfd22c4489fa90712215a3c68f3dc7c9984738\"" Sep 11 00:33:51.495260 containerd[1623]: time="2025-09-11T00:33:51.495157398Z" level=info msg="connecting to shim 6bfc5c4bc30ba4cb43fc718b1cdfd22c4489fa90712215a3c68f3dc7c9984738" 
address="unix:///run/containerd/s/68c6c02fa72f6f30e5592f1cfbc137adb768ed4ae9a92163d94b7e57fdb24645" protocol=ttrpc version=3 Sep 11 00:33:51.553058 systemd-networkd[1520]: cali8ffc2bc6814: Link UP Sep 11 00:33:51.554576 systemd-networkd[1520]: cali8ffc2bc6814: Gained carrier Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.440 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0 coredns-668d6bf9bc- kube-system 282222e7-7607-4bf7-8990-39b458addfca 788 0 2025-09-11 00:33:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-rpfgw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ffc2bc6814 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.440 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.502 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" HandleID="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Workload="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.503 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" HandleID="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Workload="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-rpfgw", "timestamp":"2025-09-11 00:33:51.502643731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.503 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.503 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.503 [INFO][4730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.515 [INFO][4730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.521 [INFO][4730] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.529 [INFO][4730] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.531 [INFO][4730] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.534 [INFO][4730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.534 [INFO][4730] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.536 [INFO][4730] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24 Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.540 [INFO][4730] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.545 [INFO][4730] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.546 [INFO][4730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" host="localhost" Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.546 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:33:51.575976 containerd[1623]: 2025-09-11 00:33:51.546 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" HandleID="k8s-pod-network.c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Workload="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" Sep 11 00:33:51.576908 containerd[1623]: 2025-09-11 00:33:51.550 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"282222e7-7607-4bf7-8990-39b458addfca", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-rpfgw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ffc2bc6814", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.576908 containerd[1623]: 2025-09-11 00:33:51.550 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" Sep 11 00:33:51.576908 containerd[1623]: 2025-09-11 00:33:51.550 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ffc2bc6814 ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" Sep 11 00:33:51.576908 containerd[1623]: 2025-09-11 00:33:51.555 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" Sep 11 00:33:51.576908 containerd[1623]: 2025-09-11 00:33:51.556 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"282222e7-7607-4bf7-8990-39b458addfca", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24", Pod:"coredns-668d6bf9bc-rpfgw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ffc2bc6814", MAC:"d2:d1:f1:94:79:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.576908 containerd[1623]: 2025-09-11 00:33:51.567 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" Namespace="kube-system" Pod="coredns-668d6bf9bc-rpfgw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--rpfgw-eth0" Sep 11 00:33:51.582177 systemd[1]: Started cri-containerd-6bfc5c4bc30ba4cb43fc718b1cdfd22c4489fa90712215a3c68f3dc7c9984738.scope - libcontainer container 6bfc5c4bc30ba4cb43fc718b1cdfd22c4489fa90712215a3c68f3dc7c9984738. Sep 11 00:33:51.612356 containerd[1623]: time="2025-09-11T00:33:51.612329608Z" level=info msg="connecting to shim c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24" address="unix:///run/containerd/s/49e7c709919c65f7da47558804403ec37f7fe3cbee1b9e798115a80b9e70eec9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:51.633463 systemd[1]: Started cri-containerd-c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24.scope - libcontainer container c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24. Sep 11 00:33:51.649344 systemd-networkd[1520]: calif927f03d361: Link UP Sep 11 00:33:51.649678 systemd-networkd[1520]: calif927f03d361: Gained carrier Sep 11 00:33:51.661559 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.463 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0 calico-apiserver-6b5fbfb79d- calico-apiserver 1d784e97-b879-49d9-bd94-fd0284ab6cc2 799 0 2025-09-11 00:33:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5fbfb79d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5fbfb79d-cz76d eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calif927f03d361 [] [] }} ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.464 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.508 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" HandleID="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Workload="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.509 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" HandleID="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Workload="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b5fbfb79d-cz76d", "timestamp":"2025-09-11 00:33:51.508283072 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.509 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.547 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.547 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.615 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.622 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.627 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.629 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.631 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.631 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.632 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6 Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.636 [INFO][4739] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.642 [INFO][4739] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.642 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" host="localhost" Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.642 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:51.669209 containerd[1623]: 2025-09-11 00:33:51.642 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" HandleID="k8s-pod-network.b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Workload="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" Sep 11 00:33:51.669999 containerd[1623]: 2025-09-11 00:33:51.646 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0", GenerateName:"calico-apiserver-6b5fbfb79d-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d784e97-b879-49d9-bd94-fd0284ab6cc2", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6b5fbfb79d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5fbfb79d-cz76d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif927f03d361", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.669999 containerd[1623]: 2025-09-11 00:33:51.646 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" Sep 11 00:33:51.669999 containerd[1623]: 2025-09-11 00:33:51.646 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif927f03d361 ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" Sep 11 00:33:51.669999 containerd[1623]: 2025-09-11 00:33:51.650 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" Sep 11 00:33:51.669999 
containerd[1623]: 2025-09-11 00:33:51.652 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0", GenerateName:"calico-apiserver-6b5fbfb79d-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d784e97-b879-49d9-bd94-fd0284ab6cc2", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5fbfb79d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6", Pod:"calico-apiserver-6b5fbfb79d-cz76d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif927f03d361", MAC:"6e:1b:20:e8:ba:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.669999 containerd[1623]: 2025-09-11 
00:33:51.665 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" Namespace="calico-apiserver" Pod="calico-apiserver-6b5fbfb79d-cz76d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5fbfb79d--cz76d-eth0" Sep 11 00:33:51.675749 containerd[1623]: time="2025-09-11T00:33:51.675727233Z" level=info msg="StartContainer for \"6bfc5c4bc30ba4cb43fc718b1cdfd22c4489fa90712215a3c68f3dc7c9984738\" returns successfully" Sep 11 00:33:51.698822 containerd[1623]: time="2025-09-11T00:33:51.698787397Z" level=info msg="connecting to shim b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6" address="unix:///run/containerd/s/36134ee09ce3db4a851816935248f9f7f464edf6ce0c686d1a78025170b7627a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:51.707425 containerd[1623]: time="2025-09-11T00:33:51.707404394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rpfgw,Uid:282222e7-7607-4bf7-8990-39b458addfca,Namespace:kube-system,Attempt:0,} returns sandbox id \"c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24\"" Sep 11 00:33:51.710368 containerd[1623]: time="2025-09-11T00:33:51.710346736Z" level=info msg="CreateContainer within sandbox \"c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:33:51.729124 systemd[1]: Started cri-containerd-b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6.scope - libcontainer container b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6. 
Sep 11 00:33:51.731836 containerd[1623]: time="2025-09-11T00:33:51.731811845Z" level=info msg="Container f6c4444c3aadb9fa3cbadc1e06c472b7efc436b98030816210af304c1cbf79da: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:51.757108 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:51.761683 containerd[1623]: time="2025-09-11T00:33:51.761659737Z" level=info msg="CreateContainer within sandbox \"c37d8d52b535f6c5841d3cc97378973a42cf33c31cda01faf242208297a0ca24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6c4444c3aadb9fa3cbadc1e06c472b7efc436b98030816210af304c1cbf79da\"" Sep 11 00:33:51.764802 containerd[1623]: time="2025-09-11T00:33:51.764290715Z" level=info msg="StartContainer for \"f6c4444c3aadb9fa3cbadc1e06c472b7efc436b98030816210af304c1cbf79da\"" Sep 11 00:33:51.766016 systemd-networkd[1520]: cali1eb2b44cac4: Link UP Sep 11 00:33:51.766139 systemd-networkd[1520]: cali1eb2b44cac4: Gained carrier Sep 11 00:33:51.767145 containerd[1623]: time="2025-09-11T00:33:51.767126284Z" level=info msg="connecting to shim f6c4444c3aadb9fa3cbadc1e06c472b7efc436b98030816210af304c1cbf79da" address="unix:///run/containerd/s/49e7c709919c65f7da47558804403ec37f7fe3cbee1b9e798115a80b9e70eec9" protocol=ttrpc version=3 Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.479 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0 calico-kube-controllers-6f4d847988- calico-system b859708f-7aeb-4315-9af2-3e57c1933a45 796 0 2025-09-11 00:33:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f4d847988 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-6f4d847988-7r26x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1eb2b44cac4 [] [] }} ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.479 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.533 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" HandleID="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Workload="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.533 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" HandleID="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Workload="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfa70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f4d847988-7r26x", "timestamp":"2025-09-11 00:33:51.53374739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.534 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.642 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.642 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.715 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.723 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.731 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.732 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.739 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.739 [INFO][4744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.742 [INFO][4744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30 Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.750 [INFO][4744] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.756 [INFO][4744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.756 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" host="localhost" Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.756 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:33:51.786460 containerd[1623]: 2025-09-11 00:33:51.756 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" HandleID="k8s-pod-network.6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Workload="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" Sep 11 00:33:51.788815 containerd[1623]: 2025-09-11 00:33:51.760 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0", GenerateName:"calico-kube-controllers-6f4d847988-", Namespace:"calico-system", SelfLink:"", UID:"b859708f-7aeb-4315-9af2-3e57c1933a45", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 24, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f4d847988", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f4d847988-7r26x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eb2b44cac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.788815 containerd[1623]: 2025-09-11 00:33:51.760 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" Sep 11 00:33:51.788815 containerd[1623]: 2025-09-11 00:33:51.760 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eb2b44cac4 ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" Sep 11 00:33:51.788815 containerd[1623]: 2025-09-11 00:33:51.767 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" Sep 11 00:33:51.788815 containerd[1623]: 2025-09-11 00:33:51.767 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0", GenerateName:"calico-kube-controllers-6f4d847988-", Namespace:"calico-system", SelfLink:"", UID:"b859708f-7aeb-4315-9af2-3e57c1933a45", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 33, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f4d847988", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30", Pod:"calico-kube-controllers-6f4d847988-7r26x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1eb2b44cac4", MAC:"4e:ce:61:dc:b6:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:33:51.788815 containerd[1623]: 2025-09-11 00:33:51.781 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" Namespace="calico-system" Pod="calico-kube-controllers-6f4d847988-7r26x" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f4d847988--7r26x-eth0" Sep 11 00:33:51.807370 systemd[1]: Started cri-containerd-f6c4444c3aadb9fa3cbadc1e06c472b7efc436b98030816210af304c1cbf79da.scope - libcontainer container f6c4444c3aadb9fa3cbadc1e06c472b7efc436b98030816210af304c1cbf79da. Sep 11 00:33:51.813799 containerd[1623]: time="2025-09-11T00:33:51.813676821Z" level=info msg="connecting to shim 6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30" address="unix:///run/containerd/s/8ff4edbfd7035b80527c579f282808de5201d5688105128958c101369ecbc0df" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:33:51.835993 containerd[1623]: time="2025-09-11T00:33:51.835857265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5fbfb79d-cz76d,Uid:1d784e97-b879-49d9-bd94-fd0284ab6cc2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6\"" Sep 11 00:33:51.842646 containerd[1623]: time="2025-09-11T00:33:51.842604300Z" level=info msg="CreateContainer within sandbox \"b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:33:51.848794 containerd[1623]: time="2025-09-11T00:33:51.848704319Z" level=info msg="Container dadef1ef4eabb8104ea1b6c42691c07f2b99de06bc93aaf4c7e7f58acfd0d5c4: CDI devices from CRI Config.CDIDevices: []" Sep 11 
00:33:51.862559 systemd[1]: Started cri-containerd-6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30.scope - libcontainer container 6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30. Sep 11 00:33:51.862977 containerd[1623]: time="2025-09-11T00:33:51.862837183Z" level=info msg="StartContainer for \"f6c4444c3aadb9fa3cbadc1e06c472b7efc436b98030816210af304c1cbf79da\" returns successfully" Sep 11 00:33:51.864945 containerd[1623]: time="2025-09-11T00:33:51.864925416Z" level=info msg="CreateContainer within sandbox \"b8f69968fc837a1e17e4dece3d0fa58494b420ce8d0d3a3567eaa4a3db797da6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dadef1ef4eabb8104ea1b6c42691c07f2b99de06bc93aaf4c7e7f58acfd0d5c4\"" Sep 11 00:33:51.867323 containerd[1623]: time="2025-09-11T00:33:51.867120088Z" level=info msg="StartContainer for \"dadef1ef4eabb8104ea1b6c42691c07f2b99de06bc93aaf4c7e7f58acfd0d5c4\"" Sep 11 00:33:51.868536 containerd[1623]: time="2025-09-11T00:33:51.868520816Z" level=info msg="connecting to shim dadef1ef4eabb8104ea1b6c42691c07f2b99de06bc93aaf4c7e7f58acfd0d5c4" address="unix:///run/containerd/s/36134ee09ce3db4a851816935248f9f7f464edf6ce0c686d1a78025170b7627a" protocol=ttrpc version=3 Sep 11 00:33:51.883531 systemd[1]: Started cri-containerd-dadef1ef4eabb8104ea1b6c42691c07f2b99de06bc93aaf4c7e7f58acfd0d5c4.scope - libcontainer container dadef1ef4eabb8104ea1b6c42691c07f2b99de06bc93aaf4c7e7f58acfd0d5c4. 
Sep 11 00:33:51.915134 systemd-resolved[1521]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:33:51.993336 containerd[1623]: time="2025-09-11T00:33:51.992938207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f4d847988-7r26x,Uid:b859708f-7aeb-4315-9af2-3e57c1933a45,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30\"" Sep 11 00:33:51.993791 containerd[1623]: time="2025-09-11T00:33:51.993761952Z" level=info msg="StartContainer for \"dadef1ef4eabb8104ea1b6c42691c07f2b99de06bc93aaf4c7e7f58acfd0d5c4\" returns successfully" Sep 11 00:33:52.644586 kubelet[2941]: I0911 00:33:52.642923 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rpfgw" podStartSLOduration=40.642910144 podStartE2EDuration="40.642910144s" podCreationTimestamp="2025-09-11 00:33:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:52.630101208 +0000 UTC m=+46.394275446" watchObservedRunningTime="2025-09-11 00:33:52.642910144 +0000 UTC m=+46.407084379" Sep 11 00:33:52.696952 kubelet[2941]: I0911 00:33:52.696909 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-qgw7r" podStartSLOduration=26.820561366 podStartE2EDuration="31.696897416s" podCreationTimestamp="2025-09-11 00:33:21 +0000 UTC" firstStartedPulling="2025-09-11 00:33:46.572356468 +0000 UTC m=+40.336530701" lastFinishedPulling="2025-09-11 00:33:51.448692519 +0000 UTC m=+45.212866751" observedRunningTime="2025-09-11 00:33:52.696562942 +0000 UTC m=+46.460737177" watchObservedRunningTime="2025-09-11 00:33:52.696897416 +0000 UTC m=+46.461071658" Sep 11 00:33:52.911448 systemd-networkd[1520]: cali1eb2b44cac4: Gained IPv6LL Sep 11 00:33:53.039454 
systemd-networkd[1520]: calif927f03d361: Gained IPv6LL Sep 11 00:33:53.616393 systemd-networkd[1520]: cali8ffc2bc6814: Gained IPv6LL Sep 11 00:33:53.666591 kubelet[2941]: I0911 00:33:53.666460 2941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:33:53.667142 kubelet[2941]: I0911 00:33:53.666889 2941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:33:56.242359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814689492.mount: Deactivated successfully. Sep 11 00:33:56.255278 containerd[1623]: time="2025-09-11T00:33:56.255222135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:33:56.258265 containerd[1623]: time="2025-09-11T00:33:56.258243609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.807231757s" Sep 11 00:33:56.258506 containerd[1623]: time="2025-09-11T00:33:56.258267699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:33:56.263606 containerd[1623]: time="2025-09-11T00:33:56.263546858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:33:56.265771 containerd[1623]: time="2025-09-11T00:33:56.265746162Z" level=info msg="CreateContainer within sandbox \"70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:33:56.270922 containerd[1623]: time="2025-09-11T00:33:56.270328328Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:56.270922 containerd[1623]: time="2025-09-11T00:33:56.270773172Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:56.271111 containerd[1623]: time="2025-09-11T00:33:56.271073489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:56.275320 containerd[1623]: time="2025-09-11T00:33:56.274550462Z" level=info msg="Container 558b7a6e92d542e4903d5db5ebf29d9edc0926183b394147789bea3d72f3afbd: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:56.283178 containerd[1623]: time="2025-09-11T00:33:56.283152660Z" level=info msg="CreateContainer within sandbox \"70911ff387fef03121776e64c28df0fce10e62b26615b295005fc726bb3421f3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"558b7a6e92d542e4903d5db5ebf29d9edc0926183b394147789bea3d72f3afbd\"" Sep 11 00:33:56.283611 containerd[1623]: time="2025-09-11T00:33:56.283586386Z" level=info msg="StartContainer for \"558b7a6e92d542e4903d5db5ebf29d9edc0926183b394147789bea3d72f3afbd\"" Sep 11 00:33:56.284674 containerd[1623]: time="2025-09-11T00:33:56.284654286Z" level=info msg="connecting to shim 558b7a6e92d542e4903d5db5ebf29d9edc0926183b394147789bea3d72f3afbd" address="unix:///run/containerd/s/98deacd55aba7e2dd4648055775007884c4d34e2ed3f5da792b66f6cfb453aa3" protocol=ttrpc version=3 Sep 11 00:33:56.307692 systemd[1]: Started cri-containerd-558b7a6e92d542e4903d5db5ebf29d9edc0926183b394147789bea3d72f3afbd.scope - libcontainer container 558b7a6e92d542e4903d5db5ebf29d9edc0926183b394147789bea3d72f3afbd. 
Sep 11 00:33:56.374405 containerd[1623]: time="2025-09-11T00:33:56.374362553Z" level=info msg="StartContainer for \"558b7a6e92d542e4903d5db5ebf29d9edc0926183b394147789bea3d72f3afbd\" returns successfully" Sep 11 00:33:56.718127 kubelet[2941]: I0911 00:33:56.717328 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b5fbfb79d-cz76d" podStartSLOduration=35.71731096 podStartE2EDuration="35.71731096s" podCreationTimestamp="2025-09-11 00:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:33:52.730779137 +0000 UTC m=+46.494953376" watchObservedRunningTime="2025-09-11 00:33:56.71731096 +0000 UTC m=+50.481485195" Sep 11 00:33:56.718127 kubelet[2941]: I0911 00:33:56.717402 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-86c864fb6-ztr4j" podStartSLOduration=2.6931989830000003 podStartE2EDuration="12.717398427s" podCreationTimestamp="2025-09-11 00:33:44 +0000 UTC" firstStartedPulling="2025-09-11 00:33:46.236861657 +0000 UTC m=+40.001035891" lastFinishedPulling="2025-09-11 00:33:56.261061105 +0000 UTC m=+50.025235335" observedRunningTime="2025-09-11 00:33:56.716324415 +0000 UTC m=+50.480498652" watchObservedRunningTime="2025-09-11 00:33:56.717398427 +0000 UTC m=+50.481572662" Sep 11 00:33:58.998209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108992035.mount: Deactivated successfully. 
Sep 11 00:33:59.777268 containerd[1623]: time="2025-09-11T00:33:59.777236937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:59.780303 containerd[1623]: time="2025-09-11T00:33:59.780156446Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:59.781856 containerd[1623]: time="2025-09-11T00:33:59.781842862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 11 00:33:59.782517 containerd[1623]: time="2025-09-11T00:33:59.782504634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:33:59.782930 containerd[1623]: time="2025-09-11T00:33:59.782768780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.519201603s" Sep 11 00:33:59.783167 containerd[1623]: time="2025-09-11T00:33:59.783085620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 11 00:33:59.795823 containerd[1623]: time="2025-09-11T00:33:59.795747471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:33:59.839830 containerd[1623]: time="2025-09-11T00:33:59.839331680Z" level=info msg="CreateContainer within sandbox \"b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 00:33:59.843979 containerd[1623]: time="2025-09-11T00:33:59.843955004Z" level=info msg="Container 332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:33:59.903558 containerd[1623]: time="2025-09-11T00:33:59.903533364Z" level=info msg="CreateContainer within sandbox \"b9c036d7eb6651688d225997af39a1228aa8dfee5e3cb8def761d82b582b48ba\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17\"" Sep 11 00:33:59.904694 containerd[1623]: time="2025-09-11T00:33:59.904499436Z" level=info msg="StartContainer for \"332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17\"" Sep 11 00:33:59.905316 containerd[1623]: time="2025-09-11T00:33:59.905281174Z" level=info msg="connecting to shim 332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17" address="unix:///run/containerd/s/111fc7056028fb8b3528a1408d8eaed71e0bde4074c19104cfdbb594b9347f8c" protocol=ttrpc version=3 Sep 11 00:33:59.943455 systemd[1]: Started cri-containerd-332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17.scope - libcontainer container 332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17. 
Sep 11 00:34:00.033042 containerd[1623]: time="2025-09-11T00:34:00.032973487Z" level=info msg="StartContainer for \"332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17\" returns successfully" Sep 11 00:34:00.995336 containerd[1623]: time="2025-09-11T00:34:00.995208571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17\" id:\"f7d87c0f9595a26250b0cfa24fd99de3bcbfae66cd49594bdea01af610b3f8f9\" pid:5160 exited_at:{seconds:1757550840 nanos:957743917}" Sep 11 00:34:01.062316 kubelet[2941]: I0911 00:34:01.062098 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-52kp4" podStartSLOduration=28.018740072 podStartE2EDuration="38.062052631s" podCreationTimestamp="2025-09-11 00:33:23 +0000 UTC" firstStartedPulling="2025-09-11 00:33:49.749810486 +0000 UTC m=+43.513984719" lastFinishedPulling="2025-09-11 00:33:59.793123045 +0000 UTC m=+53.557297278" observedRunningTime="2025-09-11 00:34:00.735044785 +0000 UTC m=+54.499219020" watchObservedRunningTime="2025-09-11 00:34:01.062052631 +0000 UTC m=+54.826226866" Sep 11 00:34:01.359959 containerd[1623]: time="2025-09-11T00:34:01.359364749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:01.367854 containerd[1623]: time="2025-09-11T00:34:01.367781495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 11 00:34:01.373462 containerd[1623]: time="2025-09-11T00:34:01.373432326Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:01.396623 containerd[1623]: time="2025-09-11T00:34:01.396547594Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:01.397071 containerd[1623]: time="2025-09-11T00:34:01.396987358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.601215535s" Sep 11 00:34:01.397071 containerd[1623]: time="2025-09-11T00:34:01.397007856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:34:01.413982 containerd[1623]: time="2025-09-11T00:34:01.398000356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:34:01.425069 containerd[1623]: time="2025-09-11T00:34:01.425040382Z" level=info msg="CreateContainer within sandbox \"17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:34:01.515100 containerd[1623]: time="2025-09-11T00:34:01.514492837Z" level=info msg="Container 470fbfccca7f705c25747cbd36f6c803b2e4e24b66ea4ae17a1ed7521c3ecd48: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:34:01.518018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001137585.mount: Deactivated successfully. 
Sep 11 00:34:01.571163 containerd[1623]: time="2025-09-11T00:34:01.571128514Z" level=info msg="CreateContainer within sandbox \"17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"470fbfccca7f705c25747cbd36f6c803b2e4e24b66ea4ae17a1ed7521c3ecd48\"" Sep 11 00:34:01.571651 containerd[1623]: time="2025-09-11T00:34:01.571623245Z" level=info msg="StartContainer for \"470fbfccca7f705c25747cbd36f6c803b2e4e24b66ea4ae17a1ed7521c3ecd48\"" Sep 11 00:34:01.584960 containerd[1623]: time="2025-09-11T00:34:01.572456311Z" level=info msg="connecting to shim 470fbfccca7f705c25747cbd36f6c803b2e4e24b66ea4ae17a1ed7521c3ecd48" address="unix:///run/containerd/s/49d7fb3e7523640f2e9b193c519998453de3d3b54403a78274cd3584e0162868" protocol=ttrpc version=3 Sep 11 00:34:01.594502 systemd[1]: Started cri-containerd-470fbfccca7f705c25747cbd36f6c803b2e4e24b66ea4ae17a1ed7521c3ecd48.scope - libcontainer container 470fbfccca7f705c25747cbd36f6c803b2e4e24b66ea4ae17a1ed7521c3ecd48. 
Sep 11 00:34:01.642598 containerd[1623]: time="2025-09-11T00:34:01.642532006Z" level=info msg="StartContainer for \"470fbfccca7f705c25747cbd36f6c803b2e4e24b66ea4ae17a1ed7521c3ecd48\" returns successfully" Sep 11 00:34:04.489022 containerd[1623]: time="2025-09-11T00:34:04.488904615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:04.519613 containerd[1623]: time="2025-09-11T00:34:04.519576321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 11 00:34:04.555160 containerd[1623]: time="2025-09-11T00:34:04.555124173Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:04.563354 containerd[1623]: time="2025-09-11T00:34:04.563317140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:04.563591 containerd[1623]: time="2025-09-11T00:34:04.563571389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.165552762s" Sep 11 00:34:04.563620 containerd[1623]: time="2025-09-11T00:34:04.563591462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 11 00:34:04.590597 containerd[1623]: time="2025-09-11T00:34:04.590566007Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 00:34:04.813393 containerd[1623]: time="2025-09-11T00:34:04.813221008Z" level=info msg="CreateContainer within sandbox \"6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 00:34:04.824832 containerd[1623]: time="2025-09-11T00:34:04.824637877Z" level=info msg="Container 2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:34:04.837080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726506348.mount: Deactivated successfully. Sep 11 00:34:04.901124 containerd[1623]: time="2025-09-11T00:34:04.901069312Z" level=info msg="CreateContainer within sandbox \"6c4c8b6138a9c445018f50b6ba90b2113e250b74bac9e6b92fef992701880d30\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8\"" Sep 11 00:34:04.901658 containerd[1623]: time="2025-09-11T00:34:04.901647004Z" level=info msg="StartContainer for \"2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8\"" Sep 11 00:34:04.903384 containerd[1623]: time="2025-09-11T00:34:04.903264879Z" level=info msg="connecting to shim 2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8" address="unix:///run/containerd/s/8ff4edbfd7035b80527c579f282808de5201d5688105128958c101369ecbc0df" protocol=ttrpc version=3 Sep 11 00:34:04.921462 systemd[1]: Started cri-containerd-2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8.scope - libcontainer container 2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8. 
Sep 11 00:34:05.137588 containerd[1623]: time="2025-09-11T00:34:05.137550838Z" level=info msg="StartContainer for \"2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8\" returns successfully" Sep 11 00:34:05.922289 containerd[1623]: time="2025-09-11T00:34:05.922254138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8\" id:\"8e24872f13cde1c64e3d14c5bdeeabe5db60b75ac4b4645a403159ce185ec6ea\" pid:5267 exited_at:{seconds:1757550845 nanos:921991932}" Sep 11 00:34:06.095912 kubelet[2941]: I0911 00:34:06.093980 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f4d847988-7r26x" podStartSLOduration=29.526570068 podStartE2EDuration="42.078218738s" podCreationTimestamp="2025-09-11 00:33:24 +0000 UTC" firstStartedPulling="2025-09-11 00:33:52.016413123 +0000 UTC m=+45.780587354" lastFinishedPulling="2025-09-11 00:34:04.568061795 +0000 UTC m=+58.332236024" observedRunningTime="2025-09-11 00:34:06.063955195 +0000 UTC m=+59.828129436" watchObservedRunningTime="2025-09-11 00:34:06.078218738 +0000 UTC m=+59.842392980" Sep 11 00:34:06.878983 containerd[1623]: time="2025-09-11T00:34:06.878942285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:06.889453 containerd[1623]: time="2025-09-11T00:34:06.889305354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 11 00:34:06.910585 containerd[1623]: time="2025-09-11T00:34:06.910538456Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:06.933068 containerd[1623]: time="2025-09-11T00:34:06.933009772Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:34:06.933514 containerd[1623]: time="2025-09-11T00:34:06.933247497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.342652899s" Sep 11 00:34:06.933514 containerd[1623]: time="2025-09-11T00:34:06.933266314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 11 00:34:07.397290 containerd[1623]: time="2025-09-11T00:34:07.395897030Z" level=info msg="CreateContainer within sandbox \"17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 00:34:07.404249 containerd[1623]: time="2025-09-11T00:34:07.404215623Z" level=info msg="Container e98fd8cac869703679d5468856d6b6a2061ce39f2990cbbe79dc848a9ea79c0b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:34:07.434926 containerd[1623]: time="2025-09-11T00:34:07.434896559Z" level=info msg="CreateContainer within sandbox \"17cf1342d6aa111d923afc862ace072fd8e59f9d4f56da48dfe3f856b594381b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e98fd8cac869703679d5468856d6b6a2061ce39f2990cbbe79dc848a9ea79c0b\"" Sep 11 00:34:07.435442 containerd[1623]: time="2025-09-11T00:34:07.435427999Z" level=info msg="StartContainer for \"e98fd8cac869703679d5468856d6b6a2061ce39f2990cbbe79dc848a9ea79c0b\"" Sep 11 00:34:07.438036 
containerd[1623]: time="2025-09-11T00:34:07.437995424Z" level=info msg="connecting to shim e98fd8cac869703679d5468856d6b6a2061ce39f2990cbbe79dc848a9ea79c0b" address="unix:///run/containerd/s/49d7fb3e7523640f2e9b193c519998453de3d3b54403a78274cd3584e0162868" protocol=ttrpc version=3
Sep 11 00:34:07.460628 systemd[1]: Started cri-containerd-e98fd8cac869703679d5468856d6b6a2061ce39f2990cbbe79dc848a9ea79c0b.scope - libcontainer container e98fd8cac869703679d5468856d6b6a2061ce39f2990cbbe79dc848a9ea79c0b.
Sep 11 00:34:07.504181 containerd[1623]: time="2025-09-11T00:34:07.504158104Z" level=info msg="StartContainer for \"e98fd8cac869703679d5468856d6b6a2061ce39f2990cbbe79dc848a9ea79c0b\" returns successfully"
Sep 11 00:34:08.968121 kubelet[2941]: I0911 00:34:08.966881 2941 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:34:08.968121 kubelet[2941]: I0911 00:34:08.968098 2941 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:34:09.141378 kubelet[2941]: I0911 00:34:09.141345 2941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:34:09.577139 kubelet[2941]: I0911 00:34:09.577097 2941 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5zknv" podStartSLOduration=28.334722273 podStartE2EDuration="45.577085339s" podCreationTimestamp="2025-09-11 00:33:24 +0000 UTC" firstStartedPulling="2025-09-11 00:33:50.10002689 +0000 UTC m=+43.864201121" lastFinishedPulling="2025-09-11 00:34:07.342389953 +0000 UTC m=+61.106564187" observedRunningTime="2025-09-11 00:34:08.493709719 +0000 UTC m=+62.257883960" watchObservedRunningTime="2025-09-11 00:34:09.577085339 +0000 UTC m=+63.341259574"
Sep 11 00:34:17.025754 containerd[1623]: time="2025-09-11T00:34:17.025009013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\" id:\"13004b6b6a1193d02a05daef75e8ca2070fb4c8aa972fa529495a308b3593970\" pid:5344 exited_at:{seconds:1757550857 nanos:5013383}"
Sep 11 00:34:25.206378 containerd[1623]: time="2025-09-11T00:34:25.206337822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17\" id:\"183c36c686ef4d45782e4e2d80b99b0220d0c2188a32b77dddb094e3c6544c46\" pid:5380 exited_at:{seconds:1757550865 nanos:175388759}"
Sep 11 00:34:28.280324 kubelet[2941]: I0911 00:34:28.236450 2941 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 00:34:31.250259 containerd[1623]: time="2025-09-11T00:34:31.250232855Z" level=info msg="TaskExit event in podsandbox handler container_id:\"332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17\" id:\"095e26b08aeeba9b6125f1afe46a752ca3337bac8874262ae4cd3d158e1b2644\" pid:5412 exited_at:{seconds:1757550871 nanos:250009735}"
Sep 11 00:34:31.575000 systemd[1]: Started sshd@7-139.178.70.106:22-139.178.89.65:43756.service - OpenSSH per-connection server daemon (139.178.89.65:43756).
Sep 11 00:34:31.755542 sshd[5448]: Accepted publickey for core from 139.178.89.65 port 43756 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:34:31.758881 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:34:31.772099 systemd-logind[1603]: New session 10 of user core.
Sep 11 00:34:31.776438 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 00:34:32.429609 sshd[5453]: Connection closed by 139.178.89.65 port 43756
Sep 11 00:34:32.430239 sshd-session[5448]: pam_unix(sshd:session): session closed for user core
Sep 11 00:34:32.438811 systemd[1]: sshd@7-139.178.70.106:22-139.178.89.65:43756.service: Deactivated successfully.
Sep 11 00:34:32.443686 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 00:34:32.446066 systemd-logind[1603]: Session 10 logged out. Waiting for processes to exit.
Sep 11 00:34:32.449018 systemd-logind[1603]: Removed session 10.
Sep 11 00:34:35.978131 containerd[1623]: time="2025-09-11T00:34:35.978100311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8\" id:\"5025434725aa18dbec518fc0c2172d51a25cbf40ae59e33d5903ee6c0ea8b3ab\" pid:5481 exited_at:{seconds:1757550875 nanos:977812940}"
Sep 11 00:34:37.535408 systemd[1]: Started sshd@8-139.178.70.106:22-139.178.89.65:43768.service - OpenSSH per-connection server daemon (139.178.89.65:43768).
Sep 11 00:34:38.294007 sshd[5490]: Accepted publickey for core from 139.178.89.65 port 43768 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:34:38.298049 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:34:38.301991 systemd-logind[1603]: New session 11 of user core.
Sep 11 00:34:38.307271 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:34:38.477037 containerd[1623]: time="2025-09-11T00:34:38.477011598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8\" id:\"78849297f091abeb598b899c7eede4de0dba972de5eb2c34d55a3deba9d90fac\" pid:5509 exited_at:{seconds:1757550878 nanos:476238681}"
Sep 11 00:34:38.851003 sshd[5492]: Connection closed by 139.178.89.65 port 43768
Sep 11 00:34:38.852899 sshd-session[5490]: pam_unix(sshd:session): session closed for user core
Sep 11 00:34:38.856743 systemd[1]: sshd@8-139.178.70.106:22-139.178.89.65:43768.service: Deactivated successfully.
Sep 11 00:34:38.858749 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:34:38.859862 systemd-logind[1603]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:34:38.860928 systemd-logind[1603]: Removed session 11.
Sep 11 00:34:43.864689 systemd[1]: Started sshd@9-139.178.70.106:22-139.178.89.65:59460.service - OpenSSH per-connection server daemon (139.178.89.65:59460).
Sep 11 00:34:44.051399 sshd[5529]: Accepted publickey for core from 139.178.89.65 port 59460 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:34:44.052348 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:34:44.056065 systemd-logind[1603]: New session 12 of user core.
Sep 11 00:34:44.061432 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:34:44.368697 sshd[5531]: Connection closed by 139.178.89.65 port 59460
Sep 11 00:34:44.369134 sshd-session[5529]: pam_unix(sshd:session): session closed for user core
Sep 11 00:34:44.375763 systemd[1]: sshd@9-139.178.70.106:22-139.178.89.65:59460.service: Deactivated successfully.
Sep 11 00:34:44.376954 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:34:44.377736 systemd-logind[1603]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:34:44.379133 systemd[1]: Started sshd@10-139.178.70.106:22-139.178.89.65:59472.service - OpenSSH per-connection server daemon (139.178.89.65:59472).
Sep 11 00:34:44.379918 systemd-logind[1603]: Removed session 12.
Sep 11 00:34:44.518560 sshd[5545]: Accepted publickey for core from 139.178.89.65 port 59472 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:34:44.521421 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:34:44.525795 systemd-logind[1603]: New session 13 of user core.
Sep 11 00:34:44.530420 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:34:44.936624 sshd[5548]: Connection closed by 139.178.89.65 port 59472
Sep 11 00:34:44.939400 sshd-session[5545]: pam_unix(sshd:session): session closed for user core
Sep 11 00:34:44.949354 systemd[1]: Started sshd@11-139.178.70.106:22-139.178.89.65:59486.service - OpenSSH per-connection server daemon (139.178.89.65:59486).
Sep 11 00:34:44.949863 systemd[1]: sshd@10-139.178.70.106:22-139.178.89.65:59472.service: Deactivated successfully.
Sep 11 00:34:44.951174 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:34:44.954137 systemd-logind[1603]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:34:44.958850 systemd-logind[1603]: Removed session 13.
Sep 11 00:34:45.034072 sshd[5575]: Accepted publickey for core from 139.178.89.65 port 59486 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:34:45.035935 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:34:45.042390 systemd-logind[1603]: New session 14 of user core.
Sep 11 00:34:45.047450 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:34:45.276986 sshd[5584]: Connection closed by 139.178.89.65 port 59486
Sep 11 00:34:45.299998 sshd-session[5575]: pam_unix(sshd:session): session closed for user core
Sep 11 00:34:45.308612 systemd[1]: sshd@11-139.178.70.106:22-139.178.89.65:59486.service: Deactivated successfully.
Sep 11 00:34:45.310971 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:34:45.313007 systemd-logind[1603]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:34:45.314873 systemd-logind[1603]: Removed session 14.
Sep 11 00:34:45.741075 containerd[1623]: time="2025-09-11T00:34:45.740992138Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\" id:\"3cf44701b459537b2d8bce3989797d50d3bfd3e11558e6388a5aa7e274697d33\" pid:5563 exited_at:{seconds:1757550885 nanos:734275980}"
Sep 11 00:34:50.300620 systemd[1]: Started sshd@12-139.178.70.106:22-139.178.89.65:43016.service - OpenSSH per-connection server daemon (139.178.89.65:43016).
Sep 11 00:34:50.448981 sshd[5600]: Accepted publickey for core from 139.178.89.65 port 43016 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:34:50.451326 sshd-session[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:34:50.458321 systemd-logind[1603]: New session 15 of user core.
Sep 11 00:34:50.466551 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:34:51.522205 sshd[5602]: Connection closed by 139.178.89.65 port 43016
Sep 11 00:34:51.522782 sshd-session[5600]: pam_unix(sshd:session): session closed for user core
Sep 11 00:34:51.525036 systemd[1]: sshd@12-139.178.70.106:22-139.178.89.65:43016.service: Deactivated successfully.
Sep 11 00:34:51.527526 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:34:51.530095 systemd-logind[1603]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:34:51.531552 systemd-logind[1603]: Removed session 15.
Sep 11 00:34:56.536471 systemd[1]: Started sshd@13-139.178.70.106:22-139.178.89.65:43032.service - OpenSSH per-connection server daemon (139.178.89.65:43032).
Sep 11 00:34:56.603571 sshd[5613]: Accepted publickey for core from 139.178.89.65 port 43032 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:34:56.605581 sshd-session[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:34:56.611335 systemd-logind[1603]: New session 16 of user core.
Sep 11 00:34:56.618445 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:34:57.139133 sshd[5615]: Connection closed by 139.178.89.65 port 43032
Sep 11 00:34:57.139692 sshd-session[5613]: pam_unix(sshd:session): session closed for user core
Sep 11 00:34:57.142809 systemd-logind[1603]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:34:57.143221 systemd[1]: sshd@13-139.178.70.106:22-139.178.89.65:43032.service: Deactivated successfully.
Sep 11 00:34:57.144486 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:34:57.145364 systemd-logind[1603]: Removed session 16.
Sep 11 00:35:01.119942 containerd[1623]: time="2025-09-11T00:35:01.119911740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"332eefc39136677b41aed8cb2c8a57dc7b545ba745daa3900425a5c10fd32d17\" id:\"b130d24ea7e4a21743e14e20692b94ee34bf0b3e3bceda4e25271b6a2079d1c6\" pid:5639 exited_at:{seconds:1757550901 nanos:119690596}"
Sep 11 00:35:02.149517 systemd[1]: Started sshd@14-139.178.70.106:22-139.178.89.65:37978.service - OpenSSH per-connection server daemon (139.178.89.65:37978).
Sep 11 00:35:02.256811 sshd[5650]: Accepted publickey for core from 139.178.89.65 port 37978 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:35:02.258593 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:35:02.261602 systemd-logind[1603]: New session 17 of user core.
Sep 11 00:35:02.271383 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:35:02.637331 sshd[5652]: Connection closed by 139.178.89.65 port 37978
Sep 11 00:35:02.637871 sshd-session[5650]: pam_unix(sshd:session): session closed for user core
Sep 11 00:35:02.644803 systemd[1]: sshd@14-139.178.70.106:22-139.178.89.65:37978.service: Deactivated successfully.
Sep 11 00:35:02.645888 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:35:02.646702 systemd-logind[1603]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:35:02.648805 systemd[1]: Started sshd@15-139.178.70.106:22-139.178.89.65:37980.service - OpenSSH per-connection server daemon (139.178.89.65:37980).
Sep 11 00:35:02.651394 systemd-logind[1603]: Removed session 17.
Sep 11 00:35:02.703751 sshd[5663]: Accepted publickey for core from 139.178.89.65 port 37980 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:35:02.705004 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:35:02.708039 systemd-logind[1603]: New session 18 of user core.
Sep 11 00:35:02.715390 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:35:03.578418 sshd[5665]: Connection closed by 139.178.89.65 port 37980
Sep 11 00:35:03.578756 sshd-session[5663]: pam_unix(sshd:session): session closed for user core
Sep 11 00:35:03.588414 systemd[1]: sshd@15-139.178.70.106:22-139.178.89.65:37980.service: Deactivated successfully.
Sep 11 00:35:03.591768 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:35:03.592733 systemd-logind[1603]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:35:03.595265 systemd[1]: Started sshd@16-139.178.70.106:22-139.178.89.65:37992.service - OpenSSH per-connection server daemon (139.178.89.65:37992).
Sep 11 00:35:03.595907 systemd-logind[1603]: Removed session 18.
Sep 11 00:35:03.631987 sshd[5675]: Accepted publickey for core from 139.178.89.65 port 37992 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:35:03.632949 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:35:03.636734 systemd-logind[1603]: New session 19 of user core.
Sep 11 00:35:03.641692 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:35:04.362151 sshd[5677]: Connection closed by 139.178.89.65 port 37992
Sep 11 00:35:04.364479 sshd-session[5675]: pam_unix(sshd:session): session closed for user core
Sep 11 00:35:04.372616 systemd[1]: Started sshd@17-139.178.70.106:22-139.178.89.65:38004.service - OpenSSH per-connection server daemon (139.178.89.65:38004).
Sep 11 00:35:04.385669 systemd[1]: sshd@16-139.178.70.106:22-139.178.89.65:37992.service: Deactivated successfully.
Sep 11 00:35:04.386927 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:35:04.391244 systemd-logind[1603]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:35:04.392847 systemd-logind[1603]: Removed session 19.
Sep 11 00:35:04.565342 sshd[5690]: Accepted publickey for core from 139.178.89.65 port 38004 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:35:04.571913 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:35:04.580631 systemd-logind[1603]: New session 20 of user core.
Sep 11 00:35:04.584619 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:35:05.788935 sshd[5699]: Connection closed by 139.178.89.65 port 38004
Sep 11 00:35:05.803117 systemd[1]: Started sshd@18-139.178.70.106:22-139.178.89.65:38020.service - OpenSSH per-connection server daemon (139.178.89.65:38020).
Sep 11 00:35:05.805793 sshd-session[5690]: pam_unix(sshd:session): session closed for user core
Sep 11 00:35:05.827723 systemd-logind[1603]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:35:05.828580 systemd[1]: sshd@17-139.178.70.106:22-139.178.89.65:38004.service: Deactivated successfully.
Sep 11 00:35:05.832028 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:35:05.835655 systemd-logind[1603]: Removed session 20.
Sep 11 00:35:05.995798 sshd[5706]: Accepted publickey for core from 139.178.89.65 port 38020 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:35:05.997243 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:35:06.003971 systemd-logind[1603]: New session 21 of user core.
Sep 11 00:35:06.008410 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:35:07.903355 containerd[1623]: time="2025-09-11T00:35:07.865115696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ff22f9f12c21629192e7ecfd6062013c8d335f5743ba3f24f75fd51349745a8\" id:\"3cd693fa417fbae1554a580c727017af97f0643128147845c7e327e156a985a8\" pid:5741 exited_at:{seconds:1757550907 nanos:646855426}"
Sep 11 00:35:08.027597 sshd[5711]: Connection closed by 139.178.89.65 port 38020
Sep 11 00:35:08.028414 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Sep 11 00:35:08.047464 systemd[1]: sshd@18-139.178.70.106:22-139.178.89.65:38020.service: Deactivated successfully.
Sep 11 00:35:08.051499 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:35:08.056552 systemd-logind[1603]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:35:08.057800 systemd-logind[1603]: Removed session 21.
Sep 11 00:35:13.037472 systemd[1]: Started sshd@19-139.178.70.106:22-139.178.89.65:37682.service - OpenSSH per-connection server daemon (139.178.89.65:37682).
Sep 11 00:35:13.142385 sshd[5758]: Accepted publickey for core from 139.178.89.65 port 37682 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:35:13.144085 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:35:13.147907 systemd-logind[1603]: New session 22 of user core.
Sep 11 00:35:13.156409 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:35:15.273631 sshd[5760]: Connection closed by 139.178.89.65 port 37682
Sep 11 00:35:15.274113 sshd-session[5758]: pam_unix(sshd:session): session closed for user core
Sep 11 00:35:15.277068 systemd[1]: sshd@19-139.178.70.106:22-139.178.89.65:37682.service: Deactivated successfully.
Sep 11 00:35:15.277115 systemd-logind[1603]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:35:15.278744 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:35:15.280603 systemd-logind[1603]: Removed session 22.
Sep 11 00:35:16.081135 containerd[1623]: time="2025-09-11T00:35:16.081106911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b1dd5f6f0ed9d56248c963ed83b51f2e94d55a13cdf1b0cb5eba25736d59121\" id:\"4bb4b13c4f87456a5e1af97f1b8aad4eb11e79a30a50e508f641035d5ddae7ea\" pid:5785 exited_at:{seconds:1757550916 nanos:77468951}"
Sep 11 00:35:20.297827 systemd[1]: Started sshd@20-139.178.70.106:22-139.178.89.65:48574.service - OpenSSH per-connection server daemon (139.178.89.65:48574).
Sep 11 00:35:20.403780 sshd[5815]: Accepted publickey for core from 139.178.89.65 port 48574 ssh2: RSA SHA256:408a9NHYR1n+3gHIFEtjtiM/BUVpJAMRTZCdef9pGpE
Sep 11 00:35:20.408079 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:35:20.412673 systemd-logind[1603]: New session 23 of user core.
Sep 11 00:35:20.416407 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:35:21.291559 sshd[5817]: Connection closed by 139.178.89.65 port 48574
Sep 11 00:35:21.291948 sshd-session[5815]: pam_unix(sshd:session): session closed for user core
Sep 11 00:35:21.294098 systemd-logind[1603]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:35:21.294243 systemd[1]: sshd@20-139.178.70.106:22-139.178.89.65:48574.service: Deactivated successfully.
Sep 11 00:35:21.295699 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:35:21.299951 systemd-logind[1603]: Removed session 23.