Jun 20 19:21:00.717895 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:21:00.717911 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:21:00.717918 kernel: Disabled fast string operations Jun 20 19:21:00.717922 kernel: BIOS-provided physical RAM map: Jun 20 19:21:00.717926 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jun 20 19:21:00.717930 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jun 20 19:21:00.717936 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jun 20 19:21:00.717940 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jun 20 19:21:00.717945 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jun 20 19:21:00.717949 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jun 20 19:21:00.717953 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jun 20 19:21:00.717957 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jun 20 19:21:00.717962 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jun 20 19:21:00.717966 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jun 20 19:21:00.717972 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jun 20 19:21:00.717977 kernel: NX (Execute Disable) protection: active Jun 20 19:21:00.717982 kernel: APIC: Static calls initialized Jun 20 19:21:00.717987 kernel: 
SMBIOS 2.7 present. Jun 20 19:21:00.717992 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jun 20 19:21:00.717997 kernel: DMI: Memory slots populated: 1/128 Jun 20 19:21:00.718002 kernel: vmware: hypercall mode: 0x00 Jun 20 19:21:00.718007 kernel: Hypervisor detected: VMware Jun 20 19:21:00.718012 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jun 20 19:21:00.718017 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jun 20 19:21:00.718022 kernel: vmware: using clock offset of 3630903931 ns Jun 20 19:21:00.718026 kernel: tsc: Detected 3408.000 MHz processor Jun 20 19:21:00.718032 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:21:00.718037 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:21:00.718042 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jun 20 19:21:00.718047 kernel: total RAM covered: 3072M Jun 20 19:21:00.718053 kernel: Found optimal setting for mtrr clean up Jun 20 19:21:00.718058 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jun 20 19:21:00.718063 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jun 20 19:21:00.718068 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:21:00.718073 kernel: Using GB pages for direct mapping Jun 20 19:21:00.718078 kernel: ACPI: Early table checksum verification disabled Jun 20 19:21:00.718083 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jun 20 19:21:00.718088 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jun 20 19:21:00.718093 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jun 20 19:21:00.718099 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jun 20 19:21:00.718106 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 20 19:21:00.718111 kernel: ACPI: FACS 
0x000000007FEFFFC0 000040 Jun 20 19:21:00.718116 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jun 20 19:21:00.718121 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jun 20 19:21:00.718128 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jun 20 19:21:00.718133 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jun 20 19:21:00.718138 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jun 20 19:21:00.718143 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jun 20 19:21:00.718149 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jun 20 19:21:00.718154 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jun 20 19:21:00.718159 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 20 19:21:00.718164 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 20 19:21:00.718169 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jun 20 19:21:00.718174 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jun 20 19:21:00.718180 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jun 20 19:21:00.718186 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jun 20 19:21:00.718191 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jun 20 19:21:00.718196 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jun 20 19:21:00.718201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 20 19:21:00.718206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 20 19:21:00.718211 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jun 20 19:21:00.718216 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 
0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Jun 20 19:21:00.718221 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Jun 20 19:21:00.718227 kernel: Zone ranges: Jun 20 19:21:00.718233 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:21:00.718238 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jun 20 19:21:00.718243 kernel: Normal empty Jun 20 19:21:00.718248 kernel: Device empty Jun 20 19:21:00.718253 kernel: Movable zone start for each node Jun 20 19:21:00.718258 kernel: Early memory node ranges Jun 20 19:21:00.718263 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jun 20 19:21:00.718268 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jun 20 19:21:00.718275 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jun 20 19:21:00.718289 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jun 20 19:21:00.718294 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:21:00.718299 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jun 20 19:21:00.718304 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jun 20 19:21:00.718310 kernel: ACPI: PM-Timer IO Port: 0x1008 Jun 20 19:21:00.718315 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jun 20 19:21:00.718320 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jun 20 19:21:00.718325 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jun 20 19:21:00.718332 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jun 20 19:21:00.718337 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jun 20 19:21:00.718342 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jun 20 19:21:00.718347 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jun 20 19:21:00.718352 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jun 20 19:21:00.718357 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jun 20 19:21:00.718362 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x09] high edge lint[0x1]) Jun 20 19:21:00.718367 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jun 20 19:21:00.718372 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jun 20 19:21:00.718377 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jun 20 19:21:00.718383 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jun 20 19:21:00.718388 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jun 20 19:21:00.718393 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jun 20 19:21:00.718398 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jun 20 19:21:00.718404 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jun 20 19:21:00.718409 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jun 20 19:21:00.718414 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jun 20 19:21:00.718420 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jun 20 19:21:00.718459 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jun 20 19:21:00.718468 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jun 20 19:21:00.718473 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jun 20 19:21:00.718478 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jun 20 19:21:00.718483 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jun 20 19:21:00.718489 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jun 20 19:21:00.718494 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jun 20 19:21:00.718499 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jun 20 19:21:00.718504 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jun 20 19:21:00.718509 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jun 20 19:21:00.718514 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jun 20 19:21:00.718522 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jun 20 19:21:00.718527 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x21] high edge lint[0x1]) Jun 20 19:21:00.718532 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jun 20 19:21:00.718682 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jun 20 19:21:00.718688 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jun 20 19:21:00.718694 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jun 20 19:21:00.718699 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jun 20 19:21:00.718708 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jun 20 19:21:00.718719 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jun 20 19:21:00.718725 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jun 20 19:21:00.718733 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jun 20 19:21:00.718739 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jun 20 19:21:00.718746 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jun 20 19:21:00.718751 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jun 20 19:21:00.718756 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jun 20 19:21:00.718765 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jun 20 19:21:00.718771 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jun 20 19:21:00.718778 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jun 20 19:21:00.718783 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jun 20 19:21:00.718789 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jun 20 19:21:00.718799 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jun 20 19:21:00.718805 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jun 20 19:21:00.718811 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jun 20 19:21:00.718817 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jun 20 19:21:00.718825 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jun 20 19:21:00.718831 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x39] high edge lint[0x1]) Jun 20 19:21:00.718839 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jun 20 19:21:00.718844 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jun 20 19:21:00.718850 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jun 20 19:21:00.718855 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jun 20 19:21:00.718861 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jun 20 19:21:00.718866 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jun 20 19:21:00.718872 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jun 20 19:21:00.718877 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jun 20 19:21:00.718882 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jun 20 19:21:00.718891 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jun 20 19:21:00.718898 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jun 20 19:21:00.718904 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jun 20 19:21:00.718909 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jun 20 19:21:00.718915 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jun 20 19:21:00.718920 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jun 20 19:21:00.718925 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jun 20 19:21:00.718931 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jun 20 19:21:00.718936 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jun 20 19:21:00.718942 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jun 20 19:21:00.718949 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jun 20 19:21:00.718954 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jun 20 19:21:00.718959 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jun 20 19:21:00.718965 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jun 20 19:21:00.718970 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x51] high edge lint[0x1]) Jun 20 19:21:00.718976 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jun 20 19:21:00.718981 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jun 20 19:21:00.718987 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jun 20 19:21:00.718992 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jun 20 19:21:00.719001 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jun 20 19:21:00.719008 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jun 20 19:21:00.719013 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jun 20 19:21:00.719019 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jun 20 19:21:00.719027 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jun 20 19:21:00.719033 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jun 20 19:21:00.719039 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jun 20 19:21:00.719044 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jun 20 19:21:00.719049 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jun 20 19:21:00.719055 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jun 20 19:21:00.719065 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jun 20 19:21:00.719070 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jun 20 19:21:00.719076 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jun 20 19:21:00.719081 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jun 20 19:21:00.719089 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jun 20 19:21:00.719111 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jun 20 19:21:00.719118 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jun 20 19:21:00.719248 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jun 20 19:21:00.719255 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jun 20 19:21:00.719261 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x69] high edge lint[0x1]) Jun 20 19:21:00.719268 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jun 20 19:21:00.719274 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jun 20 19:21:00.721298 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jun 20 19:21:00.721307 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jun 20 19:21:00.721313 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jun 20 19:21:00.721318 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jun 20 19:21:00.721324 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jun 20 19:21:00.721329 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jun 20 19:21:00.721335 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jun 20 19:21:00.721343 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jun 20 19:21:00.721349 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jun 20 19:21:00.721354 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jun 20 19:21:00.721359 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jun 20 19:21:00.721365 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jun 20 19:21:00.721371 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jun 20 19:21:00.721376 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jun 20 19:21:00.721382 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jun 20 19:21:00.721388 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jun 20 19:21:00.721395 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jun 20 19:21:00.721400 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jun 20 19:21:00.721406 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jun 20 19:21:00.721411 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jun 20 19:21:00.721417 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:21:00.721423 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jun 20 19:21:00.721429 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:21:00.721434 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jun 20 19:21:00.721440 kernel: TSC deadline timer available Jun 20 19:21:00.721446 kernel: CPU topo: Max. logical packages: 128 Jun 20 19:21:00.721452 kernel: CPU topo: Max. logical dies: 128 Jun 20 19:21:00.721458 kernel: CPU topo: Max. dies per package: 1 Jun 20 19:21:00.721463 kernel: CPU topo: Max. threads per core: 1 Jun 20 19:21:00.721469 kernel: CPU topo: Num. cores per package: 1 Jun 20 19:21:00.721474 kernel: CPU topo: Num. threads per package: 1 Jun 20 19:21:00.721480 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Jun 20 19:21:00.721486 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jun 20 19:21:00.721491 kernel: Booting paravirtualized kernel on VMware hypervisor Jun 20 19:21:00.721497 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:21:00.721504 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jun 20 19:21:00.721509 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jun 20 19:21:00.721515 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jun 20 19:21:00.721521 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jun 20 19:21:00.721526 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jun 20 19:21:00.721532 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jun 20 19:21:00.721537 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jun 20 19:21:00.721543 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jun 20 19:21:00.721548 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jun 20 19:21:00.721555 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jun 20 19:21:00.721560 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jun 20 
19:21:00.721566 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jun 20 19:21:00.721571 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jun 20 19:21:00.721577 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jun 20 19:21:00.721582 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jun 20 19:21:00.721588 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jun 20 19:21:00.721593 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jun 20 19:21:00.721599 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jun 20 19:21:00.721605 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jun 20 19:21:00.721612 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:21:00.721618 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:21:00.721623 kernel: random: crng init done Jun 20 19:21:00.721629 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jun 20 19:21:00.721634 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jun 20 19:21:00.721640 kernel: printk: log_buf_len min size: 262144 bytes Jun 20 19:21:00.721645 kernel: printk: log_buf_len: 1048576 bytes Jun 20 19:21:00.721652 kernel: printk: early log buf free: 245576(93%) Jun 20 19:21:00.721657 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:21:00.721663 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:21:00.721669 kernel: Fallback order for Node 0: 0 Jun 20 19:21:00.721674 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Jun 20 19:21:00.721680 kernel: Policy zone: DMA32 Jun 20 19:21:00.721685 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:21:00.721691 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jun 20 19:21:00.721697 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:21:00.721703 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:21:00.721709 kernel: Dynamic Preempt: voluntary Jun 20 19:21:00.721714 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:21:00.721721 kernel: rcu: RCU event tracing is enabled. Jun 20 19:21:00.721726 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jun 20 19:21:00.721732 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:21:00.721737 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:21:00.721743 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:21:00.721748 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:21:00.721754 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jun 20 19:21:00.721761 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:21:00.721766 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:21:00.721772 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:21:00.721778 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jun 20 19:21:00.721783 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. 
Jun 20 19:21:00.721789 kernel: Console: colour VGA+ 80x25 Jun 20 19:21:00.721794 kernel: printk: legacy console [tty0] enabled Jun 20 19:21:00.721800 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:21:00.721805 kernel: ACPI: Core revision 20240827 Jun 20 19:21:00.721812 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jun 20 19:21:00.721818 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:21:00.721824 kernel: x2apic enabled Jun 20 19:21:00.721830 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:21:00.721835 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 19:21:00.721841 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 20 19:21:00.721847 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Jun 20 19:21:00.721853 kernel: Disabled fast string operations Jun 20 19:21:00.721858 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 20 19:21:00.721865 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jun 20 19:21:00.721871 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:21:00.721877 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jun 20 19:21:00.721882 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jun 20 19:21:00.721888 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jun 20 19:21:00.721893 kernel: RETBleed: Mitigation: Enhanced IBRS Jun 20 19:21:00.721899 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 20 19:21:00.721905 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 19:21:00.721911 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 19:21:00.721917 kernel: SRBDS: Unknown: Dependent on hypervisor 
status Jun 20 19:21:00.721923 kernel: GDS: Unknown: Dependent on hypervisor status Jun 20 19:21:00.721928 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:21:00.721934 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:21:00.721940 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:21:00.721945 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:21:00.721951 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:21:00.721957 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 20 19:21:00.721964 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:21:00.721969 kernel: pid_max: default: 131072 minimum: 1024 Jun 20 19:21:00.721975 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:21:00.721980 kernel: landlock: Up and running. Jun 20 19:21:00.721986 kernel: SELinux: Initializing. Jun 20 19:21:00.721991 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:21:00.721997 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:21:00.722003 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jun 20 19:21:00.722008 kernel: Performance Events: Skylake events, core PMU driver. 
Jun 20 19:21:00.722015 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jun 20 19:21:00.722021 kernel: core: CPUID marked event: 'instructions' unavailable Jun 20 19:21:00.722026 kernel: core: CPUID marked event: 'bus cycles' unavailable Jun 20 19:21:00.722032 kernel: core: CPUID marked event: 'cache references' unavailable Jun 20 19:21:00.722037 kernel: core: CPUID marked event: 'cache misses' unavailable Jun 20 19:21:00.722043 kernel: core: CPUID marked event: 'branch instructions' unavailable Jun 20 19:21:00.722048 kernel: core: CPUID marked event: 'branch misses' unavailable Jun 20 19:21:00.722053 kernel: ... version: 1 Jun 20 19:21:00.722059 kernel: ... bit width: 48 Jun 20 19:21:00.722066 kernel: ... generic registers: 4 Jun 20 19:21:00.722071 kernel: ... value mask: 0000ffffffffffff Jun 20 19:21:00.722077 kernel: ... max period: 000000007fffffff Jun 20 19:21:00.722082 kernel: ... fixed-purpose events: 0 Jun 20 19:21:00.722088 kernel: ... event mask: 000000000000000f Jun 20 19:21:00.722093 kernel: signal: max sigframe size: 1776 Jun 20 19:21:00.722099 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:21:00.722105 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:21:00.722110 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jun 20 19:21:00.722117 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:21:00.722122 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:21:00.722128 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:21:00.722134 kernel: .... 
node #0, CPUs: #1
Jun 20 19:21:00.722139 kernel: Disabled fast string operations
Jun 20 19:21:00.722144 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:21:00.722150 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jun 20 19:21:00.722156 kernel: Memory: 1924220K/2096628K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 161024K reserved, 0K cma-reserved)
Jun 20 19:21:00.722162 kernel: devtmpfs: initialized
Jun 20 19:21:00.722169 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:21:00.722175 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jun 20 19:21:00.722181 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:21:00.722186 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jun 20 19:21:00.722192 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:21:00.722197 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:21:00.722203 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:21:00.722209 kernel: audit: type=2000 audit(1750447257.277:1): state=initialized audit_enabled=0 res=1
Jun 20 19:21:00.722214 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:21:00.722221 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:21:00.722226 kernel: cpuidle: using governor menu
Jun 20 19:21:00.722232 kernel: Simple Boot Flag at 0x36 set to 0x80
Jun 20 19:21:00.722238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:21:00.722243 kernel: dca service started, version 1.12.1
Jun 20 19:21:00.722249 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Jun 20 19:21:00.722263 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:21:00.722270 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:21:00.722276 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:21:00.722302 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:21:00.722308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:21:00.722314 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:21:00.722319 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:21:00.722325 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:21:00.722331 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:21:00.722337 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:21:00.722343 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jun 20 19:21:00.722349 kernel: ACPI: Interpreter enabled
Jun 20 19:21:00.722356 kernel: ACPI: PM: (supports S0 S1 S5)
Jun 20 19:21:00.722362 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:21:00.722367 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:21:00.722373 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 19:21:00.722380 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jun 20 19:21:00.722386 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jun 20 19:21:00.722471 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:21:00.722526 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jun 20 19:21:00.722577 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jun 20 19:21:00.722586 kernel: PCI host bridge to bus 0000:00
Jun 20 19:21:00.722636 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:21:00.722681 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jun 20 19:21:00.722725 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 20 19:21:00.722768 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:21:00.722811 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Jun 20 19:21:00.722856 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Jun 20 19:21:00.722915 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Jun 20 19:21:00.722974 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Jun 20 19:21:00.723027 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jun 20 19:21:00.723088 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Jun 20 19:21:00.723146 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Jun 20 19:21:00.723199 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Jun 20 19:21:00.723249 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jun 20 19:21:00.723751 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jun 20 19:21:00.723808 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jun 20 19:21:00.723864 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jun 20 19:21:00.723920 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jun 20 19:21:00.723971 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Jun 20 19:21:00.724021 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Jun 20 19:21:00.724076 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Jun 20 19:21:00.724127 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Jun 20 19:21:00.724179 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Jun 20 19:21:00.724237 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Jun 20 19:21:00.724300 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Jun 20 19:21:00.724355 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Jun 20 19:21:00.724416 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Jun 20 19:21:00.724471 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Jun 20 19:21:00.724521 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 19:21:00.724578 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Jun 20 19:21:00.724629 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jun 20 19:21:00.724678 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jun 20 19:21:00.724727 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jun 20 19:21:00.724776 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jun 20 19:21:00.724830 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.724887 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jun 20 19:21:00.724941 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jun 20 19:21:00.724991 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jun 20 19:21:00.725042 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.725096 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.725148 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jun 20 19:21:00.725197 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jun 20 19:21:00.725248 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jun 20 19:21:00.728431 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jun 20 19:21:00.728506 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.728565 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.728618 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jun 20 19:21:00.728669 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jun 20 19:21:00.728720 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jun 20 19:21:00.728777 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jun 20 19:21:00.728826 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.728882 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.728935 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jun 20 19:21:00.728985 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jun 20 19:21:00.729035 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jun 20 19:21:00.729084 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.729142 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.729193 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jun 20 19:21:00.729244 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jun 20 19:21:00.729312 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jun 20 19:21:00.729364 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.729422 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.729477 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jun 20 19:21:00.729529 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jun 20 19:21:00.729579 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jun 20 19:21:00.729629 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.729683 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.729734 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jun 20 19:21:00.729784 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jun 20 19:21:00.729834 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jun 20 19:21:00.729886 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.729942 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.729993 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jun 20 19:21:00.730044 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jun 20 19:21:00.730102 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jun 20 19:21:00.730152 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.730207 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.730261 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jun 20 19:21:00.732349 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jun 20 19:21:00.732406 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jun 20 19:21:00.732457 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.732516 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.732568 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jun 20 19:21:00.732619 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jun 20 19:21:00.732669 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jun 20 19:21:00.732725 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jun 20 19:21:00.732775 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.732830 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.732895 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jun 20 19:21:00.732946 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jun 20 19:21:00.733006 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jun 20 19:21:00.733070 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jun 20 19:21:00.733125 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.733180 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.733232 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jun 20 19:21:00.733293 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jun 20 19:21:00.733346 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jun 20 19:21:00.733397 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.733452 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.733505 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jun 20 19:21:00.733556 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jun 20 19:21:00.733606 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jun 20 19:21:00.733656 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.733712 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.733763 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jun 20 19:21:00.733814 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jun 20 19:21:00.733866 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jun 20 19:21:00.733916 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.733971 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.734023 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jun 20 19:21:00.734073 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jun 20 19:21:00.734123 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jun 20 19:21:00.734174 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.734232 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.736310 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jun 20 19:21:00.736373 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jun 20 19:21:00.736424 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jun 20 19:21:00.736475 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.736530 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.736581 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jun 20 19:21:00.736636 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jun 20 19:21:00.736686 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Jun 20 19:21:00.736735 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Jun 20 19:21:00.736785 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.736840 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.736891 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jun 20 19:21:00.736944 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Jun 20 19:21:00.736999 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Jun 20 19:21:00.737059 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Jun 20 19:21:00.737111 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.737174 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.737229 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jun 20 19:21:00.737289 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Jun 20 19:21:00.737355 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Jun 20 19:21:00.738366 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Jun 20 19:21:00.738430 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.738491 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.738546 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jun 20 19:21:00.738602 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Jun 20 19:21:00.738654 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Jun 20 19:21:00.738706 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.738763 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.738816 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jun 20 19:21:00.738867 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Jun 20 19:21:00.738919 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Jun 20 19:21:00.738974 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.739030 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.739083 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jun 20 19:21:00.739136 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Jun 20 19:21:00.739186 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Jun 20 19:21:00.739237 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.740377 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.740445 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jun 20 19:21:00.740499 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Jun 20 19:21:00.740551 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Jun 20 19:21:00.740601 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.740656 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.740708 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jun 20 19:21:00.740758 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Jun 20 19:21:00.740811 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Jun 20 19:21:00.740861 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.740919 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.740969 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jun 20 19:21:00.741019 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Jun 20 19:21:00.741069 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Jun 20 19:21:00.741118 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Jun 20 19:21:00.741168 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.741225 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.741275 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jun 20 19:21:00.741335 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Jun 20 19:21:00.741385 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Jun 20 19:21:00.741435 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Jun 20 19:21:00.741501 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.741557 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.741611 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jun 20 19:21:00.741660 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Jun 20 19:21:00.741709 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Jun 20 19:21:00.741759 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.741814 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.741864 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jun 20 19:21:00.741915 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Jun 20 19:21:00.741967 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jun 20 19:21:00.742016 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.742075 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.742127 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jun 20 19:21:00.742176 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jun 20 19:21:00.742226 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jun 20 19:21:00.742276 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.742499 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.742551 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jun 20 19:21:00.742601 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jun 20 19:21:00.742651 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jun 20 19:21:00.742700 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.742754 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.742804 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jun 20 19:21:00.742857 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jun 20 19:21:00.742906 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jun 20 19:21:00.742955 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.743010 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jun 20 19:21:00.743066 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jun 20 19:21:00.743116 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jun 20 19:21:00.743166 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jun 20 19:21:00.743218 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.743274 kernel: pci_bus 0000:01: extended config space not accessible
Jun 20 19:21:00.743357 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jun 20 19:21:00.743409 kernel: pci_bus 0000:02: extended config space not accessible
Jun 20 19:21:00.743418 kernel: acpiphp: Slot [32] registered
Jun 20 19:21:00.743425 kernel: acpiphp: Slot [33] registered
Jun 20 19:21:00.743431 kernel: acpiphp: Slot [34] registered
Jun 20 19:21:00.743437 kernel: acpiphp: Slot [35] registered
Jun 20 19:21:00.743445 kernel: acpiphp: Slot [36] registered
Jun 20 19:21:00.743451 kernel: acpiphp: Slot [37] registered
Jun 20 19:21:00.743457 kernel: acpiphp: Slot [38] registered
Jun 20 19:21:00.743463 kernel: acpiphp: Slot [39] registered
Jun 20 19:21:00.743469 kernel: acpiphp: Slot [40] registered
Jun 20 19:21:00.743475 kernel: acpiphp: Slot [41] registered
Jun 20 19:21:00.743480 kernel: acpiphp: Slot [42] registered
Jun 20 19:21:00.743486 kernel: acpiphp: Slot [43] registered
Jun 20 19:21:00.743492 kernel: acpiphp: Slot [44] registered
Jun 20 19:21:00.743498 kernel: acpiphp: Slot [45] registered
Jun 20 19:21:00.743505 kernel: acpiphp: Slot [46] registered
Jun 20 19:21:00.743511 kernel: acpiphp: Slot [47] registered
Jun 20 19:21:00.743517 kernel: acpiphp: Slot [48] registered
Jun 20 19:21:00.743522 kernel: acpiphp: Slot [49] registered
Jun 20 19:21:00.743528 kernel: acpiphp: Slot [50] registered
Jun 20 19:21:00.743534 kernel: acpiphp: Slot [51] registered
Jun 20 19:21:00.743540 kernel: acpiphp: Slot [52] registered
Jun 20 19:21:00.743546 kernel: acpiphp: Slot [53] registered
Jun 20 19:21:00.743552 kernel: acpiphp: Slot [54] registered
Jun 20 19:21:00.743559 kernel: acpiphp: Slot [55] registered
Jun 20 19:21:00.743565 kernel: acpiphp: Slot [56] registered
Jun 20 19:21:00.743571 kernel: acpiphp: Slot [57] registered
Jun 20 19:21:00.743576 kernel: acpiphp: Slot [58] registered
Jun 20 19:21:00.743582 kernel: acpiphp: Slot [59] registered
Jun 20 19:21:00.743588 kernel: acpiphp: Slot [60] registered
Jun 20 19:21:00.743594 kernel: acpiphp: Slot [61] registered
Jun 20 19:21:00.743599 kernel: acpiphp: Slot [62] registered
Jun 20 19:21:00.743605 kernel: acpiphp: Slot [63] registered
Jun 20 19:21:00.743655 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jun 20 19:21:00.743707 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Jun 20 19:21:00.743757 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Jun 20 19:21:00.743806 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Jun 20 19:21:00.743854 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Jun 20 19:21:00.743903 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Jun 20 19:21:00.743960 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint
Jun 20 19:21:00.744011 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007]
Jun 20 19:21:00.744065 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit]
Jun 20 19:21:00.744115 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref]
Jun 20 19:21:00.744166 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jun 20 19:21:00.744215 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jun 20 19:21:00.744267 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jun 20 19:21:00.744328 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jun 20 19:21:00.744382 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jun 20 19:21:00.744436 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jun 20 19:21:00.744487 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jun 20 19:21:00.744547 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jun 20 19:21:00.744601 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jun 20 19:21:00.744654 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jun 20 19:21:00.744709 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint
Jun 20 19:21:00.744762 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff]
Jun 20 19:21:00.744815 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff]
Jun 20 19:21:00.744866 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff]
Jun 20 19:21:00.744917 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f]
Jun 20 19:21:00.744967 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref]
Jun 20 19:21:00.745017 kernel: pci 0000:0b:00.0: supports D1 D2
Jun 20 19:21:00.745068 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jun 20 19:21:00.745119 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jun 20 19:21:00.745169 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jun 20 19:21:00.745224 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jun 20 19:21:00.745276 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jun 20 19:21:00.745351 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jun 20 19:21:00.745403 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jun 20 19:21:00.745455 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jun 20 19:21:00.745507 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jun 20 19:21:00.745559 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jun 20 19:21:00.745614 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jun 20 19:21:00.745665 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jun 20 19:21:00.745717 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jun 20 19:21:00.745769 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jun 20 19:21:00.745822 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jun 20 19:21:00.745875 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jun 20 19:21:00.745927 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jun 20 19:21:00.745978 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jun 20 19:21:00.746033 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jun 20 19:21:00.746090 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jun 20 19:21:00.746141 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jun 20 19:21:00.746192 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jun 20 19:21:00.746243 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jun 20 19:21:00.746315 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jun 20 19:21:00.746368 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jun 20 19:21:00.746423 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jun 20 19:21:00.746432 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Jun 20 19:21:00.746438 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Jun 20 19:21:00.746444 kernel: ACPI: PCI: Interrupt link LNKB disabled
Jun 20 19:21:00.746450 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:21:00.746456 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Jun 20 19:21:00.746462 kernel: iommu: Default domain type: Translated
Jun 20 19:21:00.746468 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:21:00.746476 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:21:00.746482 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:21:00.746489 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Jun 20 19:21:00.746494 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Jun 20 19:21:00.746544 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Jun 20 19:21:00.746593 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Jun 20 19:21:00.746642 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 19:21:00.746650 kernel: vgaarb: loaded
Jun 20 19:21:00.746656 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Jun 20 19:21:00.746664 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Jun 20 19:21:00.746670 kernel: clocksource: Switched to clocksource tsc-early
Jun 20 19:21:00.746677 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:21:00.746683 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:21:00.746689 kernel: pnp: PnP ACPI init
Jun 20 19:21:00.746743 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Jun 20 19:21:00.746790 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Jun 20 19:21:00.746834 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Jun 20 19:21:00.746886 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Jun 20 19:21:00.746934 kernel: pnp 00:06: [dma 2]
Jun 20 19:21:00.746984 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Jun 20 19:21:00.747029 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jun 20 19:21:00.747074 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Jun 20 19:21:00.747082 kernel: pnp: PnP ACPI: found 8 devices
Jun 20 19:21:00.747088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:21:00.747096 kernel: NET: Registered PF_INET protocol family
Jun 20 19:21:00.747102 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:21:00.747108 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jun 20 19:21:00.747114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:21:00.747120 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 19:21:00.747126 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jun 20 19:21:00.747132 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jun 20 19:21:00.747138 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 20 19:21:00.747144 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jun 20 19:21:00.747151 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:21:00.747157 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:21:00.747207 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jun 20 19:21:00.747259 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jun 20 19:21:00.747320 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jun 20 19:21:00.747372 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jun 20 19:21:00.747423 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jun 20 19:21:00.747473 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jun 20 19:21:00.747526 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jun 20 19:21:00.747578 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000
Jun 20 19:21:00.747628 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000
Jun 20 19:21:00.747679 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000
Jun 20 19:21:00.747730 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000
Jun 20 19:21:00.747779 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000
Jun 20 19:21:00.747830 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000
Jun 20 19:21:00.747882 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000
Jun 20 19:21:00.747932 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000
Jun 20 19:21:00.747982 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000
Jun 20 19:21:00.748033 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000
Jun 20 19:21:00.748084 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000
Jun 20 19:21:00.748135 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000
Jun 20 19:21:00.748184 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000
Jun 20 19:21:00.748234 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000
Jun 20 19:21:00.748297 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000
Jun 20 19:21:00.748351 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000
Jun 20 19:21:00.748402 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned
Jun 20 19:21:00.748451 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned
Jun 20 19:21:00.748502 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.748550 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.748602 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.748651 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.748704 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.748754 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.748803 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.748851 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.748901 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.748951 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749001 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749050 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749109 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749159 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749208 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749257 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749320 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749371 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749421 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749473 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749523 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749572 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749623 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749673 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749722 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749773 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749824 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749875 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.749925 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.749974 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750024 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750080 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750130 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750180 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750229 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750302 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750357 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750407 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750457 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750507 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750557 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750607 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750656 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750709 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750760 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750810 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750860 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.750910 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.750959 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.751008 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.751058 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.751106 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.751156 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.751208 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.751258 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.751332 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.751383 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space
Jun 20 19:21:00.751433 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign
Jun 20 19:21:00.751483 kernel: pci 0000:00:17.5: bridge
window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.751533 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.751583 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.751634 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.751687 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.751736 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.751785 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.751835 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.751885 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.751934 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.751984 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.752033 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.752087 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.752138 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.752190 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.752240 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.752305 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.752356 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.752407 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.752459 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.752511 kernel: pci 
0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.752560 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.753005 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.753071 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.753130 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.753182 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.753235 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:21:00.754317 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jun 20 19:21:00.754383 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 20 19:21:00.754439 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 20 19:21:00.754490 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:21:00.754545 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Jun 20 19:21:00.754597 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 20 19:21:00.754647 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 20 19:21:00.754696 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 20 19:21:00.754746 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jun 20 19:21:00.754798 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 20 19:21:00.754850 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 20 19:21:00.754900 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 20 19:21:00.754950 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 20 19:21:00.755001 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 20 19:21:00.755051 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 20 19:21:00.755101 kernel: pci 0000:00:15.2: bridge window 
[mem 0xfcd00000-0xfcdfffff] Jun 20 19:21:00.755150 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:21:00.755200 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 20 19:21:00.755249 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 20 19:21:00.755310 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:21:00.755364 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 20 19:21:00.755413 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 20 19:21:00.755462 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:21:00.755513 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 20 19:21:00.755562 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 20 19:21:00.755611 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:21:00.755663 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 20 19:21:00.755713 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 20 19:21:00.755762 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:21:00.755812 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 20 19:21:00.755861 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 20 19:21:00.755910 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:21:00.755964 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Jun 20 19:21:00.756013 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 20 19:21:00.756071 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 20 19:21:00.756122 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 20 19:21:00.756170 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jun 20 19:21:00.756220 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 20 19:21:00.756269 
kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 20 19:21:00.757345 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 20 19:21:00.757403 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 20 19:21:00.757458 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 20 19:21:00.757509 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 20 19:21:00.757559 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 20 19:21:00.757612 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:21:00.757663 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 20 19:21:00.757712 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 20 19:21:00.757762 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:21:00.757812 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 20 19:21:00.757862 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 20 19:21:00.757912 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:21:00.757965 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 20 19:21:00.758014 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 20 19:21:00.758063 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:21:00.758114 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 20 19:21:00.758163 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 20 19:21:00.758212 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:21:00.758263 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 20 19:21:00.758324 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 20 19:21:00.758376 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:21:00.758427 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 
20 19:21:00.758476 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 20 19:21:00.758525 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 20 19:21:00.758574 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:21:00.758624 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 20 19:21:00.758673 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 20 19:21:00.758721 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 20 19:21:00.758770 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:21:00.758823 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 20 19:21:00.758872 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 20 19:21:00.758920 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 20 19:21:00.758969 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 19:21:00.759019 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 20 19:21:00.759073 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 20 19:21:00.759123 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 19:21:00.759174 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 20 19:21:00.759225 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 20 19:21:00.759275 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:21:00.759665 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 20 19:21:00.759718 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 20 19:21:00.759768 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:21:00.759819 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 20 19:21:00.759869 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 20 19:21:00.759918 kernel: pci 0000:00:17.6: bridge window [mem 
0xe6200000-0xe62fffff 64bit pref] Jun 20 19:21:00.759972 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 20 19:21:00.760022 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 20 19:21:00.760078 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:21:00.760130 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 20 19:21:00.760180 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 20 19:21:00.760229 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 20 19:21:00.760287 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:21:00.760341 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 20 19:21:00.760394 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 20 19:21:00.760443 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 20 19:21:00.760493 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:21:00.760544 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 20 19:21:00.760593 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 20 19:21:00.760643 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:21:00.760695 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 20 19:21:00.760744 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 20 19:21:00.760795 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:21:00.760846 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 20 19:21:00.760895 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 20 19:21:00.760944 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 20 19:21:00.760994 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 20 19:21:00.761043 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 20 19:21:00.761093 kernel: pci 0000:00:18.5: 
bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:21:00.761147 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 20 19:21:00.761196 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 20 19:21:00.761244 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:21:00.761319 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 20 19:21:00.761371 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 20 19:21:00.761421 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:21:00.761470 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jun 20 19:21:00.761516 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 20 19:21:00.761560 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 20 19:21:00.761603 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jun 20 19:21:00.761645 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jun 20 19:21:00.761693 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jun 20 19:21:00.761851 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jun 20 19:21:00.761902 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:21:00.761947 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jun 20 19:21:00.761995 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 20 19:21:00.762043 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 20 19:21:00.762091 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jun 20 19:21:00.762136 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jun 20 19:21:00.762186 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jun 20 19:21:00.762232 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jun 20 19:21:00.762277 kernel: pci_bus 0000:03: resource 2 [mem 
0xc0000000-0xc01fffff 64bit pref] Jun 20 19:21:00.762348 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jun 20 19:21:00.762395 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jun 20 19:21:00.762440 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jun 20 19:21:00.762492 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jun 20 19:21:00.762538 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jun 20 19:21:00.762583 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:21:00.762631 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jun 20 19:21:00.762680 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:21:00.762729 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jun 20 19:21:00.762774 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:21:00.762823 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jun 20 19:21:00.762868 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:21:00.762917 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jun 20 19:21:00.762965 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:21:00.763016 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jun 20 19:21:00.763067 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:21:00.763116 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jun 20 19:21:00.763162 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jun 20 19:21:00.763209 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jun 20 19:21:00.763258 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jun 20 19:21:00.763320 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jun 20 19:21:00.763366 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 
64bit pref] Jun 20 19:21:00.763415 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jun 20 19:21:00.763461 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jun 20 19:21:00.763506 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:21:00.763559 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jun 20 19:21:00.763605 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:21:00.763653 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jun 20 19:21:00.763699 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:21:00.763747 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jun 20 19:21:00.763794 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:21:00.763845 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jun 20 19:21:00.763890 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:21:00.763941 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jun 20 19:21:00.763987 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:21:00.764035 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jun 20 19:21:00.764081 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jun 20 19:21:00.764128 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:21:00.764177 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jun 20 19:21:00.764222 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jun 20 19:21:00.764266 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:21:00.764325 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jun 20 19:21:00.764371 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jun 20 19:21:00.764415 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 
19:21:00.764466 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jun 20 19:21:00.764511 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 19:21:00.764562 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jun 20 19:21:00.764607 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:21:00.764655 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jun 20 19:21:00.764700 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:21:00.764750 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jun 20 19:21:00.764797 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jun 20 19:21:00.764846 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jun 20 19:21:00.764891 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:21:00.764941 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jun 20 19:21:00.764987 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jun 20 19:21:00.765031 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:21:00.765082 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jun 20 19:21:00.765128 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jun 20 19:21:00.765173 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:21:00.765222 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jun 20 19:21:00.765268 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:21:00.765325 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jun 20 19:21:00.765374 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:21:00.765423 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jun 20 19:21:00.765469 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] 
Jun 20 19:21:00.765518 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jun 20 19:21:00.765563 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:21:00.765613 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jun 20 19:21:00.765658 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:21:00.765710 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jun 20 19:21:00.765755 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:21:00.765810 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 19:21:00.765819 kernel: PCI: CLS 32 bytes, default 64 Jun 20 19:21:00.765826 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 19:21:00.765832 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 20 19:21:00.765838 kernel: clocksource: Switched to clocksource tsc Jun 20 19:21:00.765846 kernel: Initialise system trusted keyrings Jun 20 19:21:00.765852 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 19:21:00.765858 kernel: Key type asymmetric registered Jun 20 19:21:00.765864 kernel: Asymmetric key parser 'x509' registered Jun 20 19:21:00.765870 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:21:00.765876 kernel: io scheduler mq-deadline registered Jun 20 19:21:00.765883 kernel: io scheduler kyber registered Jun 20 19:21:00.765888 kernel: io scheduler bfq registered Jun 20 19:21:00.765940 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jun 20 19:21:00.765993 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766045 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jun 20 19:21:00.766100 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766151 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jun 20 19:21:00.766201 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766253 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jun 20 19:21:00.766381 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766438 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jun 20 19:21:00.766491 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766543 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jun 20 19:21:00.766592 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766643 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jun 20 19:21:00.766693 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766743 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jun 20 19:21:00.766795 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766847 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jun 20 19:21:00.766896 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766947 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jun 20 19:21:00.766997 kernel: pcieport 
0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.767047 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jun 20 19:21:00.767096 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.767148 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jun 20 19:21:00.767198 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.767257 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jun 20 19:21:00.767698 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.767755 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jun 20 19:21:00.767808 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.767860 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jun 20 19:21:00.767912 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.767968 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jun 20 19:21:00.768019 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768070 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jun 20 19:21:00.768120 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768171 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jun 20 
19:21:00.768221 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768272 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jun 20 19:21:00.768348 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768402 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jun 20 19:21:00.768453 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768505 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jun 20 19:21:00.768556 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768608 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jun 20 19:21:00.768662 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768713 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jun 20 19:21:00.768766 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768818 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jun 20 19:21:00.768868 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.768920 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jun 20 19:21:00.768970 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769021 kernel: pcieport 0000:00:18.1: PME: 
Signaling with IRQ 49 Jun 20 19:21:00.769072 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769125 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jun 20 19:21:00.769176 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769227 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jun 20 19:21:00.769295 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769355 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jun 20 19:21:00.769406 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769458 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jun 20 19:21:00.769510 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769563 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jun 20 19:21:00.769613 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769665 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jun 20 19:21:00.769716 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.769725 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:21:00.769734 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:21:00.769741 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 
19:21:00.769748 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jun 20 19:21:00.769754 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:21:00.769760 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:21:00.769812 kernel: rtc_cmos 00:01: registered as rtc0 Jun 20 19:21:00.769860 kernel: rtc_cmos 00:01: setting system clock to 2025-06-20T19:21:00 UTC (1750447260) Jun 20 19:21:00.769869 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 19:21:00.769912 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jun 20 19:21:00.769923 kernel: intel_pstate: CPU model not supported Jun 20 19:21:00.769930 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:21:00.769936 kernel: Segment Routing with IPv6 Jun 20 19:21:00.769942 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:21:00.769949 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:21:00.769955 kernel: Key type dns_resolver registered Jun 20 19:21:00.769961 kernel: IPI shorthand broadcast: enabled Jun 20 19:21:00.769968 kernel: sched_clock: Marking stable (2671003519, 167650021)->(2853023831, -14370291) Jun 20 19:21:00.769974 kernel: registered taskstats version 1 Jun 20 19:21:00.769981 kernel: Loading compiled-in X.509 certificates Jun 20 19:21:00.769988 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:21:00.769994 kernel: Demotion targets for Node 0: null Jun 20 19:21:00.770001 kernel: Key type .fscrypt registered Jun 20 19:21:00.770007 kernel: Key type fscrypt-provisioning registered Jun 20 19:21:00.770013 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 20 19:21:00.770019 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:21:00.770025 kernel: ima: No architecture policies found Jun 20 19:21:00.770032 kernel: clk: Disabling unused clocks Jun 20 19:21:00.770039 kernel: Warning: unable to open an initial console. Jun 20 19:21:00.770046 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:21:00.770052 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:21:00.770064 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:21:00.770071 kernel: Run /init as init process Jun 20 19:21:00.770077 kernel: with arguments: Jun 20 19:21:00.770084 kernel: /init Jun 20 19:21:00.770090 kernel: with environment: Jun 20 19:21:00.770096 kernel: HOME=/ Jun 20 19:21:00.770102 kernel: TERM=linux Jun 20 19:21:00.770109 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:21:00.770117 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:21:00.770127 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:21:00.770134 systemd[1]: Detected virtualization vmware. Jun 20 19:21:00.770140 systemd[1]: Detected architecture x86-64. Jun 20 19:21:00.770146 systemd[1]: Running in initrd. Jun 20 19:21:00.770153 systemd[1]: No hostname configured, using default hostname. Jun 20 19:21:00.770161 systemd[1]: Hostname set to . Jun 20 19:21:00.770167 systemd[1]: Initializing machine ID from random generator. Jun 20 19:21:00.770173 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:21:00.770180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 20 19:21:00.770186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:21:00.770194 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:21:00.770200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:21:00.770207 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:21:00.770215 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:21:00.770222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:21:00.770229 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:21:00.770236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:21:00.770242 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:21:00.770249 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:21:00.770255 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:21:00.770263 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:21:00.770269 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:21:00.770276 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:21:00.771913 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:21:00.771922 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:21:00.771929 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:21:00.771935 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jun 20 19:21:00.771942 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:21:00.771950 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:21:00.771957 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:21:00.771964 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:21:00.771970 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:21:00.771977 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:21:00.771984 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:21:00.771991 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:21:00.771997 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:21:00.772004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:21:00.772012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:21:00.772018 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:21:00.772025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:21:00.772032 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:21:00.772040 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:21:00.772062 systemd-journald[243]: Collecting audit messages is disabled. Jun 20 19:21:00.772080 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:21:00.772087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jun 20 19:21:00.772096 kernel: Bridge firewalling registered Jun 20 19:21:00.772102 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:21:00.772109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:21:00.772116 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:21:00.772122 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:21:00.772129 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:21:00.772136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:21:00.772142 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:21:00.772151 systemd-journald[243]: Journal started Jun 20 19:21:00.772166 systemd-journald[243]: Runtime Journal (/run/log/journal/df8c84ef0e8e4257b3d131b7c106ca44) is 4.8M, max 38.8M, 34M free. Jun 20 19:21:00.717664 systemd-modules-load[244]: Inserted module 'overlay' Jun 20 19:21:00.773172 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:21:00.740033 systemd-modules-load[244]: Inserted module 'br_netfilter' Jun 20 19:21:00.776356 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:21:00.785403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:21:00.787969 systemd-tmpfiles[276]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:21:00.788597 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:21:00.789528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:21:00.797347 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 20 19:21:00.804704 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:21:00.822016 systemd-resolved[283]: Positive Trust Anchors: Jun 20 19:21:00.822023 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:21:00.822046 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:21:00.824650 systemd-resolved[283]: Defaulting to hostname 'linux'. Jun 20 19:21:00.825330 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:21:00.825497 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:21:00.859298 kernel: SCSI subsystem initialized Jun 20 19:21:00.876310 kernel: Loading iSCSI transport class v2.0-870. 
Jun 20 19:21:00.884297 kernel: iscsi: registered transport (tcp) Jun 20 19:21:00.907299 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:21:00.907342 kernel: QLogic iSCSI HBA Driver Jun 20 19:21:00.917106 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:21:00.928091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:21:00.928912 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:21:00.950938 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:21:00.951758 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:21:00.992330 kernel: raid6: avx2x4 gen() 48394 MB/s Jun 20 19:21:01.008322 kernel: raid6: avx2x2 gen() 55048 MB/s Jun 20 19:21:01.025426 kernel: raid6: avx2x1 gen() 45515 MB/s Jun 20 19:21:01.025447 kernel: raid6: using algorithm avx2x2 gen() 55048 MB/s Jun 20 19:21:01.043442 kernel: raid6: .... xor() 33356 MB/s, rmw enabled Jun 20 19:21:01.043462 kernel: raid6: using avx2x2 recovery algorithm Jun 20 19:21:01.057299 kernel: xor: automatically using best checksumming function avx Jun 20 19:21:01.163296 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:21:01.166543 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:21:01.167584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:21:01.188902 systemd-udevd[492]: Using default interface naming scheme 'v255'. Jun 20 19:21:01.192598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:21:01.193845 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:21:01.210263 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jun 20 19:21:01.224803 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jun 20 19:21:01.225684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:21:01.299971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:21:01.301456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:21:01.368338 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jun 20 19:21:01.368374 kernel: vmw_pvscsi: using 64bit dma Jun 20 19:21:01.369383 kernel: vmw_pvscsi: max_id: 16 Jun 20 19:21:01.369404 kernel: vmw_pvscsi: setting ring_pages to 8 Jun 20 19:21:01.375343 kernel: vmw_pvscsi: enabling reqCallThreshold Jun 20 19:21:01.375361 kernel: vmw_pvscsi: driver-based request coalescing enabled Jun 20 19:21:01.375369 kernel: vmw_pvscsi: using MSI-X Jun 20 19:21:01.376932 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jun 20 19:21:01.377744 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jun 20 19:21:01.380286 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Jun 20 19:21:01.380302 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jun 20 19:21:01.381577 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jun 20 19:21:01.390291 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jun 20 19:21:01.404078 (udev-worker)[548]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 20 19:21:01.406219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:21:01.406298 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:21:01.407368 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jun 20 19:21:01.409924 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:21:01.411425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 19:21:01.420300 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jun 20 19:21:01.423295 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 19:21:01.423390 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jun 20 19:21:01.423455 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jun 20 19:21:01.423539 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jun 20 19:21:01.423602 kernel: libata version 3.00 loaded. Jun 20 19:21:01.426317 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:21:01.431292 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:01.435289 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 19:21:01.438817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:21:01.442337 kernel: ata_piix 0000:00:07.1: version 2.13 Jun 20 19:21:01.443286 kernel: scsi host1: ata_piix Jun 20 19:21:01.446064 kernel: AES CTR mode by8 optimization enabled Jun 20 19:21:01.446085 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jun 20 19:21:01.449099 kernel: scsi host2: ata_piix Jun 20 19:21:01.449207 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Jun 20 19:21:01.449220 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Jun 20 19:21:01.486580 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jun 20 19:21:01.492079 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jun 20 19:21:01.497579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 20 19:21:01.502030 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jun 20 19:21:01.502323 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. 
Jun 20 19:21:01.503117 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:21:01.545310 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:01.560302 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:01.618399 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jun 20 19:21:01.625329 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jun 20 19:21:01.647290 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jun 20 19:21:01.647416 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:21:01.658329 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 20 19:21:01.978019 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:21:01.978569 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:21:01.978692 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:21:01.978885 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:21:01.979504 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:21:01.989381 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:21:02.555299 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:02.556764 disk-uuid[633]: The operation has completed successfully. Jun 20 19:21:02.590913 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:21:02.590982 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:21:02.606350 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:21:02.619150 sh[674]: Success Jun 20 19:21:02.632299 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 20 19:21:02.632333 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:21:02.633986 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:21:02.641293 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 20 19:21:02.676362 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:21:02.679326 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:21:02.690367 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:21:02.706447 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:21:02.706503 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (686) Jun 20 19:21:02.708224 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:21:02.708246 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:02.710304 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:21:02.719045 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:21:02.719395 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:21:02.719982 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jun 20 19:21:02.721347 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jun 20 19:21:02.748301 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (709) Jun 20 19:21:02.751327 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:02.751345 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:02.751353 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:21:02.766300 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:02.767791 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:21:02.768692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:21:02.801443 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 20 19:21:02.802224 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:21:02.869085 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:21:02.870132 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 20 19:21:02.901353 ignition[728]: Ignition 2.21.0 Jun 20 19:21:02.901360 ignition[728]: Stage: fetch-offline Jun 20 19:21:02.902466 systemd-networkd[865]: lo: Link UP Jun 20 19:21:02.901378 ignition[728]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:02.902468 systemd-networkd[865]: lo: Gained carrier Jun 20 19:21:02.901383 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:02.901433 ignition[728]: parsed url from cmdline: "" Jun 20 19:21:02.901435 ignition[728]: no config URL provided Jun 20 19:21:02.901438 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:21:02.903189 systemd-networkd[865]: Enumeration completed Jun 20 19:21:02.901442 ignition[728]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:21:02.903244 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:21:02.901809 ignition[728]: config successfully fetched Jun 20 19:21:02.903419 systemd[1]: Reached target network.target - Network. Jun 20 19:21:02.901827 ignition[728]: parsing config with SHA512: 6722f65f6c043b50b2c71e6c8148a6bcbd23e5a0d7eca2722d9e92edd04c3026c0c64e64546eed1d65a6d97e4b2bb5f9bcbc74ea027ecfc613f6a9e3e8700aa2 Jun 20 19:21:02.903638 systemd-networkd[865]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
Jun 20 19:21:02.905297 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 20 19:21:02.906750 systemd-networkd[865]: ens192: Link UP Jun 20 19:21:02.906754 systemd-networkd[865]: ens192: Gained carrier Jun 20 19:21:02.908310 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 20 19:21:02.909523 unknown[728]: fetched base config from "system" Jun 20 19:21:02.909532 unknown[728]: fetched user config from "vmware" Jun 20 19:21:02.909738 ignition[728]: fetch-offline: fetch-offline passed Jun 20 19:21:02.909772 ignition[728]: Ignition finished successfully Jun 20 19:21:02.910789 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:21:02.911374 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 20 19:21:02.913086 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:21:02.936394 ignition[870]: Ignition 2.21.0 Jun 20 19:21:02.936670 ignition[870]: Stage: kargs Jun 20 19:21:02.936861 ignition[870]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:02.936987 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:02.937664 ignition[870]: kargs: kargs passed Jun 20 19:21:02.937692 ignition[870]: Ignition finished successfully Jun 20 19:21:02.938848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:21:02.939727 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 19:21:02.954980 ignition[877]: Ignition 2.21.0 Jun 20 19:21:02.954989 ignition[877]: Stage: disks Jun 20 19:21:02.955073 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:02.955078 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:02.955521 ignition[877]: disks: disks passed Jun 20 19:21:02.955545 ignition[877]: Ignition finished successfully Jun 20 19:21:02.956384 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:21:02.956849 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:21:02.956960 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:21:02.957061 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:21:02.957148 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:21:02.957234 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:21:02.957838 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:21:02.993273 systemd-fsck[885]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jun 20 19:21:02.994078 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:21:02.995044 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:21:03.073390 kernel: EXT4-fs (sda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:21:03.073831 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:21:03.074194 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:21:03.075693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:21:03.076328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jun 20 19:21:03.077484 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 19:21:03.077699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:21:03.077719 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:21:03.085517 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:21:03.086581 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:21:03.091610 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (893) Jun 20 19:21:03.091639 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:03.093944 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:03.093966 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:21:03.098388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:21:03.116909 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:21:03.119484 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:21:03.121638 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:21:03.123614 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:21:03.160192 systemd-resolved[283]: Detected conflict on linux IN A 139.178.70.108 Jun 20 19:21:03.160200 systemd-resolved[283]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Jun 20 19:21:03.178306 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:21:03.179096 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jun 20 19:21:03.180335 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:21:03.192293 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:03.206087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:21:03.207935 ignition[1006]: INFO : Ignition 2.21.0 Jun 20 19:21:03.208142 ignition[1006]: INFO : Stage: mount Jun 20 19:21:03.208353 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:03.208490 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:03.209131 ignition[1006]: INFO : mount: mount passed Jun 20 19:21:03.209253 ignition[1006]: INFO : Ignition finished successfully Jun 20 19:21:03.209980 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:21:03.210629 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:21:03.704604 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:21:03.705538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:21:03.720301 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1018) Jun 20 19:21:03.723094 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:03.723113 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:03.723121 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:21:03.726349 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:21:03.744234 ignition[1035]: INFO : Ignition 2.21.0 Jun 20 19:21:03.745093 ignition[1035]: INFO : Stage: files Jun 20 19:21:03.745093 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:03.745093 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:03.745093 ignition[1035]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:21:03.745675 ignition[1035]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:21:03.745813 ignition[1035]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:21:03.747375 ignition[1035]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:21:03.747605 ignition[1035]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:21:03.747732 ignition[1035]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:21:03.747636 unknown[1035]: wrote ssh authorized keys file for user: core Jun 20 19:21:03.749122 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:21:03.749359 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 20 19:21:03.760704 systemd-resolved[283]: Detected conflict on linux11 IN A 139.178.70.108 Jun 20 19:21:03.760714 systemd-resolved[283]: Hostname conflict, changing published hostname from 'linux11' to 'linux18'. 
Jun 20 19:21:03.779972 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:21:03.893578 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:21:03.894211 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:21:03.895720 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:21:03.896032 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:21:03.896032 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:21:03.898229 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:21:03.898229 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:21:03.898795 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jun 20 19:21:04.608675 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jun 20 19:21:04.748367 systemd-networkd[865]: ens192: Gained IPv6LL
Jun 20 19:21:04.788006 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 20 19:21:04.788006 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jun 20 19:21:04.812522 ignition[1035]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jun 20 19:21:04.812952 ignition[1035]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:21:04.837037 ignition[1035]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:21:04.843823 ignition[1035]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:21:04.843823 ignition[1035]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:21:04.843823 ignition[1035]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jun 20 19:21:04.844339 ignition[1035]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:21:04.844339 ignition[1035]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jun 20 19:21:04.844339 ignition[1035]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jun 20 19:21:04.844339 ignition[1035]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:21:04.870402 ignition[1035]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:21:04.872572 ignition[1035]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jun 20 19:21:04.872816 ignition[1035]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jun 20 19:21:04.872816 ignition[1035]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:21:04.872816 ignition[1035]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:21:04.872816 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:21:04.874147 ignition[1035]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:21:04.874147 ignition[1035]: INFO : files: files passed
Jun 20 19:21:04.874147 ignition[1035]: INFO : Ignition finished successfully
Jun 20 19:21:04.873751 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:21:04.874775 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:21:04.877139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:21:04.890226 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:21:04.890316 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:21:04.893977 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:21:04.893977 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:21:04.894640 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:21:04.895540 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:21:04.896086 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:21:04.896701 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:21:04.931750 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:21:04.931824 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:21:04.932143 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:21:04.932292 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:21:04.932505 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:21:04.933017 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:21:04.948508 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:21:04.949414 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:21:04.962664 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:21:04.963010 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:21:04.963410 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:21:04.963713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:21:04.963914 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:21:04.964405 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:21:04.964708 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:21:04.964962 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:21:04.965294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:21:04.965578 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:21:04.965900 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:21:04.966210 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:21:04.966489 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:21:04.966843 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:21:04.967156 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:21:04.967451 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:21:04.967703 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:21:04.967902 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:21:04.968368 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:21:04.968681 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:21:04.968992 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:21:04.969163 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:21:04.969487 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:21:04.969559 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:21:04.970043 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:21:04.970155 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:21:04.970745 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:21:04.971002 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:21:04.971207 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:21:04.971558 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:21:04.971841 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:21:04.972113 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:21:04.972175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:21:04.972594 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:21:04.972653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:21:04.973083 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:21:04.973166 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:21:04.973653 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:21:04.973723 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:21:04.974695 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:21:04.974955 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:21:04.975161 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:21:04.975909 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:21:04.977468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:21:04.977714 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:21:04.978060 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:21:04.978267 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:21:04.980806 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:21:04.982461 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:21:04.991391 ignition[1091]: INFO : Ignition 2.21.0
Jun 20 19:21:04.991391 ignition[1091]: INFO : Stage: umount
Jun 20 19:21:04.991768 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:21:04.991768 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jun 20 19:21:04.993562 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:21:04.995367 ignition[1091]: INFO : umount: umount passed
Jun 20 19:21:04.995586 ignition[1091]: INFO : Ignition finished successfully
Jun 20 19:21:04.997064 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:21:04.997347 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:21:04.997701 systemd[1]: Stopped target network.target - Network.
Jun 20 19:21:04.997908 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:21:04.997938 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:21:04.998265 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:21:04.998399 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:21:04.998636 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:21:04.998756 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:21:04.998992 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:21:04.999113 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:21:04.999572 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:21:04.999841 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:21:05.004672 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:21:05.004956 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:21:05.006075 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:21:05.006391 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:21:05.006606 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:21:05.008100 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:21:05.008824 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:21:05.009003 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:21:05.009026 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:21:05.009906 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:21:05.010026 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:21:05.010056 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:21:05.010222 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jun 20 19:21:05.010247 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jun 20 19:21:05.010468 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:21:05.010493 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:21:05.010783 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:21:05.010822 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:21:05.011506 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:21:05.011529 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:21:05.011854 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:21:05.013120 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:21:05.013156 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:21:05.026483 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:21:05.028561 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:21:05.029076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:21:05.029104 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:21:05.029543 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:21:05.029693 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:21:05.029920 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:21:05.030042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:21:05.030352 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:21:05.030379 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:21:05.030752 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:21:05.030777 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:21:05.031688 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:21:05.031925 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:21:05.032079 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:21:05.033355 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:21:05.033383 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:21:05.033810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:21:05.033941 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:21:05.034958 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:21:05.034992 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:21:05.035017 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:21:05.035170 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:21:05.040342 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:21:05.043347 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:21:05.043405 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:21:05.051270 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:21:05.051379 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:21:05.051684 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:21:05.051825 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:21:05.051859 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:21:05.052643 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:21:05.066678 systemd[1]: Switching root.
Jun 20 19:21:05.104704 systemd-journald[243]: Journal stopped
Jun 20 19:21:07.267660 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:21:07.267686 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:21:07.267695 kernel: SELinux: policy capability open_perms=1
Jun 20 19:21:07.267701 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:21:07.267708 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:21:07.267715 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:21:07.267721 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:21:07.267727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:21:07.267733 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:21:07.267739 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:21:07.267745 systemd[1]: Successfully loaded SELinux policy in 104.170ms.
Jun 20 19:21:07.267753 kernel: audit: type=1403 audit(1750447265.892:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:21:07.267760 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.075ms.
Jun 20 19:21:07.267768 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:21:07.267775 systemd[1]: Detected virtualization vmware.
Jun 20 19:21:07.267782 systemd[1]: Detected architecture x86-64.
Jun 20 19:21:07.267789 systemd[1]: Detected first boot.
Jun 20 19:21:07.267796 systemd[1]: Initializing machine ID from random generator.
Jun 20 19:21:07.267803 zram_generator::config[1134]: No configuration found.
Jun 20 19:21:07.267893 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Jun 20 19:21:07.267904 kernel: Guest personality initialized and is active
Jun 20 19:21:07.267911 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:21:07.267917 kernel: Initialized host personality
Jun 20 19:21:07.267926 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:21:07.267933 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:21:07.267941 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jun 20 19:21:07.267949 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Jun 20 19:21:07.267955 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:21:07.267962 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:21:07.267969 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:21:07.267977 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:21:07.267984 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:21:07.267991 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:21:07.267998 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:21:07.268004 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:21:07.268011 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:21:07.268018 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:21:07.268026 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:21:07.268033 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:21:07.268040 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:21:07.268049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:21:07.268056 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:21:07.268062 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:21:07.268069 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:21:07.268077 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:21:07.268085 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:21:07.268092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:21:07.268099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:21:07.268106 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:21:07.268113 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:21:07.268120 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:21:07.268127 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:21:07.268134 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:21:07.268142 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:21:07.268149 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:21:07.268156 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:21:07.268162 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:21:07.268170 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:21:07.268178 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:21:07.268185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:21:07.268192 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:21:07.268199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:21:07.268206 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:21:07.268213 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:21:07.268220 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:21:07.268228 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:21:07.268236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:21:07.268243 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:21:07.268250 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:21:07.268257 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:21:07.268265 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:21:07.268272 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:21:07.268292 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:21:07.268302 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Jun 20 19:21:07.268312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:21:07.268319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:21:07.268326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:21:07.268333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:21:07.268340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:21:07.268348 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:21:07.268355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:21:07.268362 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:21:07.268370 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:21:07.268377 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:21:07.268385 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:21:07.268392 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:21:07.268399 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:21:07.268407 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:21:07.268414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:21:07.268421 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:21:07.268428 kernel: fuse: init (API version 7.41)
Jun 20 19:21:07.268436 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:21:07.268443 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:21:07.268450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:21:07.268457 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:21:07.268464 systemd[1]: Stopped verity-setup.service.
Jun 20 19:21:07.268471 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:21:07.268478 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:21:07.268485 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:21:07.268493 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:21:07.268500 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:21:07.268509 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:21:07.268516 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:21:07.268523 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:21:07.268530 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:21:07.268537 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:21:07.268544 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:21:07.268552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:21:07.268560 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:21:07.268567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:21:07.268574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:21:07.268581 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:21:07.268588 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:21:07.268595 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:21:07.268602 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:21:07.268609 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:21:07.268618 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:21:07.268625 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:21:07.268635 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:21:07.268643 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:21:07.268651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:21:07.268658 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:21:07.268666 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:21:07.268675 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:21:07.268682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:21:07.268690 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:21:07.268697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:21:07.268719 systemd-journald[1231]: Collecting audit messages is disabled.
Jun 20 19:21:07.268737 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:21:07.268746 systemd-journald[1231]: Journal started
Jun 20 19:21:07.268762 systemd-journald[1231]: Runtime Journal (/run/log/journal/b1d37d5fb49048139fba794da2b5c0fc) is 4.8M, max 38.8M, 34M free.
Jun 20 19:21:07.052147 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:21:07.065867 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jun 20 19:21:07.066170 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:21:07.270815 jq[1204]: true
Jun 20 19:21:07.271334 jq[1242]: true
Jun 20 19:21:07.275577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:21:07.275608 kernel: ACPI: bus type drm_connector registered
Jun 20 19:21:07.276221 ignition[1251]: Ignition 2.21.0
Jun 20 19:21:07.276532 ignition[1251]: deleting config from guestinfo properties
Jun 20 19:21:07.279433 ignition[1251]: Successfully deleted config
Jun 20 19:21:07.281443 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:21:07.286081 kernel: loop: module loaded
Jun 20 19:21:07.290762 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:21:07.290799 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:21:07.290106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:21:07.291375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:21:07.291692 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:21:07.291831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:21:07.292112 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config).
Jun 20 19:21:07.292298 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:21:07.292454 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:21:07.305485 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:21:07.306249 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:21:07.309330 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:21:07.310443 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:21:07.311327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:21:07.319295 kernel: loop0: detected capacity change from 0 to 2960 Jun 20 19:21:07.339003 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:21:07.343625 systemd-journald[1231]: Time spent on flushing to /var/log/journal/b1d37d5fb49048139fba794da2b5c0fc is 27.831ms for 1771 entries. Jun 20 19:21:07.343625 systemd-journald[1231]: System Journal (/var/log/journal/b1d37d5fb49048139fba794da2b5c0fc) is 8M, max 584.8M, 576.8M free. Jun 20 19:21:07.378302 systemd-journald[1231]: Received client request to flush runtime journal. Jun 20 19:21:07.378335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:21:07.378347 kernel: loop1: detected capacity change from 0 to 146240 Jun 20 19:21:07.345364 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:21:07.377385 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:21:07.379376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:21:07.379681 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:21:07.420296 kernel: loop2: detected capacity change from 0 to 113872 Jun 20 19:21:07.423350 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. Jun 20 19:21:07.423362 systemd-tmpfiles[1300]: ACLs are not supported, ignoring. 
Jun 20 19:21:07.431838 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:21:07.450726 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:21:07.462299 kernel: loop3: detected capacity change from 0 to 221472 Jun 20 19:21:07.615310 kernel: loop4: detected capacity change from 0 to 2960 Jun 20 19:21:07.631302 kernel: loop5: detected capacity change from 0 to 146240 Jun 20 19:21:07.663747 kernel: loop6: detected capacity change from 0 to 113872 Jun 20 19:21:07.680296 kernel: loop7: detected capacity change from 0 to 221472 Jun 20 19:21:07.708787 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jun 20 19:21:07.709088 (sd-merge)[1309]: Merged extensions into '/usr'. Jun 20 19:21:07.712664 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:21:07.712673 systemd[1]: Reloading... Jun 20 19:21:07.757433 zram_generator::config[1334]: No configuration found. Jun 20 19:21:07.846909 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:21:07.856271 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:21:07.900985 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:21:07.901181 systemd[1]: Reloading finished in 188 ms. Jun 20 19:21:07.916653 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:21:07.924254 systemd[1]: Starting ensure-sysext.service... Jun 20 19:21:07.926342 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jun 20 19:21:07.938028 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:21:07.938050 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:21:07.938200 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:21:07.938368 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:21:07.938861 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:21:07.939029 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Jun 20 19:21:07.939067 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Jun 20 19:21:07.944641 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:21:07.944648 systemd-tmpfiles[1391]: Skipping /boot Jun 20 19:21:07.946038 systemd[1]: Reload requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:21:07.946051 systemd[1]: Reloading... Jun 20 19:21:07.954508 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:21:07.954515 systemd-tmpfiles[1391]: Skipping /boot Jun 20 19:21:08.001296 zram_generator::config[1419]: No configuration found. Jun 20 19:21:08.080209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:21:08.088232 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:21:08.133723 systemd[1]: Reloading finished in 187 ms. 
Jun 20 19:21:08.141811 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:21:08.144757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:21:08.152102 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:21:08.158352 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:21:08.160097 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:21:08.162341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:21:08.164355 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:21:08.166534 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:21:08.171724 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.173174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:21:08.174435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:21:08.176274 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:21:08.177288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:21:08.177360 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:21:08.177424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 19:21:08.178876 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.178966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:21:08.179020 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:21:08.179079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.181731 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.187473 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:21:08.187828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:21:08.187900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:21:08.187997 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.194660 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:21:08.199447 systemd[1]: Finished ensure-sysext.service. Jun 20 19:21:08.199746 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:21:08.204207 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jun 20 19:21:08.213467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:21:08.213698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:21:08.214015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:21:08.214128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:21:08.214332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:21:08.220861 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:21:08.223467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:21:08.223653 ldconfig[1254]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:21:08.223781 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:21:08.226311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:21:08.226626 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:21:08.226738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:21:08.232633 systemd-udevd[1481]: Using default interface naming scheme 'v255'. Jun 20 19:21:08.241752 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:21:08.243368 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:21:08.253047 augenrules[1517]: No rules Jun 20 19:21:08.255692 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:21:08.256713 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:21:08.257070 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:21:08.261544 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jun 20 19:21:08.274802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:21:08.277183 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:21:08.280962 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:21:08.281212 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:21:08.357397 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:21:08.357601 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:21:08.366175 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:21:08.369428 systemd-resolved[1480]: Positive Trust Anchors: Jun 20 19:21:08.369588 systemd-resolved[1480]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:21:08.369644 systemd-resolved[1480]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:21:08.377730 systemd-resolved[1480]: Defaulting to hostname 'linux'. Jun 20 19:21:08.378636 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:21:08.378790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jun 20 19:21:08.378954 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:21:08.379114 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:21:08.379265 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:21:08.379399 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:21:08.379585 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:21:08.379722 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:21:08.379835 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:21:08.379955 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:21:08.379973 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:21:08.380072 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:21:08.380763 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:21:08.381862 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:21:08.385413 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:21:08.385881 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:21:08.386004 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:21:08.388381 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:21:08.388691 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:21:08.389187 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:21:08.390722 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 20 19:21:08.390821 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:21:08.390942 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:21:08.390954 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:21:08.391689 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:21:08.394413 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:21:08.396402 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:21:08.398755 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:21:08.398869 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:21:08.406395 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:21:08.408385 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:21:08.410980 jq[1568]: false Jun 20 19:21:08.411387 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:21:08.414400 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:21:08.417153 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:21:08.421788 systemd-networkd[1533]: lo: Link UP Jun 20 19:21:08.421944 systemd-networkd[1533]: lo: Gained carrier Jun 20 19:21:08.423840 systemd-networkd[1533]: Enumeration completed Jun 20 19:21:08.424766 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:21:08.425363 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jun 20 19:21:08.428957 systemd-networkd[1533]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jun 20 19:21:08.429708 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:21:08.433040 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 20 19:21:08.433189 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 20 19:21:08.433479 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:21:08.436364 systemd-networkd[1533]: ens192: Link UP Jun 20 19:21:08.436780 systemd-networkd[1533]: ens192: Gained carrier Jun 20 19:21:08.440411 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:21:08.441914 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection. Jun 20 19:21:08.446888 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing passwd entry cache Jun 20 19:21:08.445395 oslogin_cache_refresh[1570]: Refreshing passwd entry cache Jun 20 19:21:08.447392 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jun 20 19:21:08.447911 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:21:08.449655 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:21:08.449910 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:21:08.450041 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:21:08.452748 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:21:08.452912 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jun 20 19:21:08.455564 update_engine[1577]: I20250620 19:21:08.455509 1577 main.cc:92] Flatcar Update Engine starting Jun 20 19:21:08.457602 systemd[1]: Reached target network.target - Network. Jun 20 19:21:08.460580 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting users, quitting Jun 20 19:21:08.460580 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:21:08.460571 oslogin_cache_refresh[1570]: Failure getting users, quitting Jun 20 19:21:08.460673 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing group entry cache Jun 20 19:21:08.460584 oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:21:08.460621 oslogin_cache_refresh[1570]: Refreshing group entry cache Jun 20 19:21:08.467369 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:21:08.470738 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting groups, quitting Jun 20 19:21:08.470738 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:21:08.470732 oslogin_cache_refresh[1570]: Failure getting groups, quitting Jun 20 19:21:08.470740 oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:21:08.473195 jq[1578]: true Jun 20 19:21:08.478521 extend-filesystems[1569]: Found /dev/sda6 Jun 20 19:21:08.478953 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:21:08.490387 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:21:08.490840 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:21:08.491701 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jun 20 19:21:08.496048 extend-filesystems[1569]: Found /dev/sda9 Jun 20 19:21:08.501395 extend-filesystems[1569]: Checking size of /dev/sda9 Jun 20 19:21:08.507361 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:21:08.507813 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:21:08.518455 jq[1603]: true Jun 20 19:21:08.521835 tar[1582]: linux-amd64/helm Jun 20 19:21:08.523529 dbus-daemon[1566]: [system] SELinux support is enabled Jun 20 19:21:08.524982 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:21:08.528735 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:21:08.528754 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:21:08.528911 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:21:08.528920 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:21:08.535106 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:21:08.536685 update_engine[1577]: I20250620 19:21:08.536590 1577 update_check_scheduler.cc:74] Next update check in 7m10s Jun 20 19:21:08.537414 extend-filesystems[1569]: Old size kept for /dev/sda9 Jun 20 19:21:08.541366 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:21:08.542232 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:21:08.542288 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:21:08.542427 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:21:08.545396 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. 
Jun 20 19:21:08.548579 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:21:08.549971 (ntainerd)[1619]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:21:08.552928 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jun 20 19:21:08.600668 bash[1639]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:21:08.606504 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:21:08.606942 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:21:08.617865 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jun 20 19:21:08.620918 unknown[1622]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jun 20 19:21:08.626032 unknown[1622]: Core dump limit set to -1 Jun 20 19:21:08.634568 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 19:21:08.637541 systemd-logind[1576]: New seat seat0. Jun 20 19:21:08.648682 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:21:08.673292 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:21:08.741293 sshd_keygen[1607]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:21:08.787214 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:21:08.791518 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:21:08.834319 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:21:08.834467 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:21:08.835822 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:21:08.878564 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jun 20 19:21:08.879932 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:21:08.881199 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:21:08.881905 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:21:08.895930 locksmithd[1620]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:21:08.905299 containerd[1619]: time="2025-06-20T19:21:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:21:08.905299 containerd[1619]: time="2025-06-20T19:21:08.904689940Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:21:08.919386 containerd[1619]: time="2025-06-20T19:21:08.919360396Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.226µs" Jun 20 19:21:08.919474 containerd[1619]: time="2025-06-20T19:21:08.919464378Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:21:08.919512 containerd[1619]: time="2025-06-20T19:21:08.919505002Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:21:08.919631 containerd[1619]: time="2025-06-20T19:21:08.919621726Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:21:08.919665 containerd[1619]: time="2025-06-20T19:21:08.919659082Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:21:08.919707 containerd[1619]: time="2025-06-20T19:21:08.919699848Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:21:08.919772 containerd[1619]: 
time="2025-06-20T19:21:08.919763152Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:21:08.919807 containerd[1619]: time="2025-06-20T19:21:08.919800522Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:21:08.920117 containerd[1619]: time="2025-06-20T19:21:08.920092770Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:21:08.920187 containerd[1619]: time="2025-06-20T19:21:08.920178491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:21:08.920239 containerd[1619]: time="2025-06-20T19:21:08.920231088Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:21:08.920349 containerd[1619]: time="2025-06-20T19:21:08.920340638Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:21:08.920430 containerd[1619]: time="2025-06-20T19:21:08.920421963Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:21:08.920575 containerd[1619]: time="2025-06-20T19:21:08.920566207Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:21:08.920619 containerd[1619]: time="2025-06-20T19:21:08.920610561Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 
19:21:08.920648 containerd[1619]: time="2025-06-20T19:21:08.920642320Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:21:08.920667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jun 20 19:21:08.920766 containerd[1619]: time="2025-06-20T19:21:08.920757319Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:21:08.920969 containerd[1619]: time="2025-06-20T19:21:08.920953990Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:21:08.921031 containerd[1619]: time="2025-06-20T19:21:08.921022989Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:21:08.924161 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933298459Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933358645Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933370167Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933378008Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933387298Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933393598Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933401963Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933410605Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933416648Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933422456Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933427255Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933435210Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933510704Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:21:08.934099 containerd[1619]: time="2025-06-20T19:21:08.933523272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933532294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933538690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933547828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933553619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933559794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933564953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933571268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933577011Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933583169Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933623918Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933632573Z" level=info msg="Start snapshots syncer"
Jun 20 19:21:08.934365 containerd[1619]: time="2025-06-20T19:21:08.933645159Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:21:08.934512 containerd[1619]: time="2025-06-20T19:21:08.933791594Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:21:08.934512 containerd[1619]: time="2025-06-20T19:21:08.933821652Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933868505Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933919476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933935845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933943281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933949655Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933956774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933962656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933968240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933986390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933993956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:21:08.934592 containerd[1619]: time="2025-06-20T19:21:08.933999898Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:21:08.935489 containerd[1619]: time="2025-06-20T19:21:08.935358661Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:21:08.935641 containerd[1619]: time="2025-06-20T19:21:08.935630105Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:21:08.935674 containerd[1619]: time="2025-06-20T19:21:08.935667642Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935775991Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935785604Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935792302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935798493Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935809647Z" level=info msg="runtime interface created"
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935812935Z" level=info msg="created NRI interface"
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935817868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935828128Z" level=info msg="Connect containerd service"
Jun 20 19:21:08.935872 containerd[1619]: time="2025-06-20T19:21:08.935853575Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:21:08.937750 containerd[1619]: time="2025-06-20T19:21:08.937571792Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:21:08.950402 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Jun 20 19:21:08.964120 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:21:09.052348 (udev-worker)[1543]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jun 20 19:21:09.087048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:21:09.097238 systemd-logind[1576]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jun 20 19:21:09.100494 systemd-logind[1576]: Watching system buttons on /dev/input/event2 (Power Button)
Jun 20 19:21:09.134469 containerd[1619]: time="2025-06-20T19:21:09.134442688Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 19:21:09.134537 containerd[1619]: time="2025-06-20T19:21:09.134483295Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 20 19:21:09.134537 containerd[1619]: time="2025-06-20T19:21:09.134503224Z" level=info msg="Start subscribing containerd event"
Jun 20 19:21:09.134537 containerd[1619]: time="2025-06-20T19:21:09.134519696Z" level=info msg="Start recovering state"
Jun 20 19:21:09.135299 containerd[1619]: time="2025-06-20T19:21:09.135223447Z" level=info msg="Start event monitor"
Jun 20 19:21:09.135299 containerd[1619]: time="2025-06-20T19:21:09.135247701Z" level=info msg="Start cni network conf syncer for default"
Jun 20 19:21:09.135299 containerd[1619]: time="2025-06-20T19:21:09.135253548Z" level=info msg="Start streaming server"
Jun 20 19:21:09.135299 containerd[1619]: time="2025-06-20T19:21:09.135258948Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 20 19:21:09.135299 containerd[1619]: time="2025-06-20T19:21:09.135263273Z" level=info msg="runtime interface starting up..."
Jun 20 19:21:09.135299 containerd[1619]: time="2025-06-20T19:21:09.135266389Z" level=info msg="starting plugins..."
Jun 20 19:21:09.135481 containerd[1619]: time="2025-06-20T19:21:09.135275310Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 20 19:21:09.135535 systemd[1]: Started containerd.service - containerd container runtime.
Jun 20 19:21:09.136920 containerd[1619]: time="2025-06-20T19:21:09.136425589Z" level=info msg="containerd successfully booted in 0.232392s"
Jun 20 19:21:09.183994 tar[1582]: linux-amd64/LICENSE
Jun 20 19:21:09.184119 tar[1582]: linux-amd64/README.md
Jun 20 19:21:09.196637 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 19:21:09.201554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:21:09.676441 systemd-networkd[1533]: ens192: Gained IPv6LL
Jun 20 19:21:09.677376 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
Jun 20 19:21:09.678133 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:21:09.678956 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:21:09.680364 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Jun 20 19:21:09.700449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:21:09.701565 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:21:09.729478 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:21:09.730132 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 20 19:21:09.730537 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Jun 20 19:21:09.731133 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:21:10.641637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:21:10.641965 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 19:21:10.642547 systemd[1]: Startup finished in 2.704s (kernel) + 5.247s (initrd) + 4.853s (userspace) = 12.805s.
Jun 20 19:21:10.660127 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:21:10.688391 login[1691]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:21:10.689411 login[1692]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:21:10.693872 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 19:21:10.695565 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 19:21:10.702797 systemd-logind[1576]: New session 1 of user core.
Jun 20 19:21:10.707235 systemd-logind[1576]: New session 2 of user core.
Jun 20 19:21:10.711490 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 20 19:21:10.713825 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 20 19:21:10.727074 (systemd)[1801]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 19:21:10.728500 systemd-logind[1576]: New session c1 of user core.
Jun 20 19:21:10.828389 systemd[1801]: Queued start job for default target default.target.
Jun 20 19:21:10.836036 systemd[1801]: Created slice app.slice - User Application Slice.
Jun 20 19:21:10.836053 systemd[1801]: Reached target paths.target - Paths.
Jun 20 19:21:10.836094 systemd[1801]: Reached target timers.target - Timers.
Jun 20 19:21:10.836797 systemd[1801]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 19:21:10.847358 systemd[1801]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 19:21:10.847454 systemd[1801]: Reached target sockets.target - Sockets.
Jun 20 19:21:10.847481 systemd[1801]: Reached target basic.target - Basic System.
Jun 20 19:21:10.847503 systemd[1801]: Reached target default.target - Main User Target.
Jun 20 19:21:10.847519 systemd[1801]: Startup finished in 115ms.
Jun 20 19:21:10.847528 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 19:21:10.852385 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 19:21:10.853004 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 19:21:11.210632 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
Jun 20 19:21:11.236996 kubelet[1794]: E0620 19:21:11.236957 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:21:11.238231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:21:11.238332 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:21:11.238605 systemd[1]: kubelet.service: Consumed 675ms CPU time, 264M memory peak.
Jun 20 19:21:21.390866 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:21:21.392244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:21:21.785441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:21:21.794567 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:21:21.848443 kubelet[1846]: E0620 19:21:21.848407 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:21:21.850853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:21:21.851003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:21:21.851428 systemd[1]: kubelet.service: Consumed 109ms CPU time, 108.9M memory peak.
Jun 20 19:21:31.890154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:21:31.891702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:21:32.246934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:21:32.252486 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:21:32.310319 kubelet[1861]: E0620 19:21:32.310274 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:21:32.311797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:21:32.311936 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:21:32.312301 systemd[1]: kubelet.service: Consumed 103ms CPU time, 108.4M memory peak.
Jun 20 19:21:38.786207 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:21:38.788058 systemd[1]: Started sshd@0-139.178.70.108:22-147.75.109.163:60424.service - OpenSSH per-connection server daemon (147.75.109.163:60424).
Jun 20 19:21:38.838223 sshd[1869]: Accepted publickey for core from 147.75.109.163 port 60424 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:21:38.839036 sshd-session[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:21:38.841755 systemd-logind[1576]: New session 3 of user core.
Jun 20 19:21:38.851459 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:21:38.903403 systemd[1]: Started sshd@1-139.178.70.108:22-147.75.109.163:60432.service - OpenSSH per-connection server daemon (147.75.109.163:60432).
Jun 20 19:21:38.943065 sshd[1874]: Accepted publickey for core from 147.75.109.163 port 60432 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:21:38.943545 sshd-session[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:21:38.946706 systemd-logind[1576]: New session 4 of user core.
Jun 20 19:21:38.952356 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:21:38.998804 sshd[1876]: Connection closed by 147.75.109.163 port 60432
Jun 20 19:21:38.999455 sshd-session[1874]: pam_unix(sshd:session): session closed for user core
Jun 20 19:21:39.007212 systemd[1]: sshd@1-139.178.70.108:22-147.75.109.163:60432.service: Deactivated successfully.
Jun 20 19:21:39.008057 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:21:39.008486 systemd-logind[1576]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:21:39.010001 systemd[1]: Started sshd@2-139.178.70.108:22-147.75.109.163:60440.service - OpenSSH per-connection server daemon (147.75.109.163:60440).
Jun 20 19:21:39.010600 systemd-logind[1576]: Removed session 4.
Jun 20 19:21:39.044927 sshd[1882]: Accepted publickey for core from 147.75.109.163 port 60440 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:21:39.045448 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:21:39.047872 systemd-logind[1576]: New session 5 of user core.
Jun 20 19:21:39.057366 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:21:39.101940 sshd[1884]: Connection closed by 147.75.109.163 port 60440
Jun 20 19:21:39.102223 sshd-session[1882]: pam_unix(sshd:session): session closed for user core
Jun 20 19:21:39.120299 systemd[1]: sshd@2-139.178.70.108:22-147.75.109.163:60440.service: Deactivated successfully.
Jun 20 19:21:39.121018 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:21:39.121666 systemd-logind[1576]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:21:39.122488 systemd[1]: Started sshd@3-139.178.70.108:22-147.75.109.163:60452.service - OpenSSH per-connection server daemon (147.75.109.163:60452).
Jun 20 19:21:39.123507 systemd-logind[1576]: Removed session 5.
Jun 20 19:21:39.158516 sshd[1890]: Accepted publickey for core from 147.75.109.163 port 60452 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:21:39.159168 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:21:39.162155 systemd-logind[1576]: New session 6 of user core.
Jun 20 19:21:39.168352 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:21:39.215390 sshd[1892]: Connection closed by 147.75.109.163 port 60452
Jun 20 19:21:39.215172 sshd-session[1890]: pam_unix(sshd:session): session closed for user core
Jun 20 19:21:39.225257 systemd[1]: sshd@3-139.178.70.108:22-147.75.109.163:60452.service: Deactivated successfully.
Jun 20 19:21:39.225973 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:21:39.226380 systemd-logind[1576]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:21:39.227374 systemd[1]: Started sshd@4-139.178.70.108:22-147.75.109.163:60454.service - OpenSSH per-connection server daemon (147.75.109.163:60454).
Jun 20 19:21:39.228501 systemd-logind[1576]: Removed session 6.
Jun 20 19:21:39.266237 sshd[1898]: Accepted publickey for core from 147.75.109.163 port 60454 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:21:39.266945 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:21:39.269544 systemd-logind[1576]: New session 7 of user core.
Jun 20 19:21:39.279384 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:21:39.334729 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:21:39.334901 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:21:39.343601 sudo[1901]: pam_unix(sudo:session): session closed for user root
Jun 20 19:21:39.344356 sshd[1900]: Connection closed by 147.75.109.163 port 60454
Jun 20 19:21:39.344630 sshd-session[1898]: pam_unix(sshd:session): session closed for user core
Jun 20 19:21:39.350628 systemd[1]: sshd@4-139.178.70.108:22-147.75.109.163:60454.service: Deactivated successfully.
Jun 20 19:21:39.351501 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:21:39.351990 systemd-logind[1576]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:21:39.353871 systemd[1]: Started sshd@5-139.178.70.108:22-147.75.109.163:60466.service - OpenSSH per-connection server daemon (147.75.109.163:60466).
Jun 20 19:21:39.354611 systemd-logind[1576]: Removed session 7.
Jun 20 19:21:39.394577 sshd[1907]: Accepted publickey for core from 147.75.109.163 port 60466 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:21:39.395364 sshd-session[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:21:39.398103 systemd-logind[1576]: New session 8 of user core.
Jun 20 19:21:39.405366 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:21:39.454310 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:21:39.454468 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:21:39.456813 sudo[1911]: pam_unix(sudo:session): session closed for user root
Jun 20 19:21:39.459833 sudo[1910]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:21:39.459983 sudo[1910]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:21:39.465680 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:21:39.496335 augenrules[1933]: No rules
Jun 20 19:21:39.497059 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:21:39.497223 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:21:39.497897 sudo[1910]: pam_unix(sudo:session): session closed for user root
Jun 20 19:21:39.498678 sshd[1909]: Connection closed by 147.75.109.163 port 60466
Jun 20 19:21:39.498931 sshd-session[1907]: pam_unix(sshd:session): session closed for user core
Jun 20 19:21:39.504706 systemd[1]: sshd@5-139.178.70.108:22-147.75.109.163:60466.service: Deactivated successfully.
Jun 20 19:21:39.505732 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:21:39.506626 systemd-logind[1576]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:21:39.507481 systemd-logind[1576]: Removed session 8.
Jun 20 19:21:39.508341 systemd[1]: Started sshd@6-139.178.70.108:22-147.75.109.163:60468.service - OpenSSH per-connection server daemon (147.75.109.163:60468).
Jun 20 19:21:39.545377 sshd[1942]: Accepted publickey for core from 147.75.109.163 port 60468 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:21:39.546367 sshd-session[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:21:39.549684 systemd-logind[1576]: New session 9 of user core.
Jun 20 19:21:39.561428 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:21:39.610040 sudo[1945]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:21:39.610231 sudo[1945]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:21:39.935313 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:21:39.950578 (dockerd)[1963]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:21:40.339711 dockerd[1963]: time="2025-06-20T19:21:40.339620921Z" level=info msg="Starting up"
Jun 20 19:21:40.340229 dockerd[1963]: time="2025-06-20T19:21:40.340211616Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:21:40.381705 systemd[1]: var-lib-docker-metacopy\x2dcheck2552894229-merged.mount: Deactivated successfully.
Jun 20 19:21:40.392624 dockerd[1963]: time="2025-06-20T19:21:40.392588706Z" level=info msg="Loading containers: start."
Jun 20 19:21:40.403417 kernel: Initializing XFRM netlink socket
Jun 20 19:21:40.567750 systemd-timesyncd[1497]: Network configuration changed, trying to establish connection.
Jun 20 19:21:40.595238 systemd-networkd[1533]: docker0: Link UP
Jun 20 19:21:40.597616 dockerd[1963]: time="2025-06-20T19:21:40.597592924Z" level=info msg="Loading containers: done."
Jun 20 19:21:40.608220 dockerd[1963]: time="2025-06-20T19:21:40.608190322Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:21:40.608339 dockerd[1963]: time="2025-06-20T19:21:40.608242332Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:21:40.608339 dockerd[1963]: time="2025-06-20T19:21:40.608315758Z" level=info msg="Initializing buildkit"
Jun 20 19:23:05.068472 systemd-resolved[1480]: Clock change detected. Flushing caches.
Jun 20 19:23:05.068599 systemd-timesyncd[1497]: Contacted time server 138.68.201.49:123 (2.flatcar.pool.ntp.org).
Jun 20 19:23:05.068938 systemd-timesyncd[1497]: Initial clock synchronization to Fri 2025-06-20 19:23:05.068246 UTC.
Jun 20 19:23:05.074027 dockerd[1963]: time="2025-06-20T19:23:05.074000984Z" level=info msg="Completed buildkit initialization"
Jun 20 19:23:05.078456 dockerd[1963]: time="2025-06-20T19:23:05.078434719Z" level=info msg="Daemon has completed initialization"
Jun 20 19:23:05.078865 dockerd[1963]: time="2025-06-20T19:23:05.078515828Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:23:05.078606 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:23:05.960762 containerd[1619]: time="2025-06-20T19:23:05.960727151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jun 20 19:23:06.472337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684240262.mount: Deactivated successfully.
Jun 20 19:23:06.840733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 19:23:06.843969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:23:07.207725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:23:07.208349 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:23:07.244983 kubelet[2226]: E0620 19:23:07.244941 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:23:07.247470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:23:07.247552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:23:07.247767 systemd[1]: kubelet.service: Consumed 96ms CPU time, 110.4M memory peak.
Jun 20 19:23:07.615482 containerd[1619]: time="2025-06-20T19:23:07.615447234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:07.616106 containerd[1619]: time="2025-06-20T19:23:07.616086262Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jun 20 19:23:07.616681 containerd[1619]: time="2025-06-20T19:23:07.616255752Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:07.617771 containerd[1619]: time="2025-06-20T19:23:07.617739889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:07.618485 containerd[1619]: time="2025-06-20T19:23:07.618402906Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.657653459s"
Jun 20 19:23:07.618485 containerd[1619]: time="2025-06-20T19:23:07.618421190Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jun 20 19:23:07.619238 containerd[1619]: time="2025-06-20T19:23:07.619229007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jun 20 19:23:09.070255 containerd[1619]: time="2025-06-20T19:23:09.069735014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:09.076995 containerd[1619]: time="2025-06-20T19:23:09.076973685Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jun 20 19:23:09.092084 containerd[1619]: time="2025-06-20T19:23:09.092057393Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:09.097496 containerd[1619]: time="2025-06-20T19:23:09.097469462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:09.098050 containerd[1619]: time="2025-06-20T19:23:09.098035067Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.478752089s"
Jun 20 19:23:09.098100 containerd[1619]: time="2025-06-20T19:23:09.098092188Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jun 20 19:23:09.098423 containerd[1619]: time="2025-06-20T19:23:09.098402151Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jun 20 19:23:10.654340 containerd[1619]: time="2025-06-20T19:23:10.654298899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:10.661039 containerd[1619]: time="2025-06-20T19:23:10.661005175Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jun 20 19:23:10.666216 containerd[1619]: time="2025-06-20T19:23:10.666186092Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:10.671079 containerd[1619]: time="2025-06-20T19:23:10.671040875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:10.672040 containerd[1619]: time="2025-06-20T19:23:10.671700395Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.573275997s"
Jun 20 19:23:10.672040
containerd[1619]: time="2025-06-20T19:23:10.671722228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jun 20 19:23:10.672121 containerd[1619]: time="2025-06-20T19:23:10.672113423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jun 20 19:23:11.995954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3595958750.mount: Deactivated successfully. Jun 20 19:23:12.446951 containerd[1619]: time="2025-06-20T19:23:12.446902249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:12.454129 containerd[1619]: time="2025-06-20T19:23:12.454109278Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jun 20 19:23:12.461602 containerd[1619]: time="2025-06-20T19:23:12.461569795Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:12.470674 containerd[1619]: time="2025-06-20T19:23:12.470624879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:12.471045 containerd[1619]: time="2025-06-20T19:23:12.470825743Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.798695563s" Jun 20 19:23:12.471045 containerd[1619]: time="2025-06-20T19:23:12.470848225Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 20 19:23:12.471257 containerd[1619]: time="2025-06-20T19:23:12.471189060Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:23:13.199288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267914679.mount: Deactivated successfully. Jun 20 19:23:14.469249 containerd[1619]: time="2025-06-20T19:23:14.469184375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:14.474677 containerd[1619]: time="2025-06-20T19:23:14.474645820Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 20 19:23:14.487424 containerd[1619]: time="2025-06-20T19:23:14.487369291Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:14.497782 containerd[1619]: time="2025-06-20T19:23:14.497737969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:14.498223 containerd[1619]: time="2025-06-20T19:23:14.498115327Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.026854847s" Jun 20 19:23:14.498223 containerd[1619]: time="2025-06-20T19:23:14.498139621Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image 
reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:23:14.498563 containerd[1619]: time="2025-06-20T19:23:14.498551712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:23:15.060197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405836901.mount: Deactivated successfully. Jun 20 19:23:15.123402 containerd[1619]: time="2025-06-20T19:23:15.123295660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:23:15.135541 containerd[1619]: time="2025-06-20T19:23:15.135505366Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:23:15.144582 containerd[1619]: time="2025-06-20T19:23:15.144516233Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:23:15.154564 containerd[1619]: time="2025-06-20T19:23:15.154531083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:23:15.155065 containerd[1619]: time="2025-06-20T19:23:15.155045770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 656.438455ms" Jun 20 19:23:15.155129 containerd[1619]: time="2025-06-20T19:23:15.155119885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" 
returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:23:15.155484 containerd[1619]: time="2025-06-20T19:23:15.155464271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 20 19:23:15.829813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964900814.mount: Deactivated successfully. Jun 20 19:23:17.340631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 19:23:17.342316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:17.502159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:17.504732 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:23:17.770684 kubelet[2363]: E0620 19:23:17.769624 2363 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:23:17.771180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:23:17.771268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:23:17.771602 systemd[1]: kubelet.service: Consumed 114ms CPU time, 110.2M memory peak. Jun 20 19:23:18.301191 update_engine[1577]: I20250620 19:23:18.301120 1577 update_attempter.cc:509] Updating boot flags... 
Jun 20 19:23:21.954332 containerd[1619]: time="2025-06-20T19:23:21.954134136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:22.024799 containerd[1619]: time="2025-06-20T19:23:22.024760400Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jun 20 19:23:22.063934 containerd[1619]: time="2025-06-20T19:23:22.063886113Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:22.343722 containerd[1619]: time="2025-06-20T19:23:22.343678116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:22.344429 containerd[1619]: time="2025-06-20T19:23:22.344194776Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.188705124s" Jun 20 19:23:22.344429 containerd[1619]: time="2025-06-20T19:23:22.344219937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 20 19:23:24.168755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:24.169377 systemd[1]: kubelet.service: Consumed 114ms CPU time, 110.2M memory peak. Jun 20 19:23:24.177362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:24.189745 systemd[1]: Reload requested from client PID 2418 ('systemctl') (unit session-9.scope)... 
Jun 20 19:23:24.189760 systemd[1]: Reloading... Jun 20 19:23:24.266673 zram_generator::config[2461]: No configuration found. Jun 20 19:23:24.326723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:23:24.334735 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:23:24.402315 systemd[1]: Reloading finished in 212 ms. Jun 20 19:23:24.430848 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:23:24.430902 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:23:24.431061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:24.433168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:25.540625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:25.554928 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:23:25.601847 kubelet[2529]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:23:25.601847 kubelet[2529]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:23:25.601847 kubelet[2529]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:23:25.601847 kubelet[2529]: I0620 19:23:25.601812 2529 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:23:25.943565 kubelet[2529]: I0620 19:23:25.943535 2529 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:23:25.943565 kubelet[2529]: I0620 19:23:25.943563 2529 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:23:25.943788 kubelet[2529]: I0620 19:23:25.943770 2529 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:23:26.297753 kubelet[2529]: I0620 19:23:26.297281 2529 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:23:26.311062 kubelet[2529]: E0620 19:23:26.311034 2529 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:26.476945 kubelet[2529]: I0620 19:23:26.476836 2529 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:23:26.497876 kubelet[2529]: I0620 19:23:26.497863 2529 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:23:26.526487 kubelet[2529]: I0620 19:23:26.526430 2529 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:23:26.526559 kubelet[2529]: I0620 19:23:26.526531 2529 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:23:26.526686 kubelet[2529]: I0620 19:23:26.526558 2529 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jun 20 19:23:26.526785 kubelet[2529]: I0620 19:23:26.526690 2529 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:23:26.526785 kubelet[2529]: I0620 19:23:26.526698 2529 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:23:26.541166 kubelet[2529]: I0620 19:23:26.541130 2529 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:26.620930 kubelet[2529]: I0620 19:23:26.620867 2529 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:23:26.620930 kubelet[2529]: I0620 19:23:26.620896 2529 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:23:26.620930 kubelet[2529]: I0620 19:23:26.620922 2529 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:23:26.621224 kubelet[2529]: I0620 19:23:26.620939 2529 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:23:26.681048 kubelet[2529]: W0620 19:23:26.680938 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:26.681048 kubelet[2529]: E0620 19:23:26.681013 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:26.681048 kubelet[2529]: W0620 19:23:26.680999 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 
19:23:26.681203 kubelet[2529]: I0620 19:23:26.681091 2529 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:23:26.681313 kubelet[2529]: E0620 19:23:26.681251 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:26.695216 kubelet[2529]: I0620 19:23:26.695179 2529 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:23:26.695294 kubelet[2529]: W0620 19:23:26.695238 2529 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:23:26.695925 kubelet[2529]: I0620 19:23:26.695740 2529 server.go:1274] "Started kubelet" Jun 20 19:23:26.695925 kubelet[2529]: I0620 19:23:26.695824 2529 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:23:26.699439 kubelet[2529]: I0620 19:23:26.699426 2529 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:23:26.705613 kubelet[2529]: I0620 19:23:26.705596 2529 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:23:26.707196 kubelet[2529]: I0620 19:23:26.707163 2529 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:23:26.707332 kubelet[2529]: I0620 19:23:26.707315 2529 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:23:26.716620 kubelet[2529]: I0620 19:23:26.716499 2529 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 
19:23:26.722665 kubelet[2529]: E0620 19:23:26.712499 2529 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.108:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad6a4621a2ca2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:23:26.695722146 +0000 UTC m=+1.138346481,LastTimestamp:2025-06-20 19:23:26.695722146 +0000 UTC m=+1.138346481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:23:26.727814 kubelet[2529]: I0620 19:23:26.727799 2529 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:23:26.728106 kubelet[2529]: E0620 19:23:26.728095 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:26.728821 kubelet[2529]: I0620 19:23:26.728812 2529 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:23:26.728906 kubelet[2529]: I0620 19:23:26.728899 2529 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:23:26.729924 kubelet[2529]: I0620 19:23:26.729914 2529 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:23:26.730011 kubelet[2529]: I0620 19:23:26.730000 2529 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:23:26.730245 kubelet[2529]: W0620 19:23:26.730223 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:26.730321 kubelet[2529]: E0620 19:23:26.730308 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:26.730636 kubelet[2529]: E0620 19:23:26.730608 2529 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="200ms" Jun 20 19:23:26.735900 kubelet[2529]: E0620 19:23:26.735796 2529 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:23:26.736169 kubelet[2529]: I0620 19:23:26.736154 2529 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:23:26.759693 kubelet[2529]: I0620 19:23:26.759627 2529 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:23:26.760846 kubelet[2529]: I0620 19:23:26.760824 2529 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:23:26.760846 kubelet[2529]: I0620 19:23:26.760849 2529 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:23:26.760925 kubelet[2529]: I0620 19:23:26.760863 2529 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:23:26.760925 kubelet[2529]: E0620 19:23:26.760888 2529 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:23:26.764640 kubelet[2529]: I0620 19:23:26.764613 2529 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:23:26.764640 kubelet[2529]: I0620 19:23:26.764629 2529 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:23:26.764754 kubelet[2529]: I0620 19:23:26.764646 2529 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:26.783060 kubelet[2529]: W0620 19:23:26.783039 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:26.783154 kubelet[2529]: E0620 19:23:26.783065 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:26.820787 kubelet[2529]: I0620 19:23:26.820759 2529 policy_none.go:49] "None policy: Start" Jun 20 19:23:26.821545 kubelet[2529]: I0620 19:23:26.821529 2529 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:23:26.821608 kubelet[2529]: I0620 19:23:26.821548 2529 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:23:26.828810 kubelet[2529]: E0620 19:23:26.828789 2529 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:26.861584 kubelet[2529]: E0620 19:23:26.861550 2529 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:23:26.868287 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:23:26.882303 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:23:26.886272 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:23:26.897403 kubelet[2529]: I0620 19:23:26.897379 2529 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:23:26.897617 kubelet[2529]: I0620 19:23:26.897607 2529 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:23:26.897713 kubelet[2529]: I0620 19:23:26.897687 2529 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:23:26.898056 kubelet[2529]: I0620 19:23:26.898046 2529 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:23:26.899016 kubelet[2529]: E0620 19:23:26.898996 2529 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 20 19:23:26.931134 kubelet[2529]: E0620 19:23:26.931104 2529 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="400ms" Jun 20 19:23:26.999479 kubelet[2529]: I0620 19:23:26.999437 2529 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:23:26.999834 kubelet[2529]: E0620 19:23:26.999809 2529 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jun 20 19:23:27.068756 systemd[1]: Created slice kubepods-burstable-pod66ef5492c281852280d687861a23a554.slice - libcontainer container kubepods-burstable-pod66ef5492c281852280d687861a23a554.slice. Jun 20 19:23:27.080753 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jun 20 19:23:27.084357 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jun 20 19:23:27.201049 kubelet[2529]: I0620 19:23:27.200984 2529 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:23:27.201359 kubelet[2529]: E0620 19:23:27.201307 2529 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jun 20 19:23:27.232606 kubelet[2529]: I0620 19:23:27.232588 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:27.232606 kubelet[2529]: I0620 19:23:27.232606 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:27.232682 kubelet[2529]: I0620 19:23:27.232617 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:27.232682 kubelet[2529]: I0620 19:23:27.232625 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66ef5492c281852280d687861a23a554-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66ef5492c281852280d687861a23a554\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:27.232682 kubelet[2529]: I0620 19:23:27.232635 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66ef5492c281852280d687861a23a554-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66ef5492c281852280d687861a23a554\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:27.232682 kubelet[2529]: I0620 19:23:27.232645 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:27.232682 kubelet[2529]: I0620 19:23:27.232665 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:27.232765 kubelet[2529]: I0620 19:23:27.232677 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66ef5492c281852280d687861a23a554-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"66ef5492c281852280d687861a23a554\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:27.232765 kubelet[2529]: I0620 19:23:27.232686 2529 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:27.332332 kubelet[2529]: E0620 19:23:27.332288 2529 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="800ms" Jun 20 19:23:27.379233 containerd[1619]: time="2025-06-20T19:23:27.379196590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66ef5492c281852280d687861a23a554,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:27.389186 containerd[1619]: time="2025-06-20T19:23:27.389156844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:27.389245 containerd[1619]: time="2025-06-20T19:23:27.389157061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:27.603840 kubelet[2529]: I0620 19:23:27.603816 2529 
kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:23:27.604095 kubelet[2529]: E0620 19:23:27.604080 2529 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jun 20 19:23:27.604239 containerd[1619]: time="2025-06-20T19:23:27.604148180Z" level=info msg="connecting to shim 87f98de076e38c34a5aaffc7472446445a5f1b08356cc70dc3e641ec017839e2" address="unix:///run/containerd/s/6ce267d97200365262c8401f971cef9f74687704ff0140cde02d68235774020a" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:27.604438 containerd[1619]: time="2025-06-20T19:23:27.604417316Z" level=info msg="connecting to shim 437e77d8c3573ce32ac14e645a3789f00f982084191ece0a7881ac2ae10e07f8" address="unix:///run/containerd/s/9e27e04b07bc80c359088f6a2d0e15920d418140b41c3edc7243ba8aa30a8a72" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:27.609580 containerd[1619]: time="2025-06-20T19:23:27.609561205Z" level=info msg="connecting to shim a19aa8e776846071c2ddd33ce580a804e963f038acf02a3fca24ed2312e90963" address="unix:///run/containerd/s/8a6043d556f159e7e688a9418aa0bbb8c02558b8803726c5e2f88c6474880395" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:27.630600 kubelet[2529]: W0620 19:23:27.630557 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:27.630865 kubelet[2529]: E0620 19:23:27.630691 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection 
refused" logger="UnhandledError" Jun 20 19:23:27.775803 systemd[1]: Started cri-containerd-a19aa8e776846071c2ddd33ce580a804e963f038acf02a3fca24ed2312e90963.scope - libcontainer container a19aa8e776846071c2ddd33ce580a804e963f038acf02a3fca24ed2312e90963. Jun 20 19:23:27.779604 systemd[1]: Started cri-containerd-437e77d8c3573ce32ac14e645a3789f00f982084191ece0a7881ac2ae10e07f8.scope - libcontainer container 437e77d8c3573ce32ac14e645a3789f00f982084191ece0a7881ac2ae10e07f8. Jun 20 19:23:27.780616 systemd[1]: Started cri-containerd-87f98de076e38c34a5aaffc7472446445a5f1b08356cc70dc3e641ec017839e2.scope - libcontainer container 87f98de076e38c34a5aaffc7472446445a5f1b08356cc70dc3e641ec017839e2. Jun 20 19:23:27.808673 kubelet[2529]: W0620 19:23:27.808460 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:27.808809 kubelet[2529]: E0620 19:23:27.808798 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:27.847531 containerd[1619]: time="2025-06-20T19:23:27.847503745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"437e77d8c3573ce32ac14e645a3789f00f982084191ece0a7881ac2ae10e07f8\"" Jun 20 19:23:27.847621 containerd[1619]: time="2025-06-20T19:23:27.847581084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:66ef5492c281852280d687861a23a554,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"a19aa8e776846071c2ddd33ce580a804e963f038acf02a3fca24ed2312e90963\"" Jun 20 19:23:27.850068 containerd[1619]: time="2025-06-20T19:23:27.849761110Z" level=info msg="CreateContainer within sandbox \"a19aa8e776846071c2ddd33ce580a804e963f038acf02a3fca24ed2312e90963\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:23:27.861767 containerd[1619]: time="2025-06-20T19:23:27.861708873Z" level=info msg="CreateContainer within sandbox \"437e77d8c3573ce32ac14e645a3789f00f982084191ece0a7881ac2ae10e07f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:23:27.862988 containerd[1619]: time="2025-06-20T19:23:27.862955317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f98de076e38c34a5aaffc7472446445a5f1b08356cc70dc3e641ec017839e2\"" Jun 20 19:23:27.864284 containerd[1619]: time="2025-06-20T19:23:27.864168145Z" level=info msg="CreateContainer within sandbox \"87f98de076e38c34a5aaffc7472446445a5f1b08356cc70dc3e641ec017839e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:23:27.911864 containerd[1619]: time="2025-06-20T19:23:27.911838054Z" level=info msg="Container 90ae09fb1b52d479f992a71c4c51510b2be107b8e2c7604f05841cf21434b5ac: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:27.912523 containerd[1619]: time="2025-06-20T19:23:27.912512293Z" level=info msg="Container f4470626e2bb105573335bdc1bfd8eb7e428a9db64eaa0c9660ac66fb9b99a3d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:27.922314 containerd[1619]: time="2025-06-20T19:23:27.922285009Z" level=info msg="Container 155c423743cc5d65ac14257879aded54e1b0f537373893b90405e34f5d4c6f06: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:27.934163 containerd[1619]: time="2025-06-20T19:23:27.934139904Z" level=info msg="CreateContainer within sandbox 
\"a19aa8e776846071c2ddd33ce580a804e963f038acf02a3fca24ed2312e90963\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"90ae09fb1b52d479f992a71c4c51510b2be107b8e2c7604f05841cf21434b5ac\"" Jun 20 19:23:27.934666 containerd[1619]: time="2025-06-20T19:23:27.934631227Z" level=info msg="StartContainer for \"90ae09fb1b52d479f992a71c4c51510b2be107b8e2c7604f05841cf21434b5ac\"" Jun 20 19:23:27.935644 containerd[1619]: time="2025-06-20T19:23:27.935627083Z" level=info msg="connecting to shim 90ae09fb1b52d479f992a71c4c51510b2be107b8e2c7604f05841cf21434b5ac" address="unix:///run/containerd/s/8a6043d556f159e7e688a9418aa0bbb8c02558b8803726c5e2f88c6474880395" protocol=ttrpc version=3 Jun 20 19:23:27.948744 systemd[1]: Started cri-containerd-90ae09fb1b52d479f992a71c4c51510b2be107b8e2c7604f05841cf21434b5ac.scope - libcontainer container 90ae09fb1b52d479f992a71c4c51510b2be107b8e2c7604f05841cf21434b5ac. Jun 20 19:23:27.964503 containerd[1619]: time="2025-06-20T19:23:27.964477506Z" level=info msg="CreateContainer within sandbox \"437e77d8c3573ce32ac14e645a3789f00f982084191ece0a7881ac2ae10e07f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4470626e2bb105573335bdc1bfd8eb7e428a9db64eaa0c9660ac66fb9b99a3d\"" Jun 20 19:23:27.965117 containerd[1619]: time="2025-06-20T19:23:27.965102455Z" level=info msg="StartContainer for \"f4470626e2bb105573335bdc1bfd8eb7e428a9db64eaa0c9660ac66fb9b99a3d\"" Jun 20 19:23:27.965852 containerd[1619]: time="2025-06-20T19:23:27.965837222Z" level=info msg="connecting to shim f4470626e2bb105573335bdc1bfd8eb7e428a9db64eaa0c9660ac66fb9b99a3d" address="unix:///run/containerd/s/9e27e04b07bc80c359088f6a2d0e15920d418140b41c3edc7243ba8aa30a8a72" protocol=ttrpc version=3 Jun 20 19:23:27.980785 systemd[1]: Started cri-containerd-f4470626e2bb105573335bdc1bfd8eb7e428a9db64eaa0c9660ac66fb9b99a3d.scope - libcontainer container f4470626e2bb105573335bdc1bfd8eb7e428a9db64eaa0c9660ac66fb9b99a3d. 
Jun 20 19:23:27.981024 containerd[1619]: time="2025-06-20T19:23:27.980971087Z" level=info msg="CreateContainer within sandbox \"87f98de076e38c34a5aaffc7472446445a5f1b08356cc70dc3e641ec017839e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"155c423743cc5d65ac14257879aded54e1b0f537373893b90405e34f5d4c6f06\"" Jun 20 19:23:27.993665 containerd[1619]: time="2025-06-20T19:23:27.993632711Z" level=info msg="StartContainer for \"155c423743cc5d65ac14257879aded54e1b0f537373893b90405e34f5d4c6f06\"" Jun 20 19:23:27.994273 containerd[1619]: time="2025-06-20T19:23:27.994258430Z" level=info msg="connecting to shim 155c423743cc5d65ac14257879aded54e1b0f537373893b90405e34f5d4c6f06" address="unix:///run/containerd/s/6ce267d97200365262c8401f971cef9f74687704ff0140cde02d68235774020a" protocol=ttrpc version=3 Jun 20 19:23:28.009838 systemd[1]: Started cri-containerd-155c423743cc5d65ac14257879aded54e1b0f537373893b90405e34f5d4c6f06.scope - libcontainer container 155c423743cc5d65ac14257879aded54e1b0f537373893b90405e34f5d4c6f06. 
Jun 20 19:23:28.014065 containerd[1619]: time="2025-06-20T19:23:28.013837585Z" level=info msg="StartContainer for \"90ae09fb1b52d479f992a71c4c51510b2be107b8e2c7604f05841cf21434b5ac\" returns successfully" Jun 20 19:23:28.032190 containerd[1619]: time="2025-06-20T19:23:28.032154544Z" level=info msg="StartContainer for \"f4470626e2bb105573335bdc1bfd8eb7e428a9db64eaa0c9660ac66fb9b99a3d\" returns successfully" Jun 20 19:23:28.066523 containerd[1619]: time="2025-06-20T19:23:28.066496662Z" level=info msg="StartContainer for \"155c423743cc5d65ac14257879aded54e1b0f537373893b90405e34f5d4c6f06\" returns successfully" Jun 20 19:23:28.069245 kubelet[2529]: W0620 19:23:28.069197 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:28.069305 kubelet[2529]: E0620 19:23:28.069252 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:28.133711 kubelet[2529]: E0620 19:23:28.132917 2529 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.108:6443: connect: connection refused" interval="1.6s" Jun 20 19:23:28.195584 kubelet[2529]: W0620 19:23:28.195542 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:28.195584 
kubelet[2529]: E0620 19:23:28.195587 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:28.405279 kubelet[2529]: I0620 19:23:28.405220 2529 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:23:28.405585 kubelet[2529]: E0620 19:23:28.405571 2529 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.108:6443/api/v1/nodes\": dial tcp 139.178.70.108:6443: connect: connection refused" node="localhost" Jun 20 19:23:28.429137 kubelet[2529]: E0620 19:23:28.429116 2529 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:29.318326 kubelet[2529]: W0620 19:23:29.318254 2529 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.108:6443: connect: connection refused Jun 20 19:23:29.318326 kubelet[2529]: E0620 19:23:29.318304 2529 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.108:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:30.009104 kubelet[2529]: I0620 
19:23:30.009082 2529 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:23:30.234077 kubelet[2529]: E0620 19:23:30.234044 2529 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 20 19:23:30.334623 kubelet[2529]: I0620 19:23:30.334417 2529 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 20 19:23:30.334623 kubelet[2529]: E0620 19:23:30.334449 2529 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 20 19:23:30.341690 kubelet[2529]: E0620 19:23:30.341638 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:30.441926 kubelet[2529]: E0620 19:23:30.441890 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:30.542810 kubelet[2529]: E0620 19:23:30.542780 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:30.643258 kubelet[2529]: E0620 19:23:30.643226 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:30.744098 kubelet[2529]: E0620 19:23:30.744071 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:30.844567 kubelet[2529]: E0620 19:23:30.844534 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:30.945204 kubelet[2529]: E0620 19:23:30.945132 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.045627 kubelet[2529]: E0620 19:23:31.045597 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jun 20 19:23:31.146318 kubelet[2529]: E0620 19:23:31.146290 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.247168 kubelet[2529]: E0620 19:23:31.247036 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.347981 kubelet[2529]: E0620 19:23:31.347948 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.448459 kubelet[2529]: E0620 19:23:31.448423 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.549113 kubelet[2529]: E0620 19:23:31.549032 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.649718 kubelet[2529]: E0620 19:23:31.649691 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.750727 kubelet[2529]: E0620 19:23:31.750699 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.851369 kubelet[2529]: E0620 19:23:31.851341 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:31.952383 kubelet[2529]: E0620 19:23:31.952351 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:32.052980 kubelet[2529]: E0620 19:23:32.052951 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:32.153621 kubelet[2529]: E0620 19:23:32.153546 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:32.254310 kubelet[2529]: E0620 19:23:32.254275 2529 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:32.271942 systemd[1]: Reload requested from client PID 2798 ('systemctl') (unit session-9.scope)... Jun 20 19:23:32.271961 systemd[1]: Reloading... Jun 20 19:23:32.355288 kubelet[2529]: E0620 19:23:32.355258 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:32.364684 zram_generator::config[2854]: No configuration found. Jun 20 19:23:32.440672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:23:32.449012 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:23:32.455394 kubelet[2529]: E0620 19:23:32.455375 2529 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:32.525256 systemd[1]: Reloading finished in 252 ms. Jun 20 19:23:32.548681 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:32.562029 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:23:32.562195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:32.562232 systemd[1]: kubelet.service: Consumed 537ms CPU time, 125.9M memory peak. Jun 20 19:23:32.563850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:33.514081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:23:33.519927 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:23:33.609580 kubelet[2909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:23:33.609580 kubelet[2909]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:23:33.609580 kubelet[2909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:23:33.610496 kubelet[2909]: I0620 19:23:33.609634 2909 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:23:33.617895 kubelet[2909]: I0620 19:23:33.617584 2909 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:23:33.617895 kubelet[2909]: I0620 19:23:33.617600 2909 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:23:33.618107 kubelet[2909]: I0620 19:23:33.618099 2909 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:23:33.618969 kubelet[2909]: I0620 19:23:33.618936 2909 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:23:33.625797 kubelet[2909]: I0620 19:23:33.625787 2909 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:23:33.628812 kubelet[2909]: I0620 19:23:33.628764 2909 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:23:33.633815 kubelet[2909]: I0620 19:23:33.633806 2909 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:23:33.634475 kubelet[2909]: I0620 19:23:33.633911 2909 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:23:33.634475 kubelet[2909]: I0620 19:23:33.633978 2909 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:23:33.634475 kubelet[2909]: I0620 19:23:33.633994 2909 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:23:33.634475 kubelet[2909]: I0620 19:23:33.634091 2909 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:23:33.634586 kubelet[2909]: I0620 19:23:33.634096 2909 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:23:33.634586 kubelet[2909]: I0620 19:23:33.634112 2909 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:33.634743 kubelet[2909]: I0620 19:23:33.634710 2909 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:23:33.634802 kubelet[2909]: I0620 19:23:33.634796 2909 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:23:33.634847 kubelet[2909]: I0620 19:23:33.634843 2909 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:23:33.634878 kubelet[2909]: I0620 19:23:33.634874 2909 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:23:33.636072 kubelet[2909]: I0620 19:23:33.636061 2909 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:23:33.637951 kubelet[2909]: I0620 19:23:33.637090 2909 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:23:33.640074 kubelet[2909]: I0620 19:23:33.640063 2909 server.go:1274] "Started kubelet" Jun 20 19:23:33.646772 kubelet[2909]: I0620 19:23:33.646677 2909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 
19:23:33.649166 kubelet[2909]: I0620 19:23:33.648703 2909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:23:33.649166 kubelet[2909]: I0620 19:23:33.648816 2909 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:23:33.655107 kubelet[2909]: I0620 19:23:33.655013 2909 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:23:33.658350 kubelet[2909]: I0620 19:23:33.658046 2909 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:23:33.659775 kubelet[2909]: I0620 19:23:33.659707 2909 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:23:33.660744 kubelet[2909]: I0620 19:23:33.660730 2909 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:23:33.663019 kubelet[2909]: I0620 19:23:33.662683 2909 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:23:33.663019 kubelet[2909]: I0620 19:23:33.662745 2909 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:23:33.663694 kubelet[2909]: E0620 19:23:33.663678 2909 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:23:33.664573 kubelet[2909]: I0620 19:23:33.664297 2909 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:23:33.664573 kubelet[2909]: I0620 19:23:33.664352 2909 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:23:33.665218 kubelet[2909]: I0620 19:23:33.665203 2909 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 20 19:23:33.666251 kubelet[2909]: I0620 19:23:33.666237 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:23:33.666251 kubelet[2909]: I0620 19:23:33.666250 2909 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:23:33.666314 kubelet[2909]: I0620 19:23:33.666259 2909 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:23:33.666314 kubelet[2909]: E0620 19:23:33.666280 2909 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:23:33.669013 kubelet[2909]: I0620 19:23:33.668989 2909 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:23:33.714848 kubelet[2909]: I0620 19:23:33.714829 2909 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:23:33.714848 kubelet[2909]: I0620 19:23:33.714841 2909 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:23:33.714848 kubelet[2909]: I0620 19:23:33.714870 2909 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:33.714973 kubelet[2909]: I0620 19:23:33.714959 2909 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:23:33.714973 kubelet[2909]: I0620 19:23:33.714965 2909 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:23:33.715009 kubelet[2909]: I0620 19:23:33.714977 2909 policy_none.go:49] "None policy: Start" Jun 20 19:23:33.715606 kubelet[2909]: I0620 19:23:33.715594 2909 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:23:33.715606 kubelet[2909]: I0620 19:23:33.715606 2909 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:23:33.715728 kubelet[2909]: I0620 19:23:33.715716 2909 state_mem.go:75] "Updated machine memory state" Jun 20 19:23:33.719780 kubelet[2909]: I0620 19:23:33.719771 2909 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:23:33.719979 kubelet[2909]: I0620 19:23:33.719921 2909 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:23:33.719979 kubelet[2909]: I0620 19:23:33.719930 2909 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:23:33.720459 kubelet[2909]: I0620 19:23:33.720325 2909 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:23:33.828529 kubelet[2909]: I0620 19:23:33.828466 2909 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:23:33.833812 kubelet[2909]: I0620 19:23:33.833729 2909 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jun 20 19:23:33.833812 kubelet[2909]: I0620 19:23:33.833788 2909 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 20 19:23:33.963821 kubelet[2909]: I0620 19:23:33.963729 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.963821 kubelet[2909]: I0620 19:23:33.963768 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:33.963821 kubelet[2909]: I0620 19:23:33.963781 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66ef5492c281852280d687861a23a554-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"66ef5492c281852280d687861a23a554\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:33.963821 kubelet[2909]: I0620 19:23:33.963792 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66ef5492c281852280d687861a23a554-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"66ef5492c281852280d687861a23a554\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:33.963821 kubelet[2909]: I0620 19:23:33.963805 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.963990 kubelet[2909]: I0620 19:23:33.963821 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.963990 kubelet[2909]: I0620 19:23:33.963831 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.963990 kubelet[2909]: I0620 19:23:33.963841 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.963990 kubelet[2909]: I0620 19:23:33.963850 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66ef5492c281852280d687861a23a554-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"66ef5492c281852280d687861a23a554\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:34.640864 kubelet[2909]: I0620 19:23:34.640838 2909 apiserver.go:52] "Watching apiserver" Jun 20 19:23:34.663775 kubelet[2909]: I0620 19:23:34.663746 2909 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:23:34.703288 kubelet[2909]: E0620 19:23:34.703119 2909 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:34.717492 kubelet[2909]: I0620 19:23:34.717440 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.717425609 podStartE2EDuration="1.717425609s" podCreationTimestamp="2025-06-20 19:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:23:34.713114548 +0000 UTC m=+1.141353225" watchObservedRunningTime="2025-06-20 19:23:34.717425609 +0000 UTC m=+1.145664278" Jun 20 19:23:34.721962 kubelet[2909]: I0620 19:23:34.721883 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.72187025 podStartE2EDuration="1.72187025s" podCreationTimestamp="2025-06-20 19:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 
19:23:34.717840985 +0000 UTC m=+1.146079652" watchObservedRunningTime="2025-06-20 19:23:34.72187025 +0000 UTC m=+1.150108926" Jun 20 19:23:34.727905 kubelet[2909]: I0620 19:23:34.727869 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.727858575 podStartE2EDuration="1.727858575s" podCreationTimestamp="2025-06-20 19:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:23:34.722848743 +0000 UTC m=+1.151087411" watchObservedRunningTime="2025-06-20 19:23:34.727858575 +0000 UTC m=+1.156097253" Jun 20 19:23:36.283491 kubelet[2909]: I0620 19:23:36.283399 2909 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:23:36.284001 containerd[1619]: time="2025-06-20T19:23:36.283939451Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:23:36.284204 kubelet[2909]: I0620 19:23:36.284103 2909 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:23:36.908287 systemd[1]: Created slice kubepods-besteffort-podf8424112_6868_45fb_8ba7_a368e108d8bd.slice - libcontainer container kubepods-besteffort-podf8424112_6868_45fb_8ba7_a368e108d8bd.slice. 
Jun 20 19:23:36.982997 kubelet[2909]: I0620 19:23:36.982951 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f8424112-6868-45fb-8ba7-a368e108d8bd-kube-proxy\") pod \"kube-proxy-vcf7s\" (UID: \"f8424112-6868-45fb-8ba7-a368e108d8bd\") " pod="kube-system/kube-proxy-vcf7s" Jun 20 19:23:36.982997 kubelet[2909]: I0620 19:23:36.982991 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8424112-6868-45fb-8ba7-a368e108d8bd-lib-modules\") pod \"kube-proxy-vcf7s\" (UID: \"f8424112-6868-45fb-8ba7-a368e108d8bd\") " pod="kube-system/kube-proxy-vcf7s" Jun 20 19:23:36.982997 kubelet[2909]: I0620 19:23:36.983003 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8424112-6868-45fb-8ba7-a368e108d8bd-xtables-lock\") pod \"kube-proxy-vcf7s\" (UID: \"f8424112-6868-45fb-8ba7-a368e108d8bd\") " pod="kube-system/kube-proxy-vcf7s" Jun 20 19:23:36.983138 kubelet[2909]: I0620 19:23:36.983013 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7tn6\" (UniqueName: \"kubernetes.io/projected/f8424112-6868-45fb-8ba7-a368e108d8bd-kube-api-access-s7tn6\") pod \"kube-proxy-vcf7s\" (UID: \"f8424112-6868-45fb-8ba7-a368e108d8bd\") " pod="kube-system/kube-proxy-vcf7s" Jun 20 19:23:37.215591 containerd[1619]: time="2025-06-20T19:23:37.215529434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcf7s,Uid:f8424112-6868-45fb-8ba7-a368e108d8bd,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:37.226604 containerd[1619]: time="2025-06-20T19:23:37.226551161Z" level=info msg="connecting to shim 165472bc4af39668eb3435d2de16995321a1d26d2b8b80791dc537be9a860eeb" 
address="unix:///run/containerd/s/05985983715729d52fafcad15e35c44e9233efba0439316faf62c59033973f42" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:37.247751 systemd[1]: Started cri-containerd-165472bc4af39668eb3435d2de16995321a1d26d2b8b80791dc537be9a860eeb.scope - libcontainer container 165472bc4af39668eb3435d2de16995321a1d26d2b8b80791dc537be9a860eeb. Jun 20 19:23:37.266212 containerd[1619]: time="2025-06-20T19:23:37.266190429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcf7s,Uid:f8424112-6868-45fb-8ba7-a368e108d8bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"165472bc4af39668eb3435d2de16995321a1d26d2b8b80791dc537be9a860eeb\"" Jun 20 19:23:37.269632 containerd[1619]: time="2025-06-20T19:23:37.269585319Z" level=info msg="CreateContainer within sandbox \"165472bc4af39668eb3435d2de16995321a1d26d2b8b80791dc537be9a860eeb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:23:37.276848 containerd[1619]: time="2025-06-20T19:23:37.276822175Z" level=info msg="Container 1b350dc3e6134b7c87f0c9ac5700fa59eb6a9a2702c57eebeb01249e49036bad: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:37.277313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582138042.mount: Deactivated successfully. 
Jun 20 19:23:37.280749 containerd[1619]: time="2025-06-20T19:23:37.280709122Z" level=info msg="CreateContainer within sandbox \"165472bc4af39668eb3435d2de16995321a1d26d2b8b80791dc537be9a860eeb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b350dc3e6134b7c87f0c9ac5700fa59eb6a9a2702c57eebeb01249e49036bad\"" Jun 20 19:23:37.281288 containerd[1619]: time="2025-06-20T19:23:37.281273475Z" level=info msg="StartContainer for \"1b350dc3e6134b7c87f0c9ac5700fa59eb6a9a2702c57eebeb01249e49036bad\"" Jun 20 19:23:37.282121 containerd[1619]: time="2025-06-20T19:23:37.282102455Z" level=info msg="connecting to shim 1b350dc3e6134b7c87f0c9ac5700fa59eb6a9a2702c57eebeb01249e49036bad" address="unix:///run/containerd/s/05985983715729d52fafcad15e35c44e9233efba0439316faf62c59033973f42" protocol=ttrpc version=3 Jun 20 19:23:37.296863 systemd[1]: Started cri-containerd-1b350dc3e6134b7c87f0c9ac5700fa59eb6a9a2702c57eebeb01249e49036bad.scope - libcontainer container 1b350dc3e6134b7c87f0c9ac5700fa59eb6a9a2702c57eebeb01249e49036bad. Jun 20 19:23:37.307352 systemd[1]: Created slice kubepods-besteffort-pod72cf94a7_c59b_439c_92aa_4e5a55c59060.slice - libcontainer container kubepods-besteffort-pod72cf94a7_c59b_439c_92aa_4e5a55c59060.slice. 
Jun 20 19:23:37.350139 containerd[1619]: time="2025-06-20T19:23:37.350111280Z" level=info msg="StartContainer for \"1b350dc3e6134b7c87f0c9ac5700fa59eb6a9a2702c57eebeb01249e49036bad\" returns successfully" Jun 20 19:23:37.386127 kubelet[2909]: I0620 19:23:37.386094 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72cf94a7-c59b-439c-92aa-4e5a55c59060-var-lib-calico\") pod \"tigera-operator-6c78c649f6-5kzkw\" (UID: \"72cf94a7-c59b-439c-92aa-4e5a55c59060\") " pod="tigera-operator/tigera-operator-6c78c649f6-5kzkw" Jun 20 19:23:37.386362 kubelet[2909]: I0620 19:23:37.386172 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bdn5\" (UniqueName: \"kubernetes.io/projected/72cf94a7-c59b-439c-92aa-4e5a55c59060-kube-api-access-8bdn5\") pod \"tigera-operator-6c78c649f6-5kzkw\" (UID: \"72cf94a7-c59b-439c-92aa-4e5a55c59060\") " pod="tigera-operator/tigera-operator-6c78c649f6-5kzkw" Jun 20 19:23:37.611235 containerd[1619]: time="2025-06-20T19:23:37.611197146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-5kzkw,Uid:72cf94a7-c59b-439c-92aa-4e5a55c59060,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:23:37.622245 containerd[1619]: time="2025-06-20T19:23:37.622215518Z" level=info msg="connecting to shim 0fd6e38682a8d2d8ca82986dcd030ac6febfdde3f5eb815fbcba327a612f73a1" address="unix:///run/containerd/s/612a349c255aeaaf5205abcc46558333ffa918c6673b63e2f5b414e57c393771" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:37.641768 systemd[1]: Started cri-containerd-0fd6e38682a8d2d8ca82986dcd030ac6febfdde3f5eb815fbcba327a612f73a1.scope - libcontainer container 0fd6e38682a8d2d8ca82986dcd030ac6febfdde3f5eb815fbcba327a612f73a1. 
Jun 20 19:23:37.680953 containerd[1619]: time="2025-06-20T19:23:37.680924329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6c78c649f6-5kzkw,Uid:72cf94a7-c59b-439c-92aa-4e5a55c59060,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0fd6e38682a8d2d8ca82986dcd030ac6febfdde3f5eb815fbcba327a612f73a1\"" Jun 20 19:23:37.683202 containerd[1619]: time="2025-06-20T19:23:37.683180647Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:23:37.709351 kubelet[2909]: I0620 19:23:37.709069 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vcf7s" podStartSLOduration=1.709054979 podStartE2EDuration="1.709054979s" podCreationTimestamp="2025-06-20 19:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:23:37.708778423 +0000 UTC m=+4.137017099" watchObservedRunningTime="2025-06-20 19:23:37.709054979 +0000 UTC m=+4.137293656" Jun 20 19:23:38.095863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount142476307.mount: Deactivated successfully. Jun 20 19:23:39.118770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910923917.mount: Deactivated successfully. 
Jun 20 19:23:39.480166 containerd[1619]: time="2025-06-20T19:23:39.480101754Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:39.480720 containerd[1619]: time="2025-06-20T19:23:39.480696474Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:23:39.481553 containerd[1619]: time="2025-06-20T19:23:39.480972449Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:39.481900 containerd[1619]: time="2025-06-20T19:23:39.481884596Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:39.482322 containerd[1619]: time="2025-06-20T19:23:39.482308862Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 1.799107099s" Jun 20 19:23:39.482369 containerd[1619]: time="2025-06-20T19:23:39.482361004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:23:39.484178 containerd[1619]: time="2025-06-20T19:23:39.484163063Z" level=info msg="CreateContainer within sandbox \"0fd6e38682a8d2d8ca82986dcd030ac6febfdde3f5eb815fbcba327a612f73a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:23:39.487671 containerd[1619]: time="2025-06-20T19:23:39.487638335Z" level=info msg="Container 
4a6bdf8cea9b2d441727a9488badac79f174d333e4e594c90e0ce3a769e91a29: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:39.491574 containerd[1619]: time="2025-06-20T19:23:39.491540172Z" level=info msg="CreateContainer within sandbox \"0fd6e38682a8d2d8ca82986dcd030ac6febfdde3f5eb815fbcba327a612f73a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4a6bdf8cea9b2d441727a9488badac79f174d333e4e594c90e0ce3a769e91a29\"" Jun 20 19:23:39.492159 containerd[1619]: time="2025-06-20T19:23:39.492074377Z" level=info msg="StartContainer for \"4a6bdf8cea9b2d441727a9488badac79f174d333e4e594c90e0ce3a769e91a29\"" Jun 20 19:23:39.492669 containerd[1619]: time="2025-06-20T19:23:39.492642917Z" level=info msg="connecting to shim 4a6bdf8cea9b2d441727a9488badac79f174d333e4e594c90e0ce3a769e91a29" address="unix:///run/containerd/s/612a349c255aeaaf5205abcc46558333ffa918c6673b63e2f5b414e57c393771" protocol=ttrpc version=3 Jun 20 19:23:39.514813 systemd[1]: Started cri-containerd-4a6bdf8cea9b2d441727a9488badac79f174d333e4e594c90e0ce3a769e91a29.scope - libcontainer container 4a6bdf8cea9b2d441727a9488badac79f174d333e4e594c90e0ce3a769e91a29. 
Jun 20 19:23:39.537242 containerd[1619]: time="2025-06-20T19:23:39.537200732Z" level=info msg="StartContainer for \"4a6bdf8cea9b2d441727a9488badac79f174d333e4e594c90e0ce3a769e91a29\" returns successfully" Jun 20 19:23:39.730453 kubelet[2909]: I0620 19:23:39.730303 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6c78c649f6-5kzkw" podStartSLOduration=0.929391197 podStartE2EDuration="2.730289956s" podCreationTimestamp="2025-06-20 19:23:37 +0000 UTC" firstStartedPulling="2025-06-20 19:23:37.682182341 +0000 UTC m=+4.110421007" lastFinishedPulling="2025-06-20 19:23:39.483081099 +0000 UTC m=+5.911319766" observedRunningTime="2025-06-20 19:23:39.729814246 +0000 UTC m=+6.158052930" watchObservedRunningTime="2025-06-20 19:23:39.730289956 +0000 UTC m=+6.158528634" Jun 20 19:23:44.715169 sudo[1945]: pam_unix(sudo:session): session closed for user root Jun 20 19:23:44.718230 sshd[1944]: Connection closed by 147.75.109.163 port 60468 Jun 20 19:23:44.718585 sshd-session[1942]: pam_unix(sshd:session): session closed for user core Jun 20 19:23:44.721404 systemd[1]: sshd@6-139.178.70.108:22-147.75.109.163:60468.service: Deactivated successfully. Jun 20 19:23:44.723004 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:23:44.724571 systemd[1]: session-9.scope: Consumed 2.482s CPU time, 151.7M memory peak. Jun 20 19:23:44.727128 systemd-logind[1576]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:23:44.729997 systemd-logind[1576]: Removed session 9. Jun 20 19:23:47.520479 systemd[1]: Created slice kubepods-besteffort-pod4715c2f9_a18b_439a_89f4_f5925e979efb.slice - libcontainer container kubepods-besteffort-pod4715c2f9_a18b_439a_89f4_f5925e979efb.slice. 
Jun 20 19:23:47.549606 kubelet[2909]: I0620 19:23:47.549573 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4715c2f9-a18b-439a-89f4-f5925e979efb-tigera-ca-bundle\") pod \"calico-typha-77fb6fd785-m46rf\" (UID: \"4715c2f9-a18b-439a-89f4-f5925e979efb\") " pod="calico-system/calico-typha-77fb6fd785-m46rf" Jun 20 19:23:47.549606 kubelet[2909]: I0620 19:23:47.549614 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4715c2f9-a18b-439a-89f4-f5925e979efb-typha-certs\") pod \"calico-typha-77fb6fd785-m46rf\" (UID: \"4715c2f9-a18b-439a-89f4-f5925e979efb\") " pod="calico-system/calico-typha-77fb6fd785-m46rf" Jun 20 19:23:47.550470 kubelet[2909]: I0620 19:23:47.549632 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qdpr\" (UniqueName: \"kubernetes.io/projected/4715c2f9-a18b-439a-89f4-f5925e979efb-kube-api-access-6qdpr\") pod \"calico-typha-77fb6fd785-m46rf\" (UID: \"4715c2f9-a18b-439a-89f4-f5925e979efb\") " pod="calico-system/calico-typha-77fb6fd785-m46rf" Jun 20 19:23:47.826742 containerd[1619]: time="2025-06-20T19:23:47.826257412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77fb6fd785-m46rf,Uid:4715c2f9-a18b-439a-89f4-f5925e979efb,Namespace:calico-system,Attempt:0,}" Jun 20 19:23:47.847423 systemd[1]: Created slice kubepods-besteffort-podffe350fc_667d_41a4_9315_d8a1f87f7b5e.slice - libcontainer container kubepods-besteffort-podffe350fc_667d_41a4_9315_d8a1f87f7b5e.slice. 
Jun 20 19:23:47.852015 kubelet[2909]: I0620 19:23:47.851562 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-lib-modules\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852015 kubelet[2909]: I0620 19:23:47.851592 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-tigera-ca-bundle\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852015 kubelet[2909]: I0620 19:23:47.851621 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4np\" (UniqueName: \"kubernetes.io/projected/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-kube-api-access-fv4np\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852015 kubelet[2909]: I0620 19:23:47.851640 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-cni-net-dir\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852015 kubelet[2909]: I0620 19:23:47.851669 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-var-lib-calico\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852180 kubelet[2909]: I0620 
19:23:47.851687 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-xtables-lock\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852180 kubelet[2909]: I0620 19:23:47.851704 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-flexvol-driver-host\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852180 kubelet[2909]: I0620 19:23:47.851724 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-cni-bin-dir\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852180 kubelet[2909]: I0620 19:23:47.851740 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-cni-log-dir\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852180 kubelet[2909]: I0620 19:23:47.851750 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-node-certs\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852301 kubelet[2909]: I0620 19:23:47.851765 2909 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-policysync\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.852301 kubelet[2909]: I0620 19:23:47.851776 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ffe350fc-667d-41a4-9315-d8a1f87f7b5e-var-run-calico\") pod \"calico-node-rk7mh\" (UID: \"ffe350fc-667d-41a4-9315-d8a1f87f7b5e\") " pod="calico-system/calico-node-rk7mh" Jun 20 19:23:47.886556 containerd[1619]: time="2025-06-20T19:23:47.886457672Z" level=info msg="connecting to shim 9c8a6dd3bfff36c236402bb2ec6c92da56017b0fe9705ea4fb1e6255308dc5fc" address="unix:///run/containerd/s/d8dfd608829f284339d5e157a91a3fb00d99608d7aa6750e6bb2b2016e83ecb9" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:47.932840 systemd[1]: Started cri-containerd-9c8a6dd3bfff36c236402bb2ec6c92da56017b0fe9705ea4fb1e6255308dc5fc.scope - libcontainer container 9c8a6dd3bfff36c236402bb2ec6c92da56017b0fe9705ea4fb1e6255308dc5fc. Jun 20 19:23:47.971257 kubelet[2909]: E0620 19:23:47.971221 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:47.971257 kubelet[2909]: W0620 19:23:47.971237 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:47.971807 kubelet[2909]: E0620 19:23:47.971688 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.024037 containerd[1619]: time="2025-06-20T19:23:48.024008652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77fb6fd785-m46rf,Uid:4715c2f9-a18b-439a-89f4-f5925e979efb,Namespace:calico-system,Attempt:0,} returns sandbox id \"9c8a6dd3bfff36c236402bb2ec6c92da56017b0fe9705ea4fb1e6255308dc5fc\"" Jun 20 19:23:48.024964 containerd[1619]: time="2025-06-20T19:23:48.024946758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:23:48.093074 kubelet[2909]: E0620 19:23:48.093040 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbzl5" podUID="7c012b14-211b-4b5d-8a87-e4c6c0b434f3" Jun 20 19:23:48.143708 kubelet[2909]: E0620 19:23:48.143631 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.143708 kubelet[2909]: W0620 19:23:48.143651 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.143708 kubelet[2909]: E0620 19:23:48.143710 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.143897 kubelet[2909]: E0620 19:23:48.143879 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.143897 kubelet[2909]: W0620 19:23:48.143892 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.143953 kubelet[2909]: E0620 19:23:48.143917 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:48.144091 kubelet[2909]: E0620 19:23:48.144071 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.144091 kubelet[2909]: W0620 19:23:48.144086 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.144160 kubelet[2909]: E0620 19:23:48.144098 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.144258 kubelet[2909]: E0620 19:23:48.144241 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.144286 kubelet[2909]: W0620 19:23:48.144262 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.144286 kubelet[2909]: E0620 19:23:48.144271 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:48.144426 kubelet[2909]: E0620 19:23:48.144409 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.144426 kubelet[2909]: W0620 19:23:48.144419 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.144476 kubelet[2909]: E0620 19:23:48.144427 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.144577 kubelet[2909]: E0620 19:23:48.144563 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.144577 kubelet[2909]: W0620 19:23:48.144574 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.144633 kubelet[2909]: E0620 19:23:48.144582 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:48.153637 kubelet[2909]: E0620 19:23:48.153568 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.153637 kubelet[2909]: W0620 19:23:48.153583 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.153637 kubelet[2909]: E0620 19:23:48.153598 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.153897 kubelet[2909]: I0620 19:23:48.153616 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c012b14-211b-4b5d-8a87-e4c6c0b434f3-kubelet-dir\") pod \"csi-node-driver-zbzl5\" (UID: \"7c012b14-211b-4b5d-8a87-e4c6c0b434f3\") " pod="calico-system/csi-node-driver-zbzl5" Jun 20 19:23:48.154194 kubelet[2909]: I0620 19:23:48.154104 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq59g\" (UniqueName: \"kubernetes.io/projected/7c012b14-211b-4b5d-8a87-e4c6c0b434f3-kube-api-access-gq59g\") pod \"csi-node-driver-zbzl5\" (UID: \"7c012b14-211b-4b5d-8a87-e4c6c0b434f3\") " pod="calico-system/csi-node-driver-zbzl5" Jun 20 19:23:48.154503 containerd[1619]: time="2025-06-20T19:23:48.153936843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rk7mh,Uid:ffe350fc-667d-41a4-9315-d8a1f87f7b5e,Namespace:calico-system,Attempt:0,}" Jun 20 19:23:48.155447 kubelet[2909]: I0620 19:23:48.154883 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7c012b14-211b-4b5d-8a87-e4c6c0b434f3-varrun\") pod \"csi-node-driver-zbzl5\" (UID: \"7c012b14-211b-4b5d-8a87-e4c6c0b434f3\") " pod="calico-system/csi-node-driver-zbzl5" Jun 20 19:23:48.156289 kubelet[2909]: I0620 19:23:48.155897 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7c012b14-211b-4b5d-8a87-e4c6c0b434f3-socket-dir\") pod \"csi-node-driver-zbzl5\" (UID: \"7c012b14-211b-4b5d-8a87-e4c6c0b434f3\") " pod="calico-system/csi-node-driver-zbzl5" Jun 20 19:23:48.159484 kubelet[2909]: I0620 19:23:48.158287 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7c012b14-211b-4b5d-8a87-e4c6c0b434f3-registration-dir\") pod \"csi-node-driver-zbzl5\" (UID: \"7c012b14-211b-4b5d-8a87-e4c6c0b434f3\") " pod="calico-system/csi-node-driver-zbzl5" Jun 20 19:23:48.159484 kubelet[2909]: E0620 19:23:48.159191 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.159484 kubelet[2909]: W0620 19:23:48.159199 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.160173 kubelet[2909]: E0620 19:23:48.159207 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.166630 containerd[1619]: time="2025-06-20T19:23:48.166584833Z" level=info msg="connecting to shim 4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83" address="unix:///run/containerd/s/daa9bc7dca13463268fb4f704d16c9b832126f1a9df3f64caba0e2e0f362c690" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:48.191848 systemd[1]: Started cri-containerd-4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83.scope - libcontainer container 4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83. Jun 20 19:23:48.216741 containerd[1619]: time="2025-06-20T19:23:48.216710158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rk7mh,Uid:ffe350fc-667d-41a4-9315-d8a1f87f7b5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83\"" Jun 20 19:23:48.265063 kubelet[2909]: E0620 19:23:48.264847 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.265063 kubelet[2909]: W0620 19:23:48.264853 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.265063 kubelet[2909]: E0620 19:23:48.264861 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.265365 kubelet[2909]: E0620 19:23:48.265279 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.265365 kubelet[2909]: W0620 19:23:48.265287 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.265365 kubelet[2909]: E0620 19:23:48.265298 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:48.265712 kubelet[2909]: E0620 19:23:48.265700 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.266033 kubelet[2909]: W0620 19:23:48.266022 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.266097 kubelet[2909]: E0620 19:23:48.266088 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:48.267242 kubelet[2909]: E0620 19:23:48.267214 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.267242 kubelet[2909]: W0620 19:23:48.267225 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.267242 kubelet[2909]: E0620 19:23:48.267235 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:48.273506 kubelet[2909]: E0620 19:23:48.273469 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:48.273506 kubelet[2909]: W0620 19:23:48.273488 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:48.273506 kubelet[2909]: E0620 19:23:48.273505 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:49.529749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22322773.mount: Deactivated successfully. 
Jun 20 19:23:49.666917 kubelet[2909]: E0620 19:23:49.666626 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbzl5" podUID="7c012b14-211b-4b5d-8a87-e4c6c0b434f3"
Jun 20 19:23:50.322937 containerd[1619]: time="2025-06-20T19:23:50.322596709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:50.323282 containerd[1619]: time="2025-06-20T19:23:50.323149275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888"
Jun 20 19:23:50.323923 containerd[1619]: time="2025-06-20T19:23:50.323549878Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:50.324746 containerd[1619]: time="2025-06-20T19:23:50.324692617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:23:50.325302 containerd[1619]: time="2025-06-20T19:23:50.325166650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.300200453s"
Jun 20 19:23:50.325302 containerd[1619]: time="2025-06-20T19:23:50.325192299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\""
Jun 20 19:23:50.326695 containerd[1619]: time="2025-06-20T19:23:50.325990696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\""
Jun 20 19:23:50.337689 containerd[1619]: time="2025-06-20T19:23:50.337481503Z" level=info msg="CreateContainer within sandbox \"9c8a6dd3bfff36c236402bb2ec6c92da56017b0fe9705ea4fb1e6255308dc5fc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jun 20 19:23:50.348708 containerd[1619]: time="2025-06-20T19:23:50.347461447Z" level=info msg="Container e8c5b11678173cc5be9fcbc2bbfeb59e9fa4c88d1c8560dafbe7c30e4f5bf63f: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:23:50.354005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1625498707.mount: Deactivated successfully.
Jun 20 19:23:50.357272 containerd[1619]: time="2025-06-20T19:23:50.357238011Z" level=info msg="CreateContainer within sandbox \"9c8a6dd3bfff36c236402bb2ec6c92da56017b0fe9705ea4fb1e6255308dc5fc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e8c5b11678173cc5be9fcbc2bbfeb59e9fa4c88d1c8560dafbe7c30e4f5bf63f\""
Jun 20 19:23:50.358938 containerd[1619]: time="2025-06-20T19:23:50.358728185Z" level=info msg="StartContainer for \"e8c5b11678173cc5be9fcbc2bbfeb59e9fa4c88d1c8560dafbe7c30e4f5bf63f\""
Jun 20 19:23:50.359685 containerd[1619]: time="2025-06-20T19:23:50.359631994Z" level=info msg="connecting to shim e8c5b11678173cc5be9fcbc2bbfeb59e9fa4c88d1c8560dafbe7c30e4f5bf63f" address="unix:///run/containerd/s/d8dfd608829f284339d5e157a91a3fb00d99608d7aa6750e6bb2b2016e83ecb9" protocol=ttrpc version=3
Jun 20 19:23:50.384854 systemd[1]: Started cri-containerd-e8c5b11678173cc5be9fcbc2bbfeb59e9fa4c88d1c8560dafbe7c30e4f5bf63f.scope - libcontainer container e8c5b11678173cc5be9fcbc2bbfeb59e9fa4c88d1c8560dafbe7c30e4f5bf63f.
Jun 20 19:23:50.434476 containerd[1619]: time="2025-06-20T19:23:50.434446334Z" level=info msg="StartContainer for \"e8c5b11678173cc5be9fcbc2bbfeb59e9fa4c88d1c8560dafbe7c30e4f5bf63f\" returns successfully"
Jun 20 19:23:50.864022 kubelet[2909]: E0620 19:23:50.863970 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.864022 kubelet[2909]: W0620 19:23:50.863987 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.864022 kubelet[2909]: E0620 19:23:50.864002 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.864528 kubelet[2909]: E0620 19:23:50.864462 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.864528 kubelet[2909]: W0620 19:23:50.864470 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.864528 kubelet[2909]: E0620 19:23:50.864477 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.864648 kubelet[2909]: E0620 19:23:50.864640 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.868665 kubelet[2909]: W0620 19:23:50.864678 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.868665 kubelet[2909]: E0620 19:23:50.864686 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.868665 kubelet[2909]: E0620 19:23:50.864854 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.868665 kubelet[2909]: W0620 19:23:50.864860 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.868665 kubelet[2909]: E0620 19:23:50.864867 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.868665 kubelet[2909]: E0620 19:23:50.864983 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.868665 kubelet[2909]: W0620 19:23:50.864988 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.868665 kubelet[2909]: E0620 19:23:50.864994 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.868665 kubelet[2909]: E0620 19:23:50.865090 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.868665 kubelet[2909]: W0620 19:23:50.865096 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870024 kubelet[2909]: E0620 19:23:50.865104 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870024 kubelet[2909]: E0620 19:23:50.865218 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870024 kubelet[2909]: W0620 19:23:50.865237 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870024 kubelet[2909]: E0620 19:23:50.865242 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870024 kubelet[2909]: E0620 19:23:50.865351 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870024 kubelet[2909]: W0620 19:23:50.865357 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870024 kubelet[2909]: E0620 19:23:50.865366 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870024 kubelet[2909]: E0620 19:23:50.865490 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870024 kubelet[2909]: W0620 19:23:50.865495 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870024 kubelet[2909]: E0620 19:23:50.865500 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870188 kubelet[2909]: E0620 19:23:50.865586 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870188 kubelet[2909]: W0620 19:23:50.865591 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870188 kubelet[2909]: E0620 19:23:50.865596 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870188 kubelet[2909]: E0620 19:23:50.865709 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870188 kubelet[2909]: W0620 19:23:50.865714 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870188 kubelet[2909]: E0620 19:23:50.865734 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870188 kubelet[2909]: E0620 19:23:50.865848 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870188 kubelet[2909]: W0620 19:23:50.865853 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870188 kubelet[2909]: E0620 19:23:50.865860 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870188 kubelet[2909]: E0620 19:23:50.865957 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870407 kubelet[2909]: W0620 19:23:50.865961 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870407 kubelet[2909]: E0620 19:23:50.865966 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870407 kubelet[2909]: E0620 19:23:50.866098 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870407 kubelet[2909]: W0620 19:23:50.866104 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870407 kubelet[2909]: E0620 19:23:50.866111 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.870407 kubelet[2909]: E0620 19:23:50.866282 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.870407 kubelet[2909]: W0620 19:23:50.866287 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.870407 kubelet[2909]: E0620 19:23:50.866293 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.877527 kubelet[2909]: E0620 19:23:50.877507 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.877527 kubelet[2909]: W0620 19:23:50.877522 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.877738 kubelet[2909]: E0620 19:23:50.877536 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.877738 kubelet[2909]: E0620 19:23:50.877707 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.877738 kubelet[2909]: W0620 19:23:50.877712 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.877738 kubelet[2909]: E0620 19:23:50.877718 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880398 kubelet[2909]: E0620 19:23:50.877838 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880398 kubelet[2909]: W0620 19:23:50.877845 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880398 kubelet[2909]: E0620 19:23:50.877852 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880398 kubelet[2909]: E0620 19:23:50.877950 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880398 kubelet[2909]: W0620 19:23:50.877954 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880398 kubelet[2909]: E0620 19:23:50.877976 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880398 kubelet[2909]: E0620 19:23:50.878227 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880398 kubelet[2909]: W0620 19:23:50.878235 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880398 kubelet[2909]: E0620 19:23:50.878248 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880398 kubelet[2909]: E0620 19:23:50.878364 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880591 kubelet[2909]: W0620 19:23:50.878372 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880591 kubelet[2909]: E0620 19:23:50.878385 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880591 kubelet[2909]: E0620 19:23:50.878547 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880591 kubelet[2909]: W0620 19:23:50.878552 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880591 kubelet[2909]: E0620 19:23:50.878561 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880591 kubelet[2909]: E0620 19:23:50.878666 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880591 kubelet[2909]: W0620 19:23:50.878671 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880591 kubelet[2909]: E0620 19:23:50.878683 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880591 kubelet[2909]: E0620 19:23:50.878770 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880591 kubelet[2909]: W0620 19:23:50.878775 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880835 kubelet[2909]: E0620 19:23:50.878785 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880835 kubelet[2909]: E0620 19:23:50.878909 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880835 kubelet[2909]: W0620 19:23:50.878914 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880835 kubelet[2909]: E0620 19:23:50.878963 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880835 kubelet[2909]: E0620 19:23:50.879227 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880835 kubelet[2909]: W0620 19:23:50.879234 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880835 kubelet[2909]: E0620 19:23:50.879258 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880835 kubelet[2909]: E0620 19:23:50.879321 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880835 kubelet[2909]: W0620 19:23:50.879327 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880835 kubelet[2909]: E0620 19:23:50.879353 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880994 kubelet[2909]: E0620 19:23:50.879450 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880994 kubelet[2909]: W0620 19:23:50.879456 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880994 kubelet[2909]: E0620 19:23:50.879467 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880994 kubelet[2909]: E0620 19:23:50.879569 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880994 kubelet[2909]: W0620 19:23:50.879575 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880994 kubelet[2909]: E0620 19:23:50.879581 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880994 kubelet[2909]: E0620 19:23:50.879692 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.880994 kubelet[2909]: W0620 19:23:50.879698 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.880994 kubelet[2909]: E0620 19:23:50.879705 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.880994 kubelet[2909]: E0620 19:23:50.879802 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.881218 kubelet[2909]: W0620 19:23:50.879808 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.881218 kubelet[2909]: E0620 19:23:50.879813 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.881218 kubelet[2909]: E0620 19:23:50.879913 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.881218 kubelet[2909]: W0620 19:23:50.879917 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.881218 kubelet[2909]: E0620 19:23:50.879922 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:50.881218 kubelet[2909]: E0620 19:23:50.880104 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:50.881218 kubelet[2909]: W0620 19:23:50.880109 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:50.881218 kubelet[2909]: E0620 19:23:50.880114 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:51.668395 kubelet[2909]: E0620 19:23:51.668112 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbzl5" podUID="7c012b14-211b-4b5d-8a87-e4c6c0b434f3"
Jun 20 19:23:51.802973 kubelet[2909]: I0620 19:23:51.802674 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jun 20 19:23:51.872228 kubelet[2909]: E0620 19:23:51.872210 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:51.872810 kubelet[2909]: W0620 19:23:51.872463 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:51.872810 kubelet[2909]: E0620 19:23:51.872485 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:51.872996 kubelet[2909]: E0620 19:23:51.872865 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:51.872996 kubelet[2909]: W0620 19:23:51.872874 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:51.872996 kubelet[2909]: E0620 19:23:51.872882 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 20 19:23:51.873235 kubelet[2909]: E0620 19:23:51.873160 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 20 19:23:51.873235 kubelet[2909]: W0620 19:23:51.873170 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 20 19:23:51.873235 kubelet[2909]: E0620 19:23:51.873179 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 20 19:23:51.873588 kubelet[2909]: E0620 19:23:51.873434 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.873588 kubelet[2909]: W0620 19:23:51.873443 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.873588 kubelet[2909]: E0620 19:23:51.873452 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.873887 kubelet[2909]: E0620 19:23:51.873830 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.873887 kubelet[2909]: W0620 19:23:51.873838 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.873887 kubelet[2909]: E0620 19:23:51.873845 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.874106 kubelet[2909]: E0620 19:23:51.874091 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.874251 kubelet[2909]: W0620 19:23:51.874184 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.874251 kubelet[2909]: E0620 19:23:51.874196 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.874531 kubelet[2909]: E0620 19:23:51.874442 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.874531 kubelet[2909]: W0620 19:23:51.874449 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.874531 kubelet[2909]: E0620 19:23:51.874457 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.874684 kubelet[2909]: E0620 19:23:51.874624 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.874684 kubelet[2909]: W0620 19:23:51.874632 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.874684 kubelet[2909]: E0620 19:23:51.874640 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.875087 kubelet[2909]: E0620 19:23:51.874941 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.875087 kubelet[2909]: W0620 19:23:51.874950 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.875087 kubelet[2909]: E0620 19:23:51.874956 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.875087 kubelet[2909]: E0620 19:23:51.875039 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.875087 kubelet[2909]: W0620 19:23:51.875044 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.875087 kubelet[2909]: E0620 19:23:51.875050 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.875270 kubelet[2909]: E0620 19:23:51.875259 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.875386 kubelet[2909]: W0620 19:23:51.875331 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.875386 kubelet[2909]: E0620 19:23:51.875342 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.875511 kubelet[2909]: E0620 19:23:51.875503 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.875619 kubelet[2909]: W0620 19:23:51.875553 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.875619 kubelet[2909]: E0620 19:23:51.875564 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.875868 kubelet[2909]: E0620 19:23:51.875755 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.875868 kubelet[2909]: W0620 19:23:51.875763 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.875868 kubelet[2909]: E0620 19:23:51.875770 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.876157 kubelet[2909]: E0620 19:23:51.876118 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.876157 kubelet[2909]: W0620 19:23:51.876125 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.876157 kubelet[2909]: E0620 19:23:51.876131 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.876402 kubelet[2909]: E0620 19:23:51.876358 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.876402 kubelet[2909]: W0620 19:23:51.876368 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.876639 kubelet[2909]: E0620 19:23:51.876376 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.886798 kubelet[2909]: E0620 19:23:51.886778 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.887197 kubelet[2909]: W0620 19:23:51.886986 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.887197 kubelet[2909]: E0620 19:23:51.887024 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.887351 kubelet[2909]: E0620 19:23:51.887300 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.887351 kubelet[2909]: W0620 19:23:51.887306 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.887351 kubelet[2909]: E0620 19:23:51.887318 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.887556 kubelet[2909]: E0620 19:23:51.887549 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.887671 kubelet[2909]: W0620 19:23:51.887590 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.887671 kubelet[2909]: E0620 19:23:51.887639 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.887968 kubelet[2909]: E0620 19:23:51.887846 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.887968 kubelet[2909]: W0620 19:23:51.887854 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.887968 kubelet[2909]: E0620 19:23:51.887862 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.888180 kubelet[2909]: E0620 19:23:51.888144 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.888180 kubelet[2909]: W0620 19:23:51.888151 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.888180 kubelet[2909]: E0620 19:23:51.888161 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.888470 kubelet[2909]: E0620 19:23:51.888417 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.888470 kubelet[2909]: W0620 19:23:51.888423 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.888470 kubelet[2909]: E0620 19:23:51.888432 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.889976 kubelet[2909]: E0620 19:23:51.889608 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.889976 kubelet[2909]: W0620 19:23:51.889619 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.889976 kubelet[2909]: E0620 19:23:51.889633 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.890084 kubelet[2909]: E0620 19:23:51.890001 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.890084 kubelet[2909]: W0620 19:23:51.890012 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.890084 kubelet[2909]: E0620 19:23:51.890027 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.890193 kubelet[2909]: E0620 19:23:51.890183 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.890193 kubelet[2909]: W0620 19:23:51.890192 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.890269 kubelet[2909]: E0620 19:23:51.890199 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.890412 kubelet[2909]: E0620 19:23:51.890402 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.890504 kubelet[2909]: W0620 19:23:51.890413 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.890504 kubelet[2909]: E0620 19:23:51.890431 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.890620 kubelet[2909]: E0620 19:23:51.890576 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.890683 kubelet[2909]: W0620 19:23:51.890674 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.890782 kubelet[2909]: E0620 19:23:51.890694 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.890782 kubelet[2909]: E0620 19:23:51.890771 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.890782 kubelet[2909]: W0620 19:23:51.890778 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.890841 kubelet[2909]: E0620 19:23:51.890789 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.891039 kubelet[2909]: E0620 19:23:51.891028 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.891039 kubelet[2909]: W0620 19:23:51.891036 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.891086 kubelet[2909]: E0620 19:23:51.891047 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.891237 kubelet[2909]: E0620 19:23:51.891189 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.891237 kubelet[2909]: W0620 19:23:51.891197 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.891237 kubelet[2909]: E0620 19:23:51.891203 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.891352 kubelet[2909]: E0620 19:23:51.891346 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.891684 kubelet[2909]: W0620 19:23:51.891382 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.891684 kubelet[2909]: E0620 19:23:51.891567 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.891817 kubelet[2909]: E0620 19:23:51.891770 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.891817 kubelet[2909]: W0620 19:23:51.891776 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.891817 kubelet[2909]: E0620 19:23:51.891785 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.891934 kubelet[2909]: E0620 19:23:51.891920 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.891934 kubelet[2909]: W0620 19:23:51.891930 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.891979 kubelet[2909]: E0620 19:23:51.891938 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:51.892207 kubelet[2909]: E0620 19:23:51.892180 2909 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:51.892207 kubelet[2909]: W0620 19:23:51.892187 2909 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:51.892207 kubelet[2909]: E0620 19:23:51.892193 2909 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:51.896196 containerd[1619]: time="2025-06-20T19:23:51.895834009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:51.896621 containerd[1619]: time="2025-06-20T19:23:51.896609455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:23:51.897304 containerd[1619]: time="2025-06-20T19:23:51.897288258Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:51.898498 containerd[1619]: time="2025-06-20T19:23:51.898481112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:51.898984 containerd[1619]: time="2025-06-20T19:23:51.898755860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.572743991s" Jun 20 19:23:51.898984 containerd[1619]: time="2025-06-20T19:23:51.898925535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:23:51.901305 containerd[1619]: time="2025-06-20T19:23:51.901282844Z" level=info msg="CreateContainer within sandbox \"4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:23:51.909717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74034892.mount: Deactivated successfully. Jun 20 19:23:51.911185 containerd[1619]: time="2025-06-20T19:23:51.910871427Z" level=info msg="Container 6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:51.919991 containerd[1619]: time="2025-06-20T19:23:51.919895041Z" level=info msg="CreateContainer within sandbox \"4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1\"" Jun 20 19:23:51.922417 containerd[1619]: time="2025-06-20T19:23:51.922322844Z" level=info msg="StartContainer for \"6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1\"" Jun 20 19:23:51.923457 containerd[1619]: time="2025-06-20T19:23:51.923393490Z" level=info msg="connecting to shim 6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1" address="unix:///run/containerd/s/daa9bc7dca13463268fb4f704d16c9b832126f1a9df3f64caba0e2e0f362c690" protocol=ttrpc version=3 Jun 20 19:23:51.948826 systemd[1]: Started cri-containerd-6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1.scope - libcontainer container 6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1. Jun 20 19:23:51.982932 containerd[1619]: time="2025-06-20T19:23:51.982909470Z" level=info msg="StartContainer for \"6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1\" returns successfully" Jun 20 19:23:51.986985 systemd[1]: cri-containerd-6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1.scope: Deactivated successfully. Jun 20 19:23:51.987163 systemd[1]: cri-containerd-6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1.scope: Consumed 21ms CPU time, 6.1M memory peak, 2M written to disk. 
Jun 20 19:23:52.038009 containerd[1619]: time="2025-06-20T19:23:52.037964419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1\" id:\"6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1\" pid:3603 exited_at:{seconds:1750447431 nanos:989449739}" Jun 20 19:23:52.038425 containerd[1619]: time="2025-06-20T19:23:52.038399441Z" level=info msg="received exit event container_id:\"6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1\" id:\"6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1\" pid:3603 exited_at:{seconds:1750447431 nanos:989449739}" Jun 20 19:23:52.050398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c3ac93e79043f9394825ccc4ade7932d5497e0ca8c24f5c05080483bd16d2b1-rootfs.mount: Deactivated successfully. Jun 20 19:23:52.795795 containerd[1619]: time="2025-06-20T19:23:52.795481913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:23:52.833977 kubelet[2909]: I0620 19:23:52.833933 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77fb6fd785-m46rf" podStartSLOduration=3.532824047 podStartE2EDuration="5.833919488s" podCreationTimestamp="2025-06-20 19:23:47 +0000 UTC" firstStartedPulling="2025-06-20 19:23:48.024814494 +0000 UTC m=+14.453053163" lastFinishedPulling="2025-06-20 19:23:50.325909932 +0000 UTC m=+16.754148604" observedRunningTime="2025-06-20 19:23:50.799414067 +0000 UTC m=+17.227652743" watchObservedRunningTime="2025-06-20 19:23:52.833919488 +0000 UTC m=+19.262158174" Jun 20 19:23:53.329952 kubelet[2909]: I0620 19:23:53.329925 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:23:53.667599 kubelet[2909]: E0620 19:23:53.667060 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbzl5" podUID="7c012b14-211b-4b5d-8a87-e4c6c0b434f3" Jun 20 19:23:55.667158 kubelet[2909]: E0620 19:23:55.667128 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbzl5" podUID="7c012b14-211b-4b5d-8a87-e4c6c0b434f3" Jun 20 19:23:55.983777 containerd[1619]: time="2025-06-20T19:23:55.983698587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:55.984323 containerd[1619]: time="2025-06-20T19:23:55.984290647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:23:55.984664 containerd[1619]: time="2025-06-20T19:23:55.984469956Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:55.985729 containerd[1619]: time="2025-06-20T19:23:55.985709528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:55.986276 containerd[1619]: time="2025-06-20T19:23:55.986257408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 3.190694407s" Jun 20 19:23:55.986346 containerd[1619]: time="2025-06-20T19:23:55.986335555Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:23:55.988368 containerd[1619]: time="2025-06-20T19:23:55.988327459Z" level=info msg="CreateContainer within sandbox \"4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:23:55.995605 containerd[1619]: time="2025-06-20T19:23:55.994820386Z" level=info msg="Container 9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:55.996811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095125123.mount: Deactivated successfully. Jun 20 19:23:56.014200 containerd[1619]: time="2025-06-20T19:23:56.014160681Z" level=info msg="CreateContainer within sandbox \"4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3\"" Jun 20 19:23:56.062181 containerd[1619]: time="2025-06-20T19:23:56.062154021Z" level=info msg="StartContainer for \"9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3\"" Jun 20 19:23:56.063460 containerd[1619]: time="2025-06-20T19:23:56.063424950Z" level=info msg="connecting to shim 9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3" address="unix:///run/containerd/s/daa9bc7dca13463268fb4f704d16c9b832126f1a9df3f64caba0e2e0f362c690" protocol=ttrpc version=3 Jun 20 19:23:56.089924 systemd[1]: Started cri-containerd-9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3.scope - libcontainer container 9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3. 
Jun 20 19:23:56.128200 containerd[1619]: time="2025-06-20T19:23:56.128170375Z" level=info msg="StartContainer for \"9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3\" returns successfully" Jun 20 19:23:57.668292 kubelet[2909]: E0620 19:23:57.667864 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbzl5" podUID="7c012b14-211b-4b5d-8a87-e4c6c0b434f3" Jun 20 19:23:58.821068 systemd[1]: cri-containerd-9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3.scope: Deactivated successfully. Jun 20 19:23:58.821543 systemd[1]: cri-containerd-9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3.scope: Consumed 328ms CPU time, 165.6M memory peak, 284K read from disk, 171.2M written to disk. Jun 20 19:23:58.878591 containerd[1619]: time="2025-06-20T19:23:58.878554640Z" level=info msg="received exit event container_id:\"9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3\" id:\"9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3\" pid:3673 exited_at:{seconds:1750447438 nanos:878383925}" Jun 20 19:23:58.888938 containerd[1619]: time="2025-06-20T19:23:58.888905782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3\" id:\"9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3\" pid:3673 exited_at:{seconds:1750447438 nanos:878383925}" Jun 20 19:23:58.916272 kubelet[2909]: I0620 19:23:58.916224 2909 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 19:23:59.074170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9028c6aee2253a83d84130d404a6a49015260de54e13b220e8b4debd5b230ae3-rootfs.mount: Deactivated successfully. 
Jun 20 19:23:59.308949 systemd[1]: Created slice kubepods-besteffort-pod22ecde37_4cc5_4658_bd30_85fbd7200f9b.slice - libcontainer container kubepods-besteffort-pod22ecde37_4cc5_4658_bd30_85fbd7200f9b.slice. Jun 20 19:23:59.313884 systemd[1]: Created slice kubepods-besteffort-pod1d842e7e_b13c_4fcb_abc1_9f6f125de686.slice - libcontainer container kubepods-besteffort-pod1d842e7e_b13c_4fcb_abc1_9f6f125de686.slice. Jun 20 19:23:59.334465 systemd[1]: Created slice kubepods-besteffort-podb168f456_8c10_4805_a1f9_c9eafb698c4f.slice - libcontainer container kubepods-besteffort-podb168f456_8c10_4805_a1f9_c9eafb698c4f.slice. Jun 20 19:23:59.338870 systemd[1]: Created slice kubepods-besteffort-pod21cf1d00_599a_4dd3_b593_a1e94e1246f9.slice - libcontainer container kubepods-besteffort-pod21cf1d00_599a_4dd3_b593_a1e94e1246f9.slice. Jun 20 19:23:59.343589 systemd[1]: Created slice kubepods-besteffort-podda2fa4c7_de11_4e7d_93ae_6602a4ac4909.slice - libcontainer container kubepods-besteffort-podda2fa4c7_de11_4e7d_93ae_6602a4ac4909.slice. Jun 20 19:23:59.347673 systemd[1]: Created slice kubepods-burstable-pod3de7d915_4af2_4d5c_b2d0_e893b49fda1d.slice - libcontainer container kubepods-burstable-pod3de7d915_4af2_4d5c_b2d0_e893b49fda1d.slice. Jun 20 19:23:59.352190 systemd[1]: Created slice kubepods-burstable-pod988cdc8c_2bae_47e2_bbdb_4d40efabde59.slice - libcontainer container kubepods-burstable-pod988cdc8c_2bae_47e2_bbdb_4d40efabde59.slice. Jun 20 19:23:59.357282 systemd[1]: Created slice kubepods-besteffort-pod2e0b2d2c_4387_4f0a_acd2_67ee85d925c8.slice - libcontainer container kubepods-besteffort-pod2e0b2d2c_4387_4f0a_acd2_67ee85d925c8.slice. 
Jun 20 19:23:59.368612 kubelet[2909]: I0620 19:23:59.358386 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7bg9\" (UniqueName: \"kubernetes.io/projected/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-kube-api-access-d7bg9\") pod \"calico-apiserver-57c64d78d5-28z6n\" (UID: \"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8\") " pod="calico-apiserver/calico-apiserver-57c64d78d5-28z6n" Jun 20 19:23:59.368612 kubelet[2909]: I0620 19:23:59.358405 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-backend-key-pair\") pod \"whisker-64684d96c9-xttd6\" (UID: \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\") " pod="calico-system/whisker-64684d96c9-xttd6" Jun 20 19:23:59.368612 kubelet[2909]: I0620 19:23:59.358419 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6xd\" (UniqueName: \"kubernetes.io/projected/988cdc8c-2bae-47e2-bbdb-4d40efabde59-kube-api-access-nj6xd\") pod \"coredns-7c65d6cfc9-6tz6l\" (UID: \"988cdc8c-2bae-47e2-bbdb-4d40efabde59\") " pod="kube-system/coredns-7c65d6cfc9-6tz6l" Jun 20 19:23:59.368612 kubelet[2909]: I0620 19:23:59.358430 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d842e7e-b13c-4fcb-abc1-9f6f125de686-goldmane-ca-bundle\") pod \"goldmane-dc7b455cb-kmtz6\" (UID: \"1d842e7e-b13c-4fcb-abc1-9f6f125de686\") " pod="calico-system/goldmane-dc7b455cb-kmtz6" Jun 20 19:23:59.368612 kubelet[2909]: I0620 19:23:59.358439 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/988cdc8c-2bae-47e2-bbdb-4d40efabde59-config-volume\") pod \"coredns-7c65d6cfc9-6tz6l\" (UID: 
\"988cdc8c-2bae-47e2-bbdb-4d40efabde59\") " pod="kube-system/coredns-7c65d6cfc9-6tz6l" Jun 20 19:23:59.368784 kubelet[2909]: I0620 19:23:59.358451 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3de7d915-4af2-4d5c-b2d0-e893b49fda1d-config-volume\") pod \"coredns-7c65d6cfc9-v8ssj\" (UID: \"3de7d915-4af2-4d5c-b2d0-e893b49fda1d\") " pod="kube-system/coredns-7c65d6cfc9-v8ssj" Jun 20 19:23:59.368784 kubelet[2909]: I0620 19:23:59.358463 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbwfw\" (UniqueName: \"kubernetes.io/projected/3de7d915-4af2-4d5c-b2d0-e893b49fda1d-kube-api-access-hbwfw\") pod \"coredns-7c65d6cfc9-v8ssj\" (UID: \"3de7d915-4af2-4d5c-b2d0-e893b49fda1d\") " pod="kube-system/coredns-7c65d6cfc9-v8ssj" Jun 20 19:23:59.368784 kubelet[2909]: I0620 19:23:59.358479 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1d842e7e-b13c-4fcb-abc1-9f6f125de686-goldmane-key-pair\") pod \"goldmane-dc7b455cb-kmtz6\" (UID: \"1d842e7e-b13c-4fcb-abc1-9f6f125de686\") " pod="calico-system/goldmane-dc7b455cb-kmtz6" Jun 20 19:23:59.368784 kubelet[2909]: I0620 19:23:59.358495 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fz9z\" (UniqueName: \"kubernetes.io/projected/da2fa4c7-de11-4e7d-93ae-6602a4ac4909-kube-api-access-8fz9z\") pod \"calico-apiserver-54f4b499c8-vwd6m\" (UID: \"da2fa4c7-de11-4e7d-93ae-6602a4ac4909\") " pod="calico-apiserver/calico-apiserver-54f4b499c8-vwd6m" Jun 20 19:23:59.368784 kubelet[2909]: I0620 19:23:59.358509 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/b168f456-8c10-4805-a1f9-c9eafb698c4f-calico-apiserver-certs\") pod \"calico-apiserver-57c64d78d5-dz2fv\" (UID: \"b168f456-8c10-4805-a1f9-c9eafb698c4f\") " pod="calico-apiserver/calico-apiserver-57c64d78d5-dz2fv" Jun 20 19:23:59.393491 kubelet[2909]: I0620 19:23:59.358526 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22ecde37-4cc5-4658-bd30-85fbd7200f9b-tigera-ca-bundle\") pod \"calico-kube-controllers-844cfccc97-b8rbg\" (UID: \"22ecde37-4cc5-4658-bd30-85fbd7200f9b\") " pod="calico-system/calico-kube-controllers-844cfccc97-b8rbg" Jun 20 19:23:59.393491 kubelet[2909]: I0620 19:23:59.358546 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5k7f\" (UniqueName: \"kubernetes.io/projected/22ecde37-4cc5-4658-bd30-85fbd7200f9b-kube-api-access-x5k7f\") pod \"calico-kube-controllers-844cfccc97-b8rbg\" (UID: \"22ecde37-4cc5-4658-bd30-85fbd7200f9b\") " pod="calico-system/calico-kube-controllers-844cfccc97-b8rbg" Jun 20 19:23:59.393491 kubelet[2909]: I0620 19:23:59.358561 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrxk\" (UniqueName: \"kubernetes.io/projected/b168f456-8c10-4805-a1f9-c9eafb698c4f-kube-api-access-mmrxk\") pod \"calico-apiserver-57c64d78d5-dz2fv\" (UID: \"b168f456-8c10-4805-a1f9-c9eafb698c4f\") " pod="calico-apiserver/calico-apiserver-57c64d78d5-dz2fv" Jun 20 19:23:59.393491 kubelet[2909]: I0620 19:23:59.358577 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d842e7e-b13c-4fcb-abc1-9f6f125de686-config\") pod \"goldmane-dc7b455cb-kmtz6\" (UID: \"1d842e7e-b13c-4fcb-abc1-9f6f125de686\") " pod="calico-system/goldmane-dc7b455cb-kmtz6" Jun 20 19:23:59.393491 kubelet[2909]: I0620 19:23:59.358589 
2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-ca-bundle\") pod \"whisker-64684d96c9-xttd6\" (UID: \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\") " pod="calico-system/whisker-64684d96c9-xttd6" Jun 20 19:23:59.393622 kubelet[2909]: I0620 19:23:59.358598 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-calico-apiserver-certs\") pod \"calico-apiserver-57c64d78d5-28z6n\" (UID: \"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8\") " pod="calico-apiserver/calico-apiserver-57c64d78d5-28z6n" Jun 20 19:23:59.393622 kubelet[2909]: I0620 19:23:59.358611 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpntt\" (UniqueName: \"kubernetes.io/projected/21cf1d00-599a-4dd3-b593-a1e94e1246f9-kube-api-access-wpntt\") pod \"whisker-64684d96c9-xttd6\" (UID: \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\") " pod="calico-system/whisker-64684d96c9-xttd6" Jun 20 19:23:59.393622 kubelet[2909]: I0620 19:23:59.358620 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da2fa4c7-de11-4e7d-93ae-6602a4ac4909-calico-apiserver-certs\") pod \"calico-apiserver-54f4b499c8-vwd6m\" (UID: \"da2fa4c7-de11-4e7d-93ae-6602a4ac4909\") " pod="calico-apiserver/calico-apiserver-54f4b499c8-vwd6m" Jun 20 19:23:59.393622 kubelet[2909]: I0620 19:23:59.358628 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxmn\" (UniqueName: \"kubernetes.io/projected/1d842e7e-b13c-4fcb-abc1-9f6f125de686-kube-api-access-vxxmn\") pod \"goldmane-dc7b455cb-kmtz6\" (UID: \"1d842e7e-b13c-4fcb-abc1-9f6f125de686\") 
" pod="calico-system/goldmane-dc7b455cb-kmtz6" Jun 20 19:23:59.724362 containerd[1619]: time="2025-06-20T19:23:59.724184676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-28z6n,Uid:2e0b2d2c-4387-4f0a-acd2-67ee85d925c8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:23:59.724786 containerd[1619]: time="2025-06-20T19:23:59.724760053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-dz2fv,Uid:b168f456-8c10-4805-a1f9-c9eafb698c4f,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:23:59.726074 containerd[1619]: time="2025-06-20T19:23:59.725057591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v8ssj,Uid:3de7d915-4af2-4d5c-b2d0-e893b49fda1d,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:59.726628 containerd[1619]: time="2025-06-20T19:23:59.725925003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844cfccc97-b8rbg,Uid:22ecde37-4cc5-4658-bd30-85fbd7200f9b,Namespace:calico-system,Attempt:0,}" Jun 20 19:23:59.726628 containerd[1619]: time="2025-06-20T19:23:59.725952969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-kmtz6,Uid:1d842e7e-b13c-4fcb-abc1-9f6f125de686,Namespace:calico-system,Attempt:0,}" Jun 20 19:23:59.726628 containerd[1619]: time="2025-06-20T19:23:59.726482848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64684d96c9-xttd6,Uid:21cf1d00-599a-4dd3-b593-a1e94e1246f9,Namespace:calico-system,Attempt:0,}" Jun 20 19:23:59.726628 containerd[1619]: time="2025-06-20T19:23:59.726453045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f4b499c8-vwd6m,Uid:da2fa4c7-de11-4e7d-93ae-6602a4ac4909,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:23:59.726628 containerd[1619]: time="2025-06-20T19:23:59.726573658Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6tz6l,Uid:988cdc8c-2bae-47e2-bbdb-4d40efabde59,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:59.729852 systemd[1]: Created slice kubepods-besteffort-pod7c012b14_211b_4b5d_8a87_e4c6c0b434f3.slice - libcontainer container kubepods-besteffort-pod7c012b14_211b_4b5d_8a87_e4c6c0b434f3.slice. Jun 20 19:23:59.778085 containerd[1619]: time="2025-06-20T19:23:59.732633112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbzl5,Uid:7c012b14-211b-4b5d-8a87-e4c6c0b434f3,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:00.948781 containerd[1619]: time="2025-06-20T19:24:00.948740938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:24:01.990037 containerd[1619]: time="2025-06-20T19:24:01.989946095Z" level=error msg="Failed to destroy network for sandbox \"3f2698f2588b4cd355518e8e9bf269b44ec8f8ed433b604fa5ffff125318e36d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:01.992841 systemd[1]: run-netns-cni\x2d4d80468b\x2db7a1\x2da1da\x2d5c77\x2d521c1094393a.mount: Deactivated successfully. Jun 20 19:24:01.997061 containerd[1619]: time="2025-06-20T19:24:01.997029375Z" level=error msg="Failed to destroy network for sandbox \"8fe86c7b594852080b94659b06aacbf457f51da9b8affc4686936bda7d41f6d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:01.999193 systemd[1]: run-netns-cni\x2d1b60aebc\x2dab85\x2d0585\x2d7e71\x2d031a3dc2865a.mount: Deactivated successfully. 
Jun 20 19:24:01.999800 containerd[1619]: time="2025-06-20T19:24:01.999776469Z" level=error msg="Failed to destroy network for sandbox \"6a730b83775e33ac9f07f445a49a78cc603aa06db6270ff1b07b016598a6fc30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.002261 systemd[1]: run-netns-cni\x2d829b3a44\x2d7ccb\x2de9cc\x2d8b0d\x2d04e260d21e37.mount: Deactivated successfully. Jun 20 19:24:02.004222 systemd[1]: run-netns-cni\x2de7d8672a\x2d5d88\x2d1a06\x2d8231\x2dfdaa09b3d106.mount: Deactivated successfully. Jun 20 19:24:02.010526 containerd[1619]: time="2025-06-20T19:24:02.002912703Z" level=error msg="Failed to destroy network for sandbox \"c9f5e2ac7f50efbf81470823f463d8f9e765865564cc8963d811245b00b35eee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.010526 containerd[1619]: time="2025-06-20T19:24:02.005728587Z" level=error msg="Failed to destroy network for sandbox \"94a2c3626e6e7bf3d77ac0b47ebabfbc069008b11d5c930c23b706e9f213e511\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.010526 containerd[1619]: time="2025-06-20T19:24:02.007342188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-dz2fv,Uid:b168f456-8c10-4805-a1f9-c9eafb698c4f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2698f2588b4cd355518e8e9bf269b44ec8f8ed433b604fa5ffff125318e36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jun 20 19:24:02.010526 containerd[1619]: time="2025-06-20T19:24:02.009009120Z" level=error msg="Failed to destroy network for sandbox \"28fa29f82251263c973e1a0667e66cbeff8b2f41debdb0497bac1f8fb98f085c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.010526 containerd[1619]: time="2025-06-20T19:24:02.010058200Z" level=error msg="Failed to destroy network for sandbox \"5020f63f487d6a4e0ad2bea941bc5094881628574e2ebd66678d16a0ee43bd9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.007996 systemd[1]: run-netns-cni\x2dd0870d70\x2d1361\x2ddad1\x2dc809\x2d2a1bdaac0b24.mount: Deactivated successfully. Jun 20 19:24:02.022746 kubelet[2909]: E0620 19:24:02.014857 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2698f2588b4cd355518e8e9bf269b44ec8f8ed433b604fa5ffff125318e36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.022746 kubelet[2909]: E0620 19:24:02.014936 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2698f2588b4cd355518e8e9bf269b44ec8f8ed433b604fa5ffff125318e36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c64d78d5-dz2fv" Jun 20 19:24:02.022746 kubelet[2909]: E0620 19:24:02.014950 2909 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f2698f2588b4cd355518e8e9bf269b44ec8f8ed433b604fa5ffff125318e36d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c64d78d5-dz2fv" Jun 20 19:24:02.022959 containerd[1619]: time="2025-06-20T19:24:02.010124299Z" level=error msg="Failed to destroy network for sandbox \"f5459691b9d766b5114111448eca029398ef501fe482fb6c074b307a2990e6a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.022959 containerd[1619]: time="2025-06-20T19:24:02.010802579Z" level=error msg="Failed to destroy network for sandbox \"e3864bf8791d6bc69086897a5756d85fbd0219590abf7bd0685acba30a051f69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.022959 containerd[1619]: time="2025-06-20T19:24:02.022485114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v8ssj,Uid:3de7d915-4af2-4d5c-b2d0-e893b49fda1d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe86c7b594852080b94659b06aacbf457f51da9b8affc4686936bda7d41f6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.023034 kubelet[2909]: E0620 19:24:02.015030 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-57c64d78d5-dz2fv_calico-apiserver(b168f456-8c10-4805-a1f9-c9eafb698c4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c64d78d5-dz2fv_calico-apiserver(b168f456-8c10-4805-a1f9-c9eafb698c4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f2698f2588b4cd355518e8e9bf269b44ec8f8ed433b604fa5ffff125318e36d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57c64d78d5-dz2fv" podUID="b168f456-8c10-4805-a1f9-c9eafb698c4f" Jun 20 19:24:02.023034 kubelet[2909]: E0620 19:24:02.022741 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe86c7b594852080b94659b06aacbf457f51da9b8affc4686936bda7d41f6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.023034 kubelet[2909]: E0620 19:24:02.022771 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe86c7b594852080b94659b06aacbf457f51da9b8affc4686936bda7d41f6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-v8ssj" Jun 20 19:24:02.023100 kubelet[2909]: E0620 19:24:02.022783 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fe86c7b594852080b94659b06aacbf457f51da9b8affc4686936bda7d41f6d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-v8ssj" Jun 20 19:24:02.023100 kubelet[2909]: E0620 19:24:02.022829 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-v8ssj_kube-system(3de7d915-4af2-4d5c-b2d0-e893b49fda1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-v8ssj_kube-system(3de7d915-4af2-4d5c-b2d0-e893b49fda1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fe86c7b594852080b94659b06aacbf457f51da9b8affc4686936bda7d41f6d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-v8ssj" podUID="3de7d915-4af2-4d5c-b2d0-e893b49fda1d" Jun 20 19:24:02.039720 containerd[1619]: time="2025-06-20T19:24:02.039690524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbzl5,Uid:7c012b14-211b-4b5d-8a87-e4c6c0b434f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a730b83775e33ac9f07f445a49a78cc603aa06db6270ff1b07b016598a6fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.039916 kubelet[2909]: E0620 19:24:02.039810 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a730b83775e33ac9f07f445a49a78cc603aa06db6270ff1b07b016598a6fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.039916 kubelet[2909]: E0620 19:24:02.039857 2909 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a730b83775e33ac9f07f445a49a78cc603aa06db6270ff1b07b016598a6fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbzl5" Jun 20 19:24:02.039916 kubelet[2909]: E0620 19:24:02.039871 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a730b83775e33ac9f07f445a49a78cc603aa06db6270ff1b07b016598a6fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbzl5" Jun 20 19:24:02.039986 kubelet[2909]: E0620 19:24:02.039917 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zbzl5_calico-system(7c012b14-211b-4b5d-8a87-e4c6c0b434f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zbzl5_calico-system(7c012b14-211b-4b5d-8a87-e4c6c0b434f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a730b83775e33ac9f07f445a49a78cc603aa06db6270ff1b07b016598a6fc30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zbzl5" podUID="7c012b14-211b-4b5d-8a87-e4c6c0b434f3" Jun 20 19:24:02.055920 containerd[1619]: time="2025-06-20T19:24:02.055832653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64684d96c9-xttd6,Uid:21cf1d00-599a-4dd3-b593-a1e94e1246f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"c9f5e2ac7f50efbf81470823f463d8f9e765865564cc8963d811245b00b35eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.056042 kubelet[2909]: E0620 19:24:02.056010 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9f5e2ac7f50efbf81470823f463d8f9e765865564cc8963d811245b00b35eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.056085 kubelet[2909]: E0620 19:24:02.056042 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9f5e2ac7f50efbf81470823f463d8f9e765865564cc8963d811245b00b35eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64684d96c9-xttd6" Jun 20 19:24:02.056085 kubelet[2909]: E0620 19:24:02.056056 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9f5e2ac7f50efbf81470823f463d8f9e765865564cc8963d811245b00b35eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64684d96c9-xttd6" Jun 20 19:24:02.056183 kubelet[2909]: E0620 19:24:02.056083 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64684d96c9-xttd6_calico-system(21cf1d00-599a-4dd3-b593-a1e94e1246f9)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"whisker-64684d96c9-xttd6_calico-system(21cf1d00-599a-4dd3-b593-a1e94e1246f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9f5e2ac7f50efbf81470823f463d8f9e765865564cc8963d811245b00b35eee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64684d96c9-xttd6" podUID="21cf1d00-599a-4dd3-b593-a1e94e1246f9" Jun 20 19:24:02.071021 containerd[1619]: time="2025-06-20T19:24:02.070983445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f4b499c8-vwd6m,Uid:da2fa4c7-de11-4e7d-93ae-6602a4ac4909,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a2c3626e6e7bf3d77ac0b47ebabfbc069008b11d5c930c23b706e9f213e511\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.071449 kubelet[2909]: E0620 19:24:02.071416 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a2c3626e6e7bf3d77ac0b47ebabfbc069008b11d5c930c23b706e9f213e511\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.071518 kubelet[2909]: E0620 19:24:02.071458 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a2c3626e6e7bf3d77ac0b47ebabfbc069008b11d5c930c23b706e9f213e511\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f4b499c8-vwd6m" Jun 20 19:24:02.071518 kubelet[2909]: E0620 19:24:02.071473 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a2c3626e6e7bf3d77ac0b47ebabfbc069008b11d5c930c23b706e9f213e511\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f4b499c8-vwd6m" Jun 20 19:24:02.071518 kubelet[2909]: E0620 19:24:02.071505 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f4b499c8-vwd6m_calico-apiserver(da2fa4c7-de11-4e7d-93ae-6602a4ac4909)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f4b499c8-vwd6m_calico-apiserver(da2fa4c7-de11-4e7d-93ae-6602a4ac4909)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94a2c3626e6e7bf3d77ac0b47ebabfbc069008b11d5c930c23b706e9f213e511\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f4b499c8-vwd6m" podUID="da2fa4c7-de11-4e7d-93ae-6602a4ac4909" Jun 20 19:24:02.089415 containerd[1619]: time="2025-06-20T19:24:02.089302683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-kmtz6,Uid:1d842e7e-b13c-4fcb-abc1-9f6f125de686,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28fa29f82251263c973e1a0667e66cbeff8b2f41debdb0497bac1f8fb98f085c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jun 20 19:24:02.089587 kubelet[2909]: E0620 19:24:02.089521 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28fa29f82251263c973e1a0667e66cbeff8b2f41debdb0497bac1f8fb98f085c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.089664 kubelet[2909]: E0620 19:24:02.089592 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28fa29f82251263c973e1a0667e66cbeff8b2f41debdb0497bac1f8fb98f085c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-kmtz6" Jun 20 19:24:02.089664 kubelet[2909]: E0620 19:24:02.089620 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28fa29f82251263c973e1a0667e66cbeff8b2f41debdb0497bac1f8fb98f085c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-dc7b455cb-kmtz6" Jun 20 19:24:02.089749 kubelet[2909]: E0620 19:24:02.089677 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-dc7b455cb-kmtz6_calico-system(1d842e7e-b13c-4fcb-abc1-9f6f125de686)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-dc7b455cb-kmtz6_calico-system(1d842e7e-b13c-4fcb-abc1-9f6f125de686)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28fa29f82251263c973e1a0667e66cbeff8b2f41debdb0497bac1f8fb98f085c\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-dc7b455cb-kmtz6" podUID="1d842e7e-b13c-4fcb-abc1-9f6f125de686" Jun 20 19:24:02.111260 containerd[1619]: time="2025-06-20T19:24:02.111152404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844cfccc97-b8rbg,Uid:22ecde37-4cc5-4658-bd30-85fbd7200f9b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5020f63f487d6a4e0ad2bea941bc5094881628574e2ebd66678d16a0ee43bd9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.111427 kubelet[2909]: E0620 19:24:02.111393 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5020f63f487d6a4e0ad2bea941bc5094881628574e2ebd66678d16a0ee43bd9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.111482 kubelet[2909]: E0620 19:24:02.111433 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5020f63f487d6a4e0ad2bea941bc5094881628574e2ebd66678d16a0ee43bd9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-844cfccc97-b8rbg" Jun 20 19:24:02.111482 kubelet[2909]: E0620 19:24:02.111453 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5020f63f487d6a4e0ad2bea941bc5094881628574e2ebd66678d16a0ee43bd9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-844cfccc97-b8rbg" Jun 20 19:24:02.111535 kubelet[2909]: E0620 19:24:02.111516 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-844cfccc97-b8rbg_calico-system(22ecde37-4cc5-4658-bd30-85fbd7200f9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-844cfccc97-b8rbg_calico-system(22ecde37-4cc5-4658-bd30-85fbd7200f9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5020f63f487d6a4e0ad2bea941bc5094881628574e2ebd66678d16a0ee43bd9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-844cfccc97-b8rbg" podUID="22ecde37-4cc5-4658-bd30-85fbd7200f9b" Jun 20 19:24:02.124056 containerd[1619]: time="2025-06-20T19:24:02.123905345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-28z6n,Uid:2e0b2d2c-4387-4f0a-acd2-67ee85d925c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5459691b9d766b5114111448eca029398ef501fe482fb6c074b307a2990e6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.124202 kubelet[2909]: E0620 19:24:02.124174 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f5459691b9d766b5114111448eca029398ef501fe482fb6c074b307a2990e6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.124245 kubelet[2909]: E0620 19:24:02.124210 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5459691b9d766b5114111448eca029398ef501fe482fb6c074b307a2990e6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c64d78d5-28z6n" Jun 20 19:24:02.124272 kubelet[2909]: E0620 19:24:02.124246 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5459691b9d766b5114111448eca029398ef501fe482fb6c074b307a2990e6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57c64d78d5-28z6n" Jun 20 19:24:02.124302 kubelet[2909]: E0620 19:24:02.124285 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57c64d78d5-28z6n_calico-apiserver(2e0b2d2c-4387-4f0a-acd2-67ee85d925c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57c64d78d5-28z6n_calico-apiserver(2e0b2d2c-4387-4f0a-acd2-67ee85d925c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5459691b9d766b5114111448eca029398ef501fe482fb6c074b307a2990e6a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-57c64d78d5-28z6n" podUID="2e0b2d2c-4387-4f0a-acd2-67ee85d925c8" Jun 20 19:24:02.135595 containerd[1619]: time="2025-06-20T19:24:02.135498031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6tz6l,Uid:988cdc8c-2bae-47e2-bbdb-4d40efabde59,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3864bf8791d6bc69086897a5756d85fbd0219590abf7bd0685acba30a051f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.135900 kubelet[2909]: E0620 19:24:02.135841 2909 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3864bf8791d6bc69086897a5756d85fbd0219590abf7bd0685acba30a051f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:02.135900 kubelet[2909]: E0620 19:24:02.135874 2909 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3864bf8791d6bc69086897a5756d85fbd0219590abf7bd0685acba30a051f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6tz6l" Jun 20 19:24:02.136066 kubelet[2909]: E0620 19:24:02.135987 2909 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3864bf8791d6bc69086897a5756d85fbd0219590abf7bd0685acba30a051f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6tz6l" Jun 20 19:24:02.136066 kubelet[2909]: E0620 19:24:02.136027 2909 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6tz6l_kube-system(988cdc8c-2bae-47e2-bbdb-4d40efabde59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6tz6l_kube-system(988cdc8c-2bae-47e2-bbdb-4d40efabde59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3864bf8791d6bc69086897a5756d85fbd0219590abf7bd0685acba30a051f69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6tz6l" podUID="988cdc8c-2bae-47e2-bbdb-4d40efabde59" Jun 20 19:24:02.991952 systemd[1]: run-netns-cni\x2ddea90995\x2dcb2c\x2d4ed8\x2d4077\x2d4813ea3fbc26.mount: Deactivated successfully. Jun 20 19:24:02.992760 systemd[1]: run-netns-cni\x2dec657f6a\x2d2830\x2daa07\x2d00fa\x2d95f3cdd8b55d.mount: Deactivated successfully. Jun 20 19:24:02.992814 systemd[1]: run-netns-cni\x2d9133515a\x2d41c6\x2dfdbf\x2de65e\x2dd9c5ee9db668.mount: Deactivated successfully. Jun 20 19:24:02.992856 systemd[1]: run-netns-cni\x2d1639ec57\x2de8ca\x2df02a\x2d1781\x2dde13bbff72a0.mount: Deactivated successfully. Jun 20 19:24:06.392005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491127566.mount: Deactivated successfully. 
Jun 20 19:24:06.721447 containerd[1619]: time="2025-06-20T19:24:06.721374918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:06.725036 containerd[1619]: time="2025-06-20T19:24:06.673138117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:24:06.748668 containerd[1619]: time="2025-06-20T19:24:06.748054302Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:06.770981 containerd[1619]: time="2025-06-20T19:24:06.770951110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:06.771957 containerd[1619]: time="2025-06-20T19:24:06.771942056Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 5.822603077s" Jun 20 19:24:06.772012 containerd[1619]: time="2025-06-20T19:24:06.772004488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:24:06.800285 containerd[1619]: time="2025-06-20T19:24:06.800255452Z" level=info msg="CreateContainer within sandbox \"4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:24:06.845767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904943643.mount: 
Deactivated successfully. Jun 20 19:24:06.846005 containerd[1619]: time="2025-06-20T19:24:06.845784888Z" level=info msg="Container e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:06.893722 containerd[1619]: time="2025-06-20T19:24:06.893683784Z" level=info msg="CreateContainer within sandbox \"4f811c73574d96e8eaaecfbc6707b78bfb34e9dfc0bbc3bfa34d835bae67ab83\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\"" Jun 20 19:24:06.894434 containerd[1619]: time="2025-06-20T19:24:06.894416655Z" level=info msg="StartContainer for \"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\"" Jun 20 19:24:06.904645 containerd[1619]: time="2025-06-20T19:24:06.904615531Z" level=info msg="connecting to shim e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0" address="unix:///run/containerd/s/daa9bc7dca13463268fb4f704d16c9b832126f1a9df3f64caba0e2e0f362c690" protocol=ttrpc version=3 Jun 20 19:24:07.001791 systemd[1]: Started cri-containerd-e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0.scope - libcontainer container e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0. Jun 20 19:24:07.078060 containerd[1619]: time="2025-06-20T19:24:07.078034834Z" level=info msg="StartContainer for \"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\" returns successfully" Jun 20 19:24:07.891189 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:24:07.892876 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jun 20 19:24:08.333865 containerd[1619]: time="2025-06-20T19:24:08.333795759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\" id:\"40046cc646254f1ec0e47b13aecd3a74c72ab9957cc5455cf6e9b2d69f19e3e8\" pid:4032 exit_status:1 exited_at:{seconds:1750447448 nanos:325582493}" Jun 20 19:24:09.116331 containerd[1619]: time="2025-06-20T19:24:09.116278182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\" id:\"1286aaeac0ab1012b1b7403be7cceab277bea478ea54a1d8becf038c185406af\" pid:4072 exit_status:1 exited_at:{seconds:1750447449 nanos:116108486}" Jun 20 19:24:09.187617 kubelet[2909]: I0620 19:24:09.187579 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rk7mh" podStartSLOduration=3.633524316 podStartE2EDuration="22.187563996s" podCreationTimestamp="2025-06-20 19:23:47 +0000 UTC" firstStartedPulling="2025-06-20 19:23:48.218486469 +0000 UTC m=+14.646725140" lastFinishedPulling="2025-06-20 19:24:06.772526153 +0000 UTC m=+33.200764820" observedRunningTime="2025-06-20 19:24:07.97318531 +0000 UTC m=+34.401423985" watchObservedRunningTime="2025-06-20 19:24:09.187563996 +0000 UTC m=+35.615802663" Jun 20 19:24:09.369892 kubelet[2909]: I0620 19:24:09.369734 2909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-ca-bundle\") pod \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\" (UID: \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\") " Jun 20 19:24:09.369892 kubelet[2909]: I0620 19:24:09.369775 2909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpntt\" (UniqueName: \"kubernetes.io/projected/21cf1d00-599a-4dd3-b593-a1e94e1246f9-kube-api-access-wpntt\") pod \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\" (UID: 
\"21cf1d00-599a-4dd3-b593-a1e94e1246f9\") " Jun 20 19:24:09.369892 kubelet[2909]: I0620 19:24:09.369791 2909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-backend-key-pair\") pod \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\" (UID: \"21cf1d00-599a-4dd3-b593-a1e94e1246f9\") " Jun 20 19:24:09.374544 kubelet[2909]: I0620 19:24:09.374455 2909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "21cf1d00-599a-4dd3-b593-a1e94e1246f9" (UID: "21cf1d00-599a-4dd3-b593-a1e94e1246f9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:24:09.387721 systemd[1]: var-lib-kubelet-pods-21cf1d00\x2d599a\x2d4dd3\x2db593\x2da1e94e1246f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwpntt.mount: Deactivated successfully. Jun 20 19:24:09.387978 kubelet[2909]: I0620 19:24:09.387775 2909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "21cf1d00-599a-4dd3-b593-a1e94e1246f9" (UID: "21cf1d00-599a-4dd3-b593-a1e94e1246f9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 20 19:24:09.387784 systemd[1]: var-lib-kubelet-pods-21cf1d00\x2d599a\x2d4dd3\x2db593\x2da1e94e1246f9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jun 20 19:24:09.388443 kubelet[2909]: I0620 19:24:09.388426 2909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cf1d00-599a-4dd3-b593-a1e94e1246f9-kube-api-access-wpntt" (OuterVolumeSpecName: "kube-api-access-wpntt") pod "21cf1d00-599a-4dd3-b593-a1e94e1246f9" (UID: "21cf1d00-599a-4dd3-b593-a1e94e1246f9"). InnerVolumeSpecName "kube-api-access-wpntt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:24:09.470896 kubelet[2909]: I0620 19:24:09.470850 2909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpntt\" (UniqueName: \"kubernetes.io/projected/21cf1d00-599a-4dd3-b593-a1e94e1246f9-kube-api-access-wpntt\") on node \"localhost\" DevicePath \"\"" Jun 20 19:24:09.470896 kubelet[2909]: I0620 19:24:09.470873 2909 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jun 20 19:24:09.470896 kubelet[2909]: I0620 19:24:09.470880 2909 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21cf1d00-599a-4dd3-b593-a1e94e1246f9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 20 19:24:09.671205 systemd[1]: Removed slice kubepods-besteffort-pod21cf1d00_599a_4dd3_b593_a1e94e1246f9.slice - libcontainer container kubepods-besteffort-pod21cf1d00_599a_4dd3_b593_a1e94e1246f9.slice. Jun 20 19:24:10.114463 systemd[1]: Created slice kubepods-besteffort-pod252ea4ac_ad39_400a_837e_7c47cdee30f6.slice - libcontainer container kubepods-besteffort-pod252ea4ac_ad39_400a_837e_7c47cdee30f6.slice. 
Jun 20 19:24:10.174986 kubelet[2909]: I0620 19:24:10.174802 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7vvn\" (UniqueName: \"kubernetes.io/projected/252ea4ac-ad39-400a-837e-7c47cdee30f6-kube-api-access-n7vvn\") pod \"whisker-77547c856d-svbh4\" (UID: \"252ea4ac-ad39-400a-837e-7c47cdee30f6\") " pod="calico-system/whisker-77547c856d-svbh4" Jun 20 19:24:10.174986 kubelet[2909]: I0620 19:24:10.174833 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/252ea4ac-ad39-400a-837e-7c47cdee30f6-whisker-ca-bundle\") pod \"whisker-77547c856d-svbh4\" (UID: \"252ea4ac-ad39-400a-837e-7c47cdee30f6\") " pod="calico-system/whisker-77547c856d-svbh4" Jun 20 19:24:10.174986 kubelet[2909]: I0620 19:24:10.174848 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/252ea4ac-ad39-400a-837e-7c47cdee30f6-whisker-backend-key-pair\") pod \"whisker-77547c856d-svbh4\" (UID: \"252ea4ac-ad39-400a-837e-7c47cdee30f6\") " pod="calico-system/whisker-77547c856d-svbh4" Jun 20 19:24:10.419603 containerd[1619]: time="2025-06-20T19:24:10.419535305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77547c856d-svbh4,Uid:252ea4ac-ad39-400a-837e-7c47cdee30f6,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:10.431424 systemd-networkd[1533]: vxlan.calico: Link UP Jun 20 19:24:10.431599 systemd-networkd[1533]: vxlan.calico: Gained carrier Jun 20 19:24:10.828330 systemd-networkd[1533]: cali899a7e21829: Link UP Jun 20 19:24:10.828803 systemd-networkd[1533]: cali899a7e21829: Gained carrier Jun 20 19:24:10.838779 containerd[1619]: 2025-06-20 19:24:10.508 [INFO][4225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--77547c856d--svbh4-eth0 whisker-77547c856d- calico-system 252ea4ac-ad39-400a-837e-7c47cdee30f6 889 0 2025-06-20 19:24:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77547c856d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77547c856d-svbh4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali899a7e21829 [] [] }} ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-" Jun 20 19:24:10.838779 containerd[1619]: 2025-06-20 19:24:10.508 [INFO][4225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-eth0" Jun 20 19:24:10.838779 containerd[1619]: 2025-06-20 19:24:10.780 [INFO][4262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" HandleID="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Workload="localhost-k8s-whisker--77547c856d--svbh4-eth0" Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.782 [INFO][4262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" HandleID="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Workload="localhost-k8s-whisker--77547c856d--svbh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000602430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77547c856d-svbh4", "timestamp":"2025-06-20 19:24:10.780026539 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.782 [INFO][4262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.783 [INFO][4262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.783 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.799 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" host="localhost" Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.809 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.812 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.813 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.814 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:10.840555 containerd[1619]: 2025-06-20 19:24:10.814 [INFO][4262] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" host="localhost" Jun 20 19:24:10.841453 containerd[1619]: 2025-06-20 19:24:10.815 [INFO][4262] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc Jun 20 19:24:10.841453 containerd[1619]: 2025-06-20 19:24:10.817 [INFO][4262] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" host="localhost" Jun 20 19:24:10.841453 containerd[1619]: 2025-06-20 19:24:10.820 [INFO][4262] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" host="localhost" Jun 20 19:24:10.841453 containerd[1619]: 2025-06-20 19:24:10.820 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" host="localhost" Jun 20 19:24:10.841453 containerd[1619]: 2025-06-20 19:24:10.820 [INFO][4262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:10.841453 containerd[1619]: 2025-06-20 19:24:10.820 [INFO][4262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" HandleID="k8s-pod-network.3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Workload="localhost-k8s-whisker--77547c856d--svbh4-eth0" Jun 20 19:24:10.841620 containerd[1619]: 2025-06-20 19:24:10.822 [INFO][4225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77547c856d--svbh4-eth0", GenerateName:"whisker-77547c856d-", Namespace:"calico-system", SelfLink:"", UID:"252ea4ac-ad39-400a-837e-7c47cdee30f6", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 24, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77547c856d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77547c856d-svbh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali899a7e21829", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:10.841620 containerd[1619]: 2025-06-20 19:24:10.822 [INFO][4225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-eth0" Jun 20 19:24:10.841692 containerd[1619]: 2025-06-20 19:24:10.822 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali899a7e21829 ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-eth0" Jun 20 19:24:10.841692 containerd[1619]: 2025-06-20 19:24:10.829 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-eth0" Jun 20 19:24:10.841726 containerd[1619]: 2025-06-20 19:24:10.829 [INFO][4225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77547c856d--svbh4-eth0", GenerateName:"whisker-77547c856d-", Namespace:"calico-system", SelfLink:"", UID:"252ea4ac-ad39-400a-837e-7c47cdee30f6", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 24, 9, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77547c856d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc", Pod:"whisker-77547c856d-svbh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali899a7e21829", MAC:"2e:d3:e0:77:79:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:10.841802 containerd[1619]: 2025-06-20 19:24:10.836 [INFO][4225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" Namespace="calico-system" Pod="whisker-77547c856d-svbh4" WorkloadEndpoint="localhost-k8s-whisker--77547c856d--svbh4-eth0" Jun 20 19:24:10.930616 containerd[1619]: time="2025-06-20T19:24:10.930567095Z" level=info msg="connecting to shim 3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc" address="unix:///run/containerd/s/82e27a8b4f3b410cc773dbf4086a1cc1364bd6c66c79a6313f6f68abd4dbaf71" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:10.984788 systemd[1]: Started cri-containerd-3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc.scope - libcontainer container 3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc. 
Jun 20 19:24:10.994425 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:11.029019 containerd[1619]: time="2025-06-20T19:24:11.028931466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77547c856d-svbh4,Uid:252ea4ac-ad39-400a-837e-7c47cdee30f6,Namespace:calico-system,Attempt:0,} returns sandbox id \"3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc\"" Jun 20 19:24:11.077300 containerd[1619]: time="2025-06-20T19:24:11.076265512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:24:11.669234 kubelet[2909]: I0620 19:24:11.668949 2909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21cf1d00-599a-4dd3-b593-a1e94e1246f9" path="/var/lib/kubelet/pods/21cf1d00-599a-4dd3-b593-a1e94e1246f9/volumes" Jun 20 19:24:11.854774 systemd-networkd[1533]: vxlan.calico: Gained IPv6LL Jun 20 19:24:12.110760 systemd-networkd[1533]: cali899a7e21829: Gained IPv6LL Jun 20 19:24:12.397971 containerd[1619]: time="2025-06-20T19:24:12.397808718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:12.398434 containerd[1619]: time="2025-06-20T19:24:12.398385838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:24:12.398714 containerd[1619]: time="2025-06-20T19:24:12.398696981Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:12.400083 containerd[1619]: time="2025-06-20T19:24:12.400064743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:12.400498 
containerd[1619]: time="2025-06-20T19:24:12.400478508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.324188279s" Jun 20 19:24:12.400552 containerd[1619]: time="2025-06-20T19:24:12.400497276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:24:12.406212 containerd[1619]: time="2025-06-20T19:24:12.406166461Z" level=info msg="CreateContainer within sandbox \"3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:24:12.409491 containerd[1619]: time="2025-06-20T19:24:12.409434836Z" level=info msg="Container d1d2086b1924c6221e834a736108748ed97bb09a010cef07b8269a3ed07851bf: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:12.413507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585814158.mount: Deactivated successfully. 
Jun 20 19:24:12.415081 containerd[1619]: time="2025-06-20T19:24:12.415065731Z" level=info msg="CreateContainer within sandbox \"3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d1d2086b1924c6221e834a736108748ed97bb09a010cef07b8269a3ed07851bf\"" Jun 20 19:24:12.415478 containerd[1619]: time="2025-06-20T19:24:12.415465651Z" level=info msg="StartContainer for \"d1d2086b1924c6221e834a736108748ed97bb09a010cef07b8269a3ed07851bf\"" Jun 20 19:24:12.417019 containerd[1619]: time="2025-06-20T19:24:12.417004604Z" level=info msg="connecting to shim d1d2086b1924c6221e834a736108748ed97bb09a010cef07b8269a3ed07851bf" address="unix:///run/containerd/s/82e27a8b4f3b410cc773dbf4086a1cc1364bd6c66c79a6313f6f68abd4dbaf71" protocol=ttrpc version=3 Jun 20 19:24:12.433743 systemd[1]: Started cri-containerd-d1d2086b1924c6221e834a736108748ed97bb09a010cef07b8269a3ed07851bf.scope - libcontainer container d1d2086b1924c6221e834a736108748ed97bb09a010cef07b8269a3ed07851bf. 
Jun 20 19:24:12.479711 containerd[1619]: time="2025-06-20T19:24:12.479673881Z" level=info msg="StartContainer for \"d1d2086b1924c6221e834a736108748ed97bb09a010cef07b8269a3ed07851bf\" returns successfully" Jun 20 19:24:12.480810 containerd[1619]: time="2025-06-20T19:24:12.480771678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:24:12.667733 containerd[1619]: time="2025-06-20T19:24:12.667641924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-kmtz6,Uid:1d842e7e-b13c-4fcb-abc1-9f6f125de686,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:12.761285 systemd-networkd[1533]: calib6de04ad79a: Link UP Jun 20 19:24:12.761833 systemd-networkd[1533]: calib6de04ad79a: Gained carrier Jun 20 19:24:12.775563 containerd[1619]: 2025-06-20 19:24:12.708 [INFO][4398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0 goldmane-dc7b455cb- calico-system 1d842e7e-b13c-4fcb-abc1-9f6f125de686 810 0 2025-06-20 19:23:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:dc7b455cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-dc7b455cb-kmtz6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib6de04ad79a [] [] }} ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-" Jun 20 19:24:12.775563 containerd[1619]: 2025-06-20 19:24:12.708 [INFO][4398] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" Jun 20 19:24:12.775563 
containerd[1619]: 2025-06-20 19:24:12.732 [INFO][4410] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" HandleID="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Workload="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.732 [INFO][4410] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" HandleID="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Workload="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f920), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-dc7b455cb-kmtz6", "timestamp":"2025-06-20 19:24:12.732422868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.732 [INFO][4410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.732 [INFO][4410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.732 [INFO][4410] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.738 [INFO][4410] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" host="localhost" Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.740 [INFO][4410] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.743 [INFO][4410] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.744 [INFO][4410] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.745 [INFO][4410] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:12.775805 containerd[1619]: 2025-06-20 19:24:12.745 [INFO][4410] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" host="localhost" Jun 20 19:24:12.789765 containerd[1619]: 2025-06-20 19:24:12.745 [INFO][4410] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844 Jun 20 19:24:12.789765 containerd[1619]: 2025-06-20 19:24:12.752 [INFO][4410] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" host="localhost" Jun 20 19:24:12.789765 containerd[1619]: 2025-06-20 19:24:12.757 [INFO][4410] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" host="localhost" Jun 20 19:24:12.789765 containerd[1619]: 2025-06-20 19:24:12.757 [INFO][4410] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" host="localhost" Jun 20 19:24:12.789765 containerd[1619]: 2025-06-20 19:24:12.757 [INFO][4410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:12.789765 containerd[1619]: 2025-06-20 19:24:12.757 [INFO][4410] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" HandleID="k8s-pod-network.3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Workload="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" Jun 20 19:24:12.789859 containerd[1619]: 2025-06-20 19:24:12.759 [INFO][4398] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"1d842e7e-b13c-4fcb-abc1-9f6f125de686", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-dc7b455cb-kmtz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6de04ad79a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:12.789859 containerd[1619]: 2025-06-20 19:24:12.759 [INFO][4398] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" Jun 20 19:24:12.789922 containerd[1619]: 2025-06-20 19:24:12.759 [INFO][4398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6de04ad79a ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" Jun 20 19:24:12.789922 containerd[1619]: 2025-06-20 19:24:12.762 [INFO][4398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" Jun 20 19:24:12.796571 containerd[1619]: 2025-06-20 19:24:12.762 [INFO][4398] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" 
WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0", GenerateName:"goldmane-dc7b455cb-", Namespace:"calico-system", SelfLink:"", UID:"1d842e7e-b13c-4fcb-abc1-9f6f125de686", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"dc7b455cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844", Pod:"goldmane-dc7b455cb-kmtz6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6de04ad79a", MAC:"26:b1:b9:71:fe:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:12.796692 containerd[1619]: 2025-06-20 19:24:12.772 [INFO][4398] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" Namespace="calico-system" Pod="goldmane-dc7b455cb-kmtz6" WorkloadEndpoint="localhost-k8s-goldmane--dc7b455cb--kmtz6-eth0" Jun 20 19:24:12.853411 containerd[1619]: time="2025-06-20T19:24:12.853386450Z" level=info msg="connecting to shim 
3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844" address="unix:///run/containerd/s/83546bf54fca28ac5d10d2a6e3977fffd446ef10762499e9faf46982cda7b1bb" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:12.870766 systemd[1]: Started cri-containerd-3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844.scope - libcontainer container 3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844. Jun 20 19:24:12.878730 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:12.904559 containerd[1619]: time="2025-06-20T19:24:12.904524084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-dc7b455cb-kmtz6,Uid:1d842e7e-b13c-4fcb-abc1-9f6f125de686,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844\"" Jun 20 19:24:14.350840 systemd-networkd[1533]: calib6de04ad79a: Gained IPv6LL Jun 20 19:24:14.356152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219788988.mount: Deactivated successfully. 
Jun 20 19:24:14.368482 containerd[1619]: time="2025-06-20T19:24:14.368454619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:14.369172 containerd[1619]: time="2025-06-20T19:24:14.369149935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:24:14.369447 containerd[1619]: time="2025-06-20T19:24:14.369430012Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:14.370865 containerd[1619]: time="2025-06-20T19:24:14.370848601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:14.371247 containerd[1619]: time="2025-06-20T19:24:14.371062862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 1.890273382s" Jun 20 19:24:14.371247 containerd[1619]: time="2025-06-20T19:24:14.371113317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:24:14.372069 containerd[1619]: time="2025-06-20T19:24:14.372042883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:24:14.373376 containerd[1619]: time="2025-06-20T19:24:14.373327229Z" level=info msg="CreateContainer within sandbox 
\"3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:24:14.377501 containerd[1619]: time="2025-06-20T19:24:14.377388418Z" level=info msg="Container 0c58fc091e0477cf9a548f77eb5baa16eed703494c461a7c8a6ced1b686da3e3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:14.384821 containerd[1619]: time="2025-06-20T19:24:14.384795438Z" level=info msg="CreateContainer within sandbox \"3aaa9f8dbad0903b50519ef3ba44f03d256b8e91ff374e185e72673327672adc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0c58fc091e0477cf9a548f77eb5baa16eed703494c461a7c8a6ced1b686da3e3\"" Jun 20 19:24:14.385177 containerd[1619]: time="2025-06-20T19:24:14.385167367Z" level=info msg="StartContainer for \"0c58fc091e0477cf9a548f77eb5baa16eed703494c461a7c8a6ced1b686da3e3\"" Jun 20 19:24:14.385960 containerd[1619]: time="2025-06-20T19:24:14.385920456Z" level=info msg="connecting to shim 0c58fc091e0477cf9a548f77eb5baa16eed703494c461a7c8a6ced1b686da3e3" address="unix:///run/containerd/s/82e27a8b4f3b410cc773dbf4086a1cc1364bd6c66c79a6313f6f68abd4dbaf71" protocol=ttrpc version=3 Jun 20 19:24:14.403770 systemd[1]: Started cri-containerd-0c58fc091e0477cf9a548f77eb5baa16eed703494c461a7c8a6ced1b686da3e3.scope - libcontainer container 0c58fc091e0477cf9a548f77eb5baa16eed703494c461a7c8a6ced1b686da3e3. 
Jun 20 19:24:14.442111 containerd[1619]: time="2025-06-20T19:24:14.442076881Z" level=info msg="StartContainer for \"0c58fc091e0477cf9a548f77eb5baa16eed703494c461a7c8a6ced1b686da3e3\" returns successfully" Jun 20 19:24:14.667893 containerd[1619]: time="2025-06-20T19:24:14.667802826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v8ssj,Uid:3de7d915-4af2-4d5c-b2d0-e893b49fda1d,Namespace:kube-system,Attempt:0,}" Jun 20 19:24:14.668244 containerd[1619]: time="2025-06-20T19:24:14.667803022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbzl5,Uid:7c012b14-211b-4b5d-8a87-e4c6c0b434f3,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:14.812931 systemd-networkd[1533]: cali9930bb7fa90: Link UP Jun 20 19:24:14.813850 systemd-networkd[1533]: cali9930bb7fa90: Gained carrier Jun 20 19:24:14.835620 containerd[1619]: 2025-06-20 19:24:14.721 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0 coredns-7c65d6cfc9- kube-system 3de7d915-4af2-4d5c-b2d0-e893b49fda1d 809 0 2025-06-20 19:23:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-v8ssj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9930bb7fa90 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-" Jun 20 19:24:14.835620 containerd[1619]: 2025-06-20 19:24:14.722 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" Jun 20 19:24:14.835620 containerd[1619]: 2025-06-20 19:24:14.746 [INFO][4534] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" HandleID="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Workload="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.746 [INFO][4534] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" HandleID="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Workload="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-v8ssj", "timestamp":"2025-06-20 19:24:14.746286005 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.746 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.746 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.746 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.750 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" host="localhost" Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.757 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.768 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.769 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.771 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:14.835840 containerd[1619]: 2025-06-20 19:24:14.771 [INFO][4534] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" host="localhost" Jun 20 19:24:14.865456 containerd[1619]: 2025-06-20 19:24:14.772 [INFO][4534] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9 Jun 20 19:24:14.865456 containerd[1619]: 2025-06-20 19:24:14.785 [INFO][4534] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" host="localhost" Jun 20 19:24:14.865456 containerd[1619]: 2025-06-20 19:24:14.807 [INFO][4534] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" host="localhost" Jun 20 19:24:14.865456 containerd[1619]: 2025-06-20 19:24:14.807 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" host="localhost" Jun 20 19:24:14.865456 containerd[1619]: 2025-06-20 19:24:14.807 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:14.865456 containerd[1619]: 2025-06-20 19:24:14.807 [INFO][4534] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" HandleID="k8s-pod-network.c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Workload="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" Jun 20 19:24:14.865564 containerd[1619]: 2025-06-20 19:24:14.811 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3de7d915-4af2-4d5c-b2d0-e893b49fda1d", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-v8ssj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9930bb7fa90", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:14.865637 containerd[1619]: 2025-06-20 19:24:14.811 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" Jun 20 19:24:14.865637 containerd[1619]: 2025-06-20 19:24:14.811 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9930bb7fa90 ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" Jun 20 19:24:14.865637 containerd[1619]: 2025-06-20 19:24:14.815 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" Jun 20 19:24:14.865807 containerd[1619]: 2025-06-20 19:24:14.815 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3de7d915-4af2-4d5c-b2d0-e893b49fda1d", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9", Pod:"coredns-7c65d6cfc9-v8ssj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9930bb7fa90", MAC:"f2:a5:db:89:7b:35", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:14.865807 containerd[1619]: 2025-06-20 19:24:14.834 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-v8ssj" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--v8ssj-eth0" Jun 20 19:24:14.888118 systemd-networkd[1533]: cali302c776754a: Link UP Jun 20 19:24:14.888723 systemd-networkd[1533]: cali302c776754a: Gained carrier Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.743 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zbzl5-eth0 csi-node-driver- calico-system 7c012b14-211b-4b5d-8a87-e4c6c0b434f3 697 0 2025-06-20 19:23:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:896496fb5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zbzl5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali302c776754a [] [] }} ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Namespace="calico-system" Pod="csi-node-driver-zbzl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.743 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Namespace="calico-system" Pod="csi-node-driver-zbzl5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-eth0" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.765 [INFO][4543] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" HandleID="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Workload="localhost-k8s-csi--node--driver--zbzl5-eth0" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.765 [INFO][4543] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" HandleID="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Workload="localhost-k8s-csi--node--driver--zbzl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zbzl5", "timestamp":"2025-06-20 19:24:14.765789482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.765 [INFO][4543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.807 [INFO][4543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.807 [INFO][4543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.850 [INFO][4543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.858 [INFO][4543] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.862 [INFO][4543] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.863 [INFO][4543] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.865 [INFO][4543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.865 [INFO][4543] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.865 [INFO][4543] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757 Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.871 [INFO][4543] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.883 [INFO][4543] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.883 [INFO][4543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" host="localhost" Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.883 [INFO][4543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:14.904732 containerd[1619]: 2025-06-20 19:24:14.883 [INFO][4543] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" HandleID="k8s-pod-network.4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Workload="localhost-k8s-csi--node--driver--zbzl5-eth0" Jun 20 19:24:14.917415 containerd[1619]: 2025-06-20 19:24:14.885 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Namespace="calico-system" Pod="csi-node-driver-zbzl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zbzl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c012b14-211b-4b5d-8a87-e4c6c0b434f3", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zbzl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali302c776754a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:14.917415 containerd[1619]: 2025-06-20 19:24:14.885 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Namespace="calico-system" Pod="csi-node-driver-zbzl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-eth0" Jun 20 19:24:14.917415 containerd[1619]: 2025-06-20 19:24:14.885 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali302c776754a ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Namespace="calico-system" Pod="csi-node-driver-zbzl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-eth0" Jun 20 19:24:14.917415 containerd[1619]: 2025-06-20 19:24:14.888 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Namespace="calico-system" Pod="csi-node-driver-zbzl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-eth0" Jun 20 19:24:14.917415 containerd[1619]: 2025-06-20 19:24:14.889 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" 
Namespace="calico-system" Pod="csi-node-driver-zbzl5" WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zbzl5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7c012b14-211b-4b5d-8a87-e4c6c0b434f3", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"896496fb5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757", Pod:"csi-node-driver-zbzl5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali302c776754a", MAC:"96:10:9a:35:af:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:14.917415 containerd[1619]: 2025-06-20 19:24:14.903 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" Namespace="calico-system" Pod="csi-node-driver-zbzl5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zbzl5-eth0" Jun 20 19:24:14.929212 containerd[1619]: time="2025-06-20T19:24:14.929004768Z" level=info msg="connecting to shim 4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757" address="unix:///run/containerd/s/0a3928ad89d71fa2ed4f8ffc9b0546314e2f204211848e4221d6b58eb1b48bce" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:14.931743 containerd[1619]: time="2025-06-20T19:24:14.931718391Z" level=info msg="connecting to shim c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9" address="unix:///run/containerd/s/bf46d5d95c8c2e1bdba610038eb6f84d4250d07c0f3db3fb5556c5ec4143d0fa" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:14.955746 systemd[1]: Started cri-containerd-4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757.scope - libcontainer container 4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757. Jun 20 19:24:14.958473 systemd[1]: Started cri-containerd-c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9.scope - libcontainer container c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9. 
Jun 20 19:24:14.968301 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:14.969860 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:14.978719 containerd[1619]: time="2025-06-20T19:24:14.978661697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbzl5,Uid:7c012b14-211b-4b5d-8a87-e4c6c0b434f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757\"" Jun 20 19:24:15.007993 containerd[1619]: time="2025-06-20T19:24:15.007905985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v8ssj,Uid:3de7d915-4af2-4d5c-b2d0-e893b49fda1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9\"" Jun 20 19:24:15.010449 containerd[1619]: time="2025-06-20T19:24:15.010296640Z" level=info msg="CreateContainer within sandbox \"c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:24:15.030035 kubelet[2909]: I0620 19:24:15.017098 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77547c856d-svbh4" podStartSLOduration=2.690832498 podStartE2EDuration="6.015920371s" podCreationTimestamp="2025-06-20 19:24:09 +0000 UTC" firstStartedPulling="2025-06-20 19:24:11.046649464 +0000 UTC m=+37.474888132" lastFinishedPulling="2025-06-20 19:24:14.371737336 +0000 UTC m=+40.799976005" observedRunningTime="2025-06-20 19:24:15.013481188 +0000 UTC m=+41.441719863" watchObservedRunningTime="2025-06-20 19:24:15.015920371 +0000 UTC m=+41.444159048" Jun 20 19:24:15.033084 containerd[1619]: time="2025-06-20T19:24:15.032985543Z" level=info msg="Container f28f3b59a5e3b925e71c25ae0eab14b5bea1db2dffa3ba79a1ee351dce3d3cdb: CDI devices from 
CRI Config.CDIDevices: []" Jun 20 19:24:15.036381 containerd[1619]: time="2025-06-20T19:24:15.036325098Z" level=info msg="CreateContainer within sandbox \"c01d8b93804be367fb8c780823a38cf6cb20df2a32f4a42a33bc1e6bc6aa3cb9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f28f3b59a5e3b925e71c25ae0eab14b5bea1db2dffa3ba79a1ee351dce3d3cdb\"" Jun 20 19:24:15.036803 containerd[1619]: time="2025-06-20T19:24:15.036753801Z" level=info msg="StartContainer for \"f28f3b59a5e3b925e71c25ae0eab14b5bea1db2dffa3ba79a1ee351dce3d3cdb\"" Jun 20 19:24:15.037543 containerd[1619]: time="2025-06-20T19:24:15.037523737Z" level=info msg="connecting to shim f28f3b59a5e3b925e71c25ae0eab14b5bea1db2dffa3ba79a1ee351dce3d3cdb" address="unix:///run/containerd/s/bf46d5d95c8c2e1bdba610038eb6f84d4250d07c0f3db3fb5556c5ec4143d0fa" protocol=ttrpc version=3 Jun 20 19:24:15.055806 systemd[1]: Started cri-containerd-f28f3b59a5e3b925e71c25ae0eab14b5bea1db2dffa3ba79a1ee351dce3d3cdb.scope - libcontainer container f28f3b59a5e3b925e71c25ae0eab14b5bea1db2dffa3ba79a1ee351dce3d3cdb. 
Jun 20 19:24:15.079638 containerd[1619]: time="2025-06-20T19:24:15.079616349Z" level=info msg="StartContainer for \"f28f3b59a5e3b925e71c25ae0eab14b5bea1db2dffa3ba79a1ee351dce3d3cdb\" returns successfully" Jun 20 19:24:15.673364 containerd[1619]: time="2025-06-20T19:24:15.673330816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f4b499c8-vwd6m,Uid:da2fa4c7-de11-4e7d-93ae-6602a4ac4909,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:15.690008 containerd[1619]: time="2025-06-20T19:24:15.689724352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844cfccc97-b8rbg,Uid:22ecde37-4cc5-4658-bd30-85fbd7200f9b,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:15.691693 containerd[1619]: time="2025-06-20T19:24:15.691444909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-28z6n,Uid:2e0b2d2c-4387-4f0a-acd2-67ee85d925c8,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:15.692347 containerd[1619]: time="2025-06-20T19:24:15.691992469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6tz6l,Uid:988cdc8c-2bae-47e2-bbdb-4d40efabde59,Namespace:kube-system,Attempt:0,}" Jun 20 19:24:16.014704 systemd-networkd[1533]: cali4620a6951ff: Link UP Jun 20 19:24:16.015523 systemd-networkd[1533]: cali4620a6951ff: Gained carrier Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.869 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0 calico-apiserver-57c64d78d5- calico-apiserver 2e0b2d2c-4387-4f0a-acd2-67ee85d925c8 816 0 2025-06-20 19:23:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57c64d78d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-57c64d78d5-28z6n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4620a6951ff [] [] }} ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.876 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.950 [INFO][4755] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.951 [INFO][4755] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57c64d78d5-28z6n", "timestamp":"2025-06-20 19:24:15.950938127 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 
19:24:15.951 [INFO][4755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.951 [INFO][4755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.951 [INFO][4755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.955 [INFO][4755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.970 [INFO][4755] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.972 [INFO][4755] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.974 [INFO][4755] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.975 [INFO][4755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.975 [INFO][4755] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.975 [INFO][4755] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.980 [INFO][4755] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" host="localhost" 
Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.992 [INFO][4755] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.992 [INFO][4755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" host="localhost" Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.992 [INFO][4755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:16.030701 containerd[1619]: 2025-06-20 19:24:15.992 [INFO][4755] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" Jun 20 19:24:16.056098 containerd[1619]: 2025-06-20 19:24:15.995 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0", GenerateName:"calico-apiserver-57c64d78d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c64d78d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57c64d78d5-28z6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4620a6951ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.056098 containerd[1619]: 2025-06-20 19:24:16.012 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" Jun 20 19:24:16.056098 containerd[1619]: 2025-06-20 19:24:16.012 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4620a6951ff ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" Jun 20 19:24:16.056098 containerd[1619]: 2025-06-20 19:24:16.015 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" Jun 20 19:24:16.056098 containerd[1619]: 2025-06-20 19:24:16.016 [INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0", GenerateName:"calico-apiserver-57c64d78d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c64d78d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf", Pod:"calico-apiserver-57c64d78d5-28z6n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4620a6951ff", MAC:"da:06:e8:c6:f2:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.056098 containerd[1619]: 2025-06-20 19:24:16.027 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-28z6n" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0" Jun 20 19:24:16.115257 kubelet[2909]: I0620 19:24:16.115181 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-v8ssj" podStartSLOduration=39.115166252 podStartE2EDuration="39.115166252s" podCreationTimestamp="2025-06-20 19:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:24:16.068056859 +0000 UTC m=+42.496295536" watchObservedRunningTime="2025-06-20 19:24:16.115166252 +0000 UTC m=+42.543404928" Jun 20 19:24:16.123243 systemd-networkd[1533]: calie8ae8f2e617: Link UP Jun 20 19:24:16.124304 systemd-networkd[1533]: calie8ae8f2e617: Gained carrier Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:15.873 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0 coredns-7c65d6cfc9- kube-system 988cdc8c-2bae-47e2-bbdb-4d40efabde59 814 0 2025-06-20 19:23:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-6tz6l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie8ae8f2e617 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:15.875 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:15.965 [INFO][4751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" HandleID="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Workload="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:15.965 [INFO][4751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" HandleID="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Workload="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b76e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-6tz6l", "timestamp":"2025-06-20 19:24:15.965066065 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:15.965 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:15.992 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:15.992 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.056 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.074 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.082 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.084 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.085 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.085 [INFO][4751] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.086 [INFO][4751] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3 Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.097 [INFO][4751] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.111 [INFO][4751] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.111 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" host="localhost" Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.112 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:16.149920 containerd[1619]: 2025-06-20 19:24:16.112 [INFO][4751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" HandleID="k8s-pod-network.1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Workload="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" Jun 20 19:24:16.150951 containerd[1619]: 2025-06-20 19:24:16.116 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"988cdc8c-2bae-47e2-bbdb-4d40efabde59", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-6tz6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8ae8f2e617", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.150951 containerd[1619]: 2025-06-20 19:24:16.116 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" Jun 20 19:24:16.150951 containerd[1619]: 2025-06-20 19:24:16.116 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8ae8f2e617 ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" Jun 20 19:24:16.150951 containerd[1619]: 2025-06-20 19:24:16.125 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" Jun 20 19:24:16.150951 containerd[1619]: 2025-06-20 19:24:16.126 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"988cdc8c-2bae-47e2-bbdb-4d40efabde59", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3", Pod:"coredns-7c65d6cfc9-6tz6l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8ae8f2e617", MAC:"2a:d4:42:fe:df:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.150951 containerd[1619]: 2025-06-20 19:24:16.142 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6tz6l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6tz6l-eth0" Jun 20 19:24:16.169831 containerd[1619]: time="2025-06-20T19:24:16.169763404Z" level=info msg="connecting to shim eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" address="unix:///run/containerd/s/0b8a7500fa3c9260539f91d7b9c57e67cb3a0a9f008940e366b9edc75165d335" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:16.189991 containerd[1619]: time="2025-06-20T19:24:16.189934828Z" level=info msg="connecting to shim 1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3" address="unix:///run/containerd/s/afd2a4d7c7af9c6714d6c134d98d4672b5dc55a673e8fe7fc448158779ff945f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:16.249588 systemd-networkd[1533]: cali178aa379f77: Link UP Jun 20 19:24:16.251209 systemd-networkd[1533]: cali178aa379f77: Gained carrier Jun 20 19:24:16.262926 systemd[1]: Started cri-containerd-1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3.scope - libcontainer container 1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3. Jun 20 19:24:16.279918 systemd[1]: Started cri-containerd-eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf.scope - libcontainer container eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf. 
Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:15.872 [INFO][4717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0 calico-kube-controllers-844cfccc97- calico-system 22ecde37-4cc5-4658-bd30-85fbd7200f9b 812 0 2025-06-20 19:23:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:844cfccc97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-844cfccc97-b8rbg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali178aa379f77 [] [] }} ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:15.875 [INFO][4717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:15.988 [INFO][4749] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" HandleID="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Workload="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:15.988 [INFO][4749] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" HandleID="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Workload="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-844cfccc97-b8rbg", "timestamp":"2025-06-20 19:24:15.98817428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:15.988 [INFO][4749] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.112 [INFO][4749] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.112 [INFO][4749] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.160 [INFO][4749] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.185 [INFO][4749] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.199 [INFO][4749] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.204 [INFO][4749] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.208 [INFO][4749] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.208 [INFO][4749] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.212 [INFO][4749] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.224 [INFO][4749] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.237 [INFO][4749] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.237 [INFO][4749] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" host="localhost" Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.237 [INFO][4749] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:16.303302 containerd[1619]: 2025-06-20 19:24:16.237 [INFO][4749] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" HandleID="k8s-pod-network.2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Workload="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" Jun 20 19:24:16.287669 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:16.312117 containerd[1619]: 2025-06-20 19:24:16.243 [INFO][4717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0", GenerateName:"calico-kube-controllers-844cfccc97-", Namespace:"calico-system", SelfLink:"", UID:"22ecde37-4cc5-4658-bd30-85fbd7200f9b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844cfccc97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"calico-kube-controllers-844cfccc97-b8rbg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali178aa379f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.312117 containerd[1619]: 2025-06-20 19:24:16.243 [INFO][4717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" Jun 20 19:24:16.312117 containerd[1619]: 2025-06-20 19:24:16.244 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali178aa379f77 ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" Jun 20 19:24:16.312117 containerd[1619]: 2025-06-20 19:24:16.252 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" Jun 20 19:24:16.312117 containerd[1619]: 2025-06-20 19:24:16.253 [INFO][4717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0", GenerateName:"calico-kube-controllers-844cfccc97-", Namespace:"calico-system", SelfLink:"", UID:"22ecde37-4cc5-4658-bd30-85fbd7200f9b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844cfccc97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb", Pod:"calico-kube-controllers-844cfccc97-b8rbg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali178aa379f77", MAC:"e2:02:56:46:bf:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.312117 containerd[1619]: 2025-06-20 19:24:16.285 [INFO][4717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" Namespace="calico-system" Pod="calico-kube-controllers-844cfccc97-b8rbg" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844cfccc97--b8rbg-eth0" Jun 20 19:24:16.334223 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:16.373292 containerd[1619]: time="2025-06-20T19:24:16.373265789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6tz6l,Uid:988cdc8c-2bae-47e2-bbdb-4d40efabde59,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3\"" Jun 20 19:24:16.381832 systemd-networkd[1533]: calibd08dd9372f: Link UP Jun 20 19:24:16.383697 systemd-networkd[1533]: calibd08dd9372f: Gained carrier Jun 20 19:24:16.393573 containerd[1619]: time="2025-06-20T19:24:16.393542780Z" level=info msg="CreateContainer within sandbox \"1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:24:16.402101 containerd[1619]: time="2025-06-20T19:24:16.401969622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-28z6n,Uid:2e0b2d2c-4387-4f0a-acd2-67ee85d925c8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\"" Jun 20 19:24:16.404271 containerd[1619]: time="2025-06-20T19:24:16.403099812Z" level=info msg="connecting to shim 2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb" address="unix:///run/containerd/s/21542b47bc65093a9adeddeac7d4e945a10306d9e0144315e3c8c9a56246b72a" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:15.873 [INFO][4692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0 calico-apiserver-54f4b499c8- calico-apiserver da2fa4c7-de11-4e7d-93ae-6602a4ac4909 813 0 2025-06-20 19:23:45 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f4b499c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f4b499c8-vwd6m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd08dd9372f [] [] }} ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:15.875 [INFO][4692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.008 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" HandleID="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Workload="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.012 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" HandleID="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Workload="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f4b499c8-vwd6m", "timestamp":"2025-06-20 19:24:16.00884203 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.012 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.237 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.237 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.263 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.299 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.314 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.317 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.321 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.321 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.326 [INFO][4753] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5 Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.339 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.368 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.368 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" host="localhost" Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.368 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:16.418506 containerd[1619]: 2025-06-20 19:24:16.368 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" HandleID="k8s-pod-network.7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Workload="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" Jun 20 19:24:16.419016 containerd[1619]: 2025-06-20 19:24:16.377 [INFO][4692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0", GenerateName:"calico-apiserver-54f4b499c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"da2fa4c7-de11-4e7d-93ae-6602a4ac4909", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f4b499c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f4b499c8-vwd6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd08dd9372f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.419016 containerd[1619]: 2025-06-20 19:24:16.378 [INFO][4692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" Jun 20 19:24:16.419016 containerd[1619]: 2025-06-20 19:24:16.378 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd08dd9372f ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" Jun 20 19:24:16.419016 containerd[1619]: 2025-06-20 19:24:16.386 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" Jun 20 19:24:16.419016 containerd[1619]: 2025-06-20 19:24:16.386 [INFO][4692] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0", 
GenerateName:"calico-apiserver-54f4b499c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"da2fa4c7-de11-4e7d-93ae-6602a4ac4909", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f4b499c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5", Pod:"calico-apiserver-54f4b499c8-vwd6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd08dd9372f", MAC:"92:5d:a7:a5:50:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:16.419016 containerd[1619]: 2025-06-20 19:24:16.409 [INFO][4692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-vwd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--vwd6m-eth0" Jun 20 19:24:16.434878 systemd[1]: Started cri-containerd-2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb.scope - libcontainer container 2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb. 
Jun 20 19:24:16.448345 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:16.550543 containerd[1619]: time="2025-06-20T19:24:16.549872999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844cfccc97-b8rbg,Uid:22ecde37-4cc5-4658-bd30-85fbd7200f9b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb\"" Jun 20 19:24:16.581467 containerd[1619]: time="2025-06-20T19:24:16.581358455Z" level=info msg="Container aff4a64ac199e93d9e1cad00288a8293f8cb35b2448aff9ce116cfe15e6d8267: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:16.602306 containerd[1619]: time="2025-06-20T19:24:16.602128680Z" level=info msg="CreateContainer within sandbox \"1a49b5d0c49a8af5571705c4a4036a169c32c48ac4fef694a0d382004d41c2e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aff4a64ac199e93d9e1cad00288a8293f8cb35b2448aff9ce116cfe15e6d8267\"" Jun 20 19:24:16.603270 containerd[1619]: time="2025-06-20T19:24:16.603254713Z" level=info msg="StartContainer for \"aff4a64ac199e93d9e1cad00288a8293f8cb35b2448aff9ce116cfe15e6d8267\"" Jun 20 19:24:16.604105 containerd[1619]: time="2025-06-20T19:24:16.604087269Z" level=info msg="connecting to shim aff4a64ac199e93d9e1cad00288a8293f8cb35b2448aff9ce116cfe15e6d8267" address="unix:///run/containerd/s/afd2a4d7c7af9c6714d6c134d98d4672b5dc55a673e8fe7fc448158779ff945f" protocol=ttrpc version=3 Jun 20 19:24:16.612954 containerd[1619]: time="2025-06-20T19:24:16.612846583Z" level=info msg="connecting to shim 7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5" address="unix:///run/containerd/s/2543c064b96c7b716adb092c481e8563979e9f108bc6774bd5e5b146901a1223" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:16.637997 systemd[1]: Started cri-containerd-aff4a64ac199e93d9e1cad00288a8293f8cb35b2448aff9ce116cfe15e6d8267.scope - libcontainer 
container aff4a64ac199e93d9e1cad00288a8293f8cb35b2448aff9ce116cfe15e6d8267. Jun 20 19:24:16.654774 systemd-networkd[1533]: cali302c776754a: Gained IPv6LL Jun 20 19:24:16.667826 systemd[1]: Started cri-containerd-7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5.scope - libcontainer container 7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5. Jun 20 19:24:16.701128 containerd[1619]: time="2025-06-20T19:24:16.701108401Z" level=info msg="StartContainer for \"aff4a64ac199e93d9e1cad00288a8293f8cb35b2448aff9ce116cfe15e6d8267\" returns successfully" Jun 20 19:24:16.707027 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:16.839458 containerd[1619]: time="2025-06-20T19:24:16.839391806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f4b499c8-vwd6m,Uid:da2fa4c7-de11-4e7d-93ae-6602a4ac4909,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5\"" Jun 20 19:24:16.847765 systemd-networkd[1533]: cali9930bb7fa90: Gained IPv6LL Jun 20 19:24:16.970075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70327055.mount: Deactivated successfully. 
Jun 20 19:24:17.182277 kubelet[2909]: I0620 19:24:17.182219 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6tz6l" podStartSLOduration=40.182204231 podStartE2EDuration="40.182204231s" podCreationTimestamp="2025-06-20 19:23:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:24:17.167329612 +0000 UTC m=+43.595568289" watchObservedRunningTime="2025-06-20 19:24:17.182204231 +0000 UTC m=+43.610442902" Jun 20 19:24:17.233270 systemd-networkd[1533]: cali4620a6951ff: Gained IPv6LL Jun 20 19:24:17.294811 systemd-networkd[1533]: calie8ae8f2e617: Gained IPv6LL Jun 20 19:24:17.668767 containerd[1619]: time="2025-06-20T19:24:17.668739750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-dz2fv,Uid:b168f456-8c10-4805-a1f9-c9eafb698c4f,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:17.679527 systemd-networkd[1533]: cali178aa379f77: Gained IPv6LL Jun 20 19:24:17.742903 systemd-networkd[1533]: calibd08dd9372f: Gained IPv6LL Jun 20 19:24:17.829312 containerd[1619]: time="2025-06-20T19:24:17.829250259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:17.832937 containerd[1619]: time="2025-06-20T19:24:17.832817904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:24:17.876463 containerd[1619]: time="2025-06-20T19:24:17.876432671Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:17.878160 containerd[1619]: time="2025-06-20T19:24:17.878113504Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:17.888352 containerd[1619]: time="2025-06-20T19:24:17.888316476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 3.516179446s" Jun 20 19:24:17.888503 containerd[1619]: time="2025-06-20T19:24:17.888473527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:24:17.893391 containerd[1619]: time="2025-06-20T19:24:17.893365571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:24:17.895563 containerd[1619]: time="2025-06-20T19:24:17.895476055Z" level=info msg="CreateContainer within sandbox \"3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:24:17.906677 containerd[1619]: time="2025-06-20T19:24:17.906369366Z" level=info msg="Container e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:17.954858 containerd[1619]: time="2025-06-20T19:24:17.954788174Z" level=info msg="CreateContainer within sandbox \"3c2bcf82961a9f0e3d81b2b7620c85f5306bedf60b2f9b513034060a60a63844\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\"" Jun 20 19:24:17.955846 containerd[1619]: time="2025-06-20T19:24:17.955794655Z" level=info msg="StartContainer for 
\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\"" Jun 20 19:24:17.992842 containerd[1619]: time="2025-06-20T19:24:17.992809639Z" level=info msg="connecting to shim e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d" address="unix:///run/containerd/s/83546bf54fca28ac5d10d2a6e3977fffd446ef10762499e9faf46982cda7b1bb" protocol=ttrpc version=3 Jun 20 19:24:18.013820 systemd[1]: Started cri-containerd-e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d.scope - libcontainer container e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d. Jun 20 19:24:18.069171 containerd[1619]: time="2025-06-20T19:24:18.069148132Z" level=info msg="StartContainer for \"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" returns successfully" Jun 20 19:24:18.207727 kubelet[2909]: I0620 19:24:18.207612 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-dc7b455cb-kmtz6" podStartSLOduration=27.224089432 podStartE2EDuration="32.207598702s" podCreationTimestamp="2025-06-20 19:23:46 +0000 UTC" firstStartedPulling="2025-06-20 19:24:12.905513239 +0000 UTC m=+39.333751906" lastFinishedPulling="2025-06-20 19:24:17.889022508 +0000 UTC m=+44.317261176" observedRunningTime="2025-06-20 19:24:18.143409522 +0000 UTC m=+44.571648200" watchObservedRunningTime="2025-06-20 19:24:18.207598702 +0000 UTC m=+44.635837368" Jun 20 19:24:18.293649 systemd-networkd[1533]: cali72cc1832895: Link UP Jun 20 19:24:18.294187 systemd-networkd[1533]: cali72cc1832895: Gained carrier Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:17.926 [INFO][5044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0 calico-apiserver-57c64d78d5- calico-apiserver b168f456-8c10-4805-a1f9-c9eafb698c4f 808 0 2025-06-20 19:23:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:57c64d78d5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57c64d78d5-dz2fv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali72cc1832895 [] [] }} ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:17.926 [INFO][5044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.222 [INFO][5061] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.222 [INFO][5061] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024a7f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57c64d78d5-dz2fv", "timestamp":"2025-06-20 19:24:18.222739873 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.223 [INFO][5061] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.223 [INFO][5061] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.223 [INFO][5061] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.235 [INFO][5061] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.242 [INFO][5061] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.245 [INFO][5061] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.246 [INFO][5061] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.248 [INFO][5061] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.248 [INFO][5061] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.248 [INFO][5061] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3 Jun 20 19:24:18.311530 
containerd[1619]: 2025-06-20 19:24:18.270 [INFO][5061] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.282 [INFO][5061] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.282 [INFO][5061] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" host="localhost" Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.282 [INFO][5061] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:18.311530 containerd[1619]: 2025-06-20 19:24:18.282 [INFO][5061] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:18.334847 containerd[1619]: 2025-06-20 19:24:18.284 [INFO][5044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0", GenerateName:"calico-apiserver-57c64d78d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b168f456-8c10-4805-a1f9-c9eafb698c4f", 
ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c64d78d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57c64d78d5-dz2fv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72cc1832895", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:18.334847 containerd[1619]: 2025-06-20 19:24:18.285 [INFO][5044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:18.334847 containerd[1619]: 2025-06-20 19:24:18.285 [INFO][5044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72cc1832895 ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:18.334847 containerd[1619]: 2025-06-20 19:24:18.294 [INFO][5044] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:18.334847 containerd[1619]: 2025-06-20 19:24:18.294 [INFO][5044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0", GenerateName:"calico-apiserver-57c64d78d5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b168f456-8c10-4805-a1f9-c9eafb698c4f", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57c64d78d5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3", Pod:"calico-apiserver-57c64d78d5-dz2fv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali72cc1832895", MAC:"32:09:f7:65:b1:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:18.334847 containerd[1619]: 2025-06-20 19:24:18.309 [INFO][5044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Namespace="calico-apiserver" Pod="calico-apiserver-57c64d78d5-dz2fv" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:18.447763 containerd[1619]: time="2025-06-20T19:24:18.447717625Z" level=info msg="connecting to shim ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" address="unix:///run/containerd/s/12b3b6db763548e65729fa47ca6972d98a4033d3860a54f76ce41827d0fd39fe" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:18.464795 systemd[1]: Started cri-containerd-ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3.scope - libcontainer container ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3. 
Jun 20 19:24:18.474196 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:18.503470 containerd[1619]: time="2025-06-20T19:24:18.503443461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57c64d78d5-dz2fv,Uid:b168f456-8c10-4805-a1f9-c9eafb698c4f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\"" Jun 20 19:24:19.139117 kubelet[2909]: I0620 19:24:19.139083 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:24:19.449722 containerd[1619]: time="2025-06-20T19:24:19.449625724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"a7eceed0bfb0c5207467c564f17b5236cf92d264e69361d4c51998a1be501846\" pid:5168 exit_status:1 exited_at:{seconds:1750447459 nanos:436385044}" Jun 20 19:24:19.470848 systemd-networkd[1533]: cali72cc1832895: Gained IPv6LL Jun 20 19:24:19.654776 containerd[1619]: time="2025-06-20T19:24:19.654731702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"5c32a75a8ba8067f32e2629dc8e767ee0ac0805ec1c79d3a387c9a5b81ec687b\" pid:5190 exit_status:1 exited_at:{seconds:1750447459 nanos:647262590}" Jun 20 19:24:19.833089 containerd[1619]: time="2025-06-20T19:24:19.832468628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:19.833376 containerd[1619]: time="2025-06-20T19:24:19.833347627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:24:19.833566 containerd[1619]: time="2025-06-20T19:24:19.833547399Z" level=info msg="ImageCreate event 
name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:19.835778 containerd[1619]: time="2025-06-20T19:24:19.835741236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:19.836156 containerd[1619]: time="2025-06-20T19:24:19.836129159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.942624458s" Jun 20 19:24:19.836215 containerd[1619]: time="2025-06-20T19:24:19.836157589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:24:19.838718 containerd[1619]: time="2025-06-20T19:24:19.837359850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:24:19.844707 containerd[1619]: time="2025-06-20T19:24:19.844648776Z" level=info msg="CreateContainer within sandbox \"4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:24:19.887566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135659688.mount: Deactivated successfully. 
Jun 20 19:24:19.888763 containerd[1619]: time="2025-06-20T19:24:19.888597742Z" level=info msg="Container 06d4f5f59b68d9dc60cfbf9ac602893a195e96b1ec835a365a38de8f18438d78: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:19.942349 containerd[1619]: time="2025-06-20T19:24:19.942321122Z" level=info msg="CreateContainer within sandbox \"4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"06d4f5f59b68d9dc60cfbf9ac602893a195e96b1ec835a365a38de8f18438d78\"" Jun 20 19:24:19.943149 containerd[1619]: time="2025-06-20T19:24:19.943112038Z" level=info msg="StartContainer for \"06d4f5f59b68d9dc60cfbf9ac602893a195e96b1ec835a365a38de8f18438d78\"" Jun 20 19:24:19.944353 containerd[1619]: time="2025-06-20T19:24:19.944336241Z" level=info msg="connecting to shim 06d4f5f59b68d9dc60cfbf9ac602893a195e96b1ec835a365a38de8f18438d78" address="unix:///run/containerd/s/0a3928ad89d71fa2ed4f8ffc9b0546314e2f204211848e4221d6b58eb1b48bce" protocol=ttrpc version=3 Jun 20 19:24:19.962752 systemd[1]: Started cri-containerd-06d4f5f59b68d9dc60cfbf9ac602893a195e96b1ec835a365a38de8f18438d78.scope - libcontainer container 06d4f5f59b68d9dc60cfbf9ac602893a195e96b1ec835a365a38de8f18438d78. 
Jun 20 19:24:20.013546 containerd[1619]: time="2025-06-20T19:24:20.013518931Z" level=info msg="StartContainer for \"06d4f5f59b68d9dc60cfbf9ac602893a195e96b1ec835a365a38de8f18438d78\" returns successfully" Jun 20 19:24:20.200760 containerd[1619]: time="2025-06-20T19:24:20.200731647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"c4b22222fa7519d4601822ad94f6a90eeaa8310c86cc26b9b41e4d1c6ba119ad\" pid:5249 exit_status:1 exited_at:{seconds:1750447460 nanos:200427028}" Jun 20 19:24:24.662153 containerd[1619]: time="2025-06-20T19:24:24.662017236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"d121692f4f6ed7bde4313a51aee0b3d58594dac2efa5209aa896709ead637cfd\" pid:5285 exited_at:{seconds:1750447464 nanos:660944487}" Jun 20 19:24:24.663308 containerd[1619]: time="2025-06-20T19:24:24.663287874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:24.665652 containerd[1619]: time="2025-06-20T19:24:24.665623123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:24:24.676553 containerd[1619]: time="2025-06-20T19:24:24.676516559Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:24.680360 containerd[1619]: time="2025-06-20T19:24:24.680310224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:24.689142 containerd[1619]: time="2025-06-20T19:24:24.688842430Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 4.850106759s" Jun 20 19:24:24.689142 containerd[1619]: time="2025-06-20T19:24:24.688894821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:24:24.691192 containerd[1619]: time="2025-06-20T19:24:24.691126602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:24:24.703376 containerd[1619]: time="2025-06-20T19:24:24.703344874Z" level=info msg="CreateContainer within sandbox \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:24:24.711739 containerd[1619]: time="2025-06-20T19:24:24.708726503Z" level=info msg="Container 4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:24.719169 containerd[1619]: time="2025-06-20T19:24:24.716882717Z" level=info msg="CreateContainer within sandbox \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\"" Jun 20 19:24:24.719169 containerd[1619]: time="2025-06-20T19:24:24.717404567Z" level=info msg="StartContainer for \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\"" Jun 20 19:24:24.719169 containerd[1619]: time="2025-06-20T19:24:24.718286838Z" level=info msg="connecting to shim 4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a" 
address="unix:///run/containerd/s/0b8a7500fa3c9260539f91d7b9c57e67cb3a0a9f008940e366b9edc75165d335" protocol=ttrpc version=3 Jun 20 19:24:24.739809 systemd[1]: Started cri-containerd-4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a.scope - libcontainer container 4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a. Jun 20 19:24:24.798883 containerd[1619]: time="2025-06-20T19:24:24.798854185Z" level=info msg="StartContainer for \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" returns successfully" Jun 20 19:24:25.859619 kubelet[2909]: I0620 19:24:25.859393 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57c64d78d5-28z6n" podStartSLOduration=32.54213873 podStartE2EDuration="40.826177288s" podCreationTimestamp="2025-06-20 19:23:45 +0000 UTC" firstStartedPulling="2025-06-20 19:24:16.406972999 +0000 UTC m=+42.835211666" lastFinishedPulling="2025-06-20 19:24:24.691011557 +0000 UTC m=+51.119250224" observedRunningTime="2025-06-20 19:24:25.44488787 +0000 UTC m=+51.873126547" watchObservedRunningTime="2025-06-20 19:24:25.826177288 +0000 UTC m=+52.254415959" Jun 20 19:24:30.008738 containerd[1619]: time="2025-06-20T19:24:30.008701103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:30.061213 containerd[1619]: time="2025-06-20T19:24:30.061008796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:24:30.093959 containerd[1619]: time="2025-06-20T19:24:30.093913608Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:30.175562 containerd[1619]: time="2025-06-20T19:24:30.175520845Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:30.176272 containerd[1619]: time="2025-06-20T19:24:30.176245269Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 5.485085739s" Jun 20 19:24:30.195002 containerd[1619]: time="2025-06-20T19:24:30.176430288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:24:30.195002 containerd[1619]: time="2025-06-20T19:24:30.177458979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:24:30.926295 containerd[1619]: time="2025-06-20T19:24:30.926261282Z" level=info msg="CreateContainer within sandbox \"2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:24:32.974150 containerd[1619]: time="2025-06-20T19:24:32.973458808Z" level=info msg="Container 1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:33.108065 containerd[1619]: time="2025-06-20T19:24:33.108030132Z" level=info msg="CreateContainer within sandbox \"2f3d1429f29ef961230c2af4b53f62162206728f3d6387ef7e0845cdd1bf05fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\"" Jun 20 19:24:33.216019 containerd[1619]: time="2025-06-20T19:24:33.214993987Z" level=info 
msg="StartContainer for \"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\"" Jun 20 19:24:33.216535 containerd[1619]: time="2025-06-20T19:24:33.216352071Z" level=info msg="connecting to shim 1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03" address="unix:///run/containerd/s/21542b47bc65093a9adeddeac7d4e945a10306d9e0144315e3c8c9a56246b72a" protocol=ttrpc version=3 Jun 20 19:24:33.258147 containerd[1619]: time="2025-06-20T19:24:33.258072326Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:33.266034 containerd[1619]: time="2025-06-20T19:24:33.263741742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:24:33.266034 containerd[1619]: time="2025-06-20T19:24:33.264876763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 3.087400707s" Jun 20 19:24:33.266034 containerd[1619]: time="2025-06-20T19:24:33.264895250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:24:33.266034 containerd[1619]: time="2025-06-20T19:24:33.265552495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:24:33.278889 containerd[1619]: time="2025-06-20T19:24:33.278856028Z" level=info msg="CreateContainer within sandbox \"7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:24:33.354705 containerd[1619]: 
time="2025-06-20T19:24:33.352360513Z" level=info msg="Container f69a93dbcca96c16b17c90217c497ca3795c5578962ec27c13a94d9686fb166f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:33.354417 systemd[1]: Started cri-containerd-1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03.scope - libcontainer container 1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03. Jun 20 19:24:33.417601 containerd[1619]: time="2025-06-20T19:24:33.417560624Z" level=info msg="CreateContainer within sandbox \"7cc91be65739e909a501c371b364099e57e816e8945a931ae8e4a4794885ada5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f69a93dbcca96c16b17c90217c497ca3795c5578962ec27c13a94d9686fb166f\"" Jun 20 19:24:33.418884 containerd[1619]: time="2025-06-20T19:24:33.418067839Z" level=info msg="StartContainer for \"f69a93dbcca96c16b17c90217c497ca3795c5578962ec27c13a94d9686fb166f\"" Jun 20 19:24:33.418884 containerd[1619]: time="2025-06-20T19:24:33.418744907Z" level=info msg="connecting to shim f69a93dbcca96c16b17c90217c497ca3795c5578962ec27c13a94d9686fb166f" address="unix:///run/containerd/s/2543c064b96c7b716adb092c481e8563979e9f108bc6774bd5e5b146901a1223" protocol=ttrpc version=3 Jun 20 19:24:33.437742 containerd[1619]: time="2025-06-20T19:24:33.437705466Z" level=info msg="StartContainer for \"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\" returns successfully" Jun 20 19:24:33.449787 systemd[1]: Started cri-containerd-f69a93dbcca96c16b17c90217c497ca3795c5578962ec27c13a94d9686fb166f.scope - libcontainer container f69a93dbcca96c16b17c90217c497ca3795c5578962ec27c13a94d9686fb166f. 
Jun 20 19:24:33.510976 containerd[1619]: time="2025-06-20T19:24:33.510771490Z" level=info msg="StartContainer for \"f69a93dbcca96c16b17c90217c497ca3795c5578962ec27c13a94d9686fb166f\" returns successfully" Jun 20 19:24:34.132666 containerd[1619]: time="2025-06-20T19:24:34.132359034Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:34.145580 containerd[1619]: time="2025-06-20T19:24:34.145549040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:24:34.146926 containerd[1619]: time="2025-06-20T19:24:34.146905835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 881.118455ms" Jun 20 19:24:34.147117 containerd[1619]: time="2025-06-20T19:24:34.146997258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:24:34.282783 containerd[1619]: time="2025-06-20T19:24:34.282443960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:24:34.290522 containerd[1619]: time="2025-06-20T19:24:34.284328038Z" level=info msg="CreateContainer within sandbox \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:24:34.328092 containerd[1619]: time="2025-06-20T19:24:34.326975034Z" level=info msg="Container de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:34.346688 
containerd[1619]: time="2025-06-20T19:24:34.346474464Z" level=info msg="CreateContainer within sandbox \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\"" Jun 20 19:24:34.425200 containerd[1619]: time="2025-06-20T19:24:34.424816833Z" level=info msg="StartContainer for \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\"" Jun 20 19:24:34.440204 containerd[1619]: time="2025-06-20T19:24:34.440179251Z" level=info msg="connecting to shim de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80" address="unix:///run/containerd/s/12b3b6db763548e65729fa47ca6972d98a4033d3860a54f76ce41827d0fd39fe" protocol=ttrpc version=3 Jun 20 19:24:34.484819 systemd[1]: Started cri-containerd-de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80.scope - libcontainer container de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80. 
Jun 20 19:24:34.542550 containerd[1619]: time="2025-06-20T19:24:34.542328901Z" level=info msg="StartContainer for \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" returns successfully" Jun 20 19:24:34.561122 containerd[1619]: time="2025-06-20T19:24:34.561078205Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\" id:\"9856d7e08992c839a4eaf9388fd4f277d693cc4859a119165bcabde6260272bf\" pid:5438 exited_at:{seconds:1750447474 nanos:554791696}" Jun 20 19:24:34.686227 kubelet[2909]: I0620 19:24:34.622047 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-844cfccc97-b8rbg" podStartSLOduration=32.980316703 podStartE2EDuration="46.606239474s" podCreationTimestamp="2025-06-20 19:23:48 +0000 UTC" firstStartedPulling="2025-06-20 19:24:16.55129954 +0000 UTC m=+42.979538206" lastFinishedPulling="2025-06-20 19:24:30.177222304 +0000 UTC m=+56.605460977" observedRunningTime="2025-06-20 19:24:34.590826823 +0000 UTC m=+61.019065500" watchObservedRunningTime="2025-06-20 19:24:34.606239474 +0000 UTC m=+61.034478150" Jun 20 19:24:35.079036 kubelet[2909]: I0620 19:24:35.078947 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54f4b499c8-vwd6m" podStartSLOduration=33.654872039 podStartE2EDuration="50.078930907s" podCreationTimestamp="2025-06-20 19:23:45 +0000 UTC" firstStartedPulling="2025-06-20 19:24:16.841352728 +0000 UTC m=+43.269591395" lastFinishedPulling="2025-06-20 19:24:33.265411596 +0000 UTC m=+59.693650263" observedRunningTime="2025-06-20 19:24:34.918772259 +0000 UTC m=+61.347010935" watchObservedRunningTime="2025-06-20 19:24:35.078930907 +0000 UTC m=+61.507169578" Jun 20 19:24:35.426805 systemd[1]: Created slice kubepods-besteffort-podf482f811_7af4_4d4f_8fe9_be05e12dca20.slice - libcontainer container 
kubepods-besteffort-podf482f811_7af4_4d4f_8fe9_be05e12dca20.slice. Jun 20 19:24:35.543302 kubelet[2909]: I0620 19:24:35.543272 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7tkr\" (UniqueName: \"kubernetes.io/projected/f482f811-7af4-4d4f-8fe9-be05e12dca20-kube-api-access-j7tkr\") pod \"calico-apiserver-54f4b499c8-mbft2\" (UID: \"f482f811-7af4-4d4f-8fe9-be05e12dca20\") " pod="calico-apiserver/calico-apiserver-54f4b499c8-mbft2" Jun 20 19:24:35.543469 kubelet[2909]: I0620 19:24:35.543446 2909 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f482f811-7af4-4d4f-8fe9-be05e12dca20-calico-apiserver-certs\") pod \"calico-apiserver-54f4b499c8-mbft2\" (UID: \"f482f811-7af4-4d4f-8fe9-be05e12dca20\") " pod="calico-apiserver/calico-apiserver-54f4b499c8-mbft2" Jun 20 19:24:36.123419 containerd[1619]: time="2025-06-20T19:24:36.123386364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f4b499c8-mbft2,Uid:f482f811-7af4-4d4f-8fe9-be05e12dca20,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:36.817769 containerd[1619]: time="2025-06-20T19:24:36.817742404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:36.847436 containerd[1619]: time="2025-06-20T19:24:36.819232170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 20 19:24:36.854080 containerd[1619]: time="2025-06-20T19:24:36.854057095Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:36.856718 containerd[1619]: time="2025-06-20T19:24:36.856700301Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:36.858795 containerd[1619]: time="2025-06-20T19:24:36.858772111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 2.575077267s" Jun 20 19:24:36.858901 containerd[1619]: time="2025-06-20T19:24:36.858889020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 20 19:24:36.953250 containerd[1619]: time="2025-06-20T19:24:36.952944038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\" id:\"1e68f686981c5b56257f4f197ec66ed781e1821b935842779ce856d689036e39\" pid:5508 exited_at:{seconds:1750447476 nanos:886116134}" Jun 20 19:24:37.154388 kubelet[2909]: I0620 19:24:37.135984 2909 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:24:37.556415 containerd[1619]: time="2025-06-20T19:24:37.556107995Z" level=info msg="StopContainer for \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" with timeout 30 (s)" Jun 20 19:24:37.580013 containerd[1619]: time="2025-06-20T19:24:37.579986118Z" level=info msg="Stop container \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" with signal terminated" Jun 20 19:24:37.644901 containerd[1619]: time="2025-06-20T19:24:37.644868032Z" level=info msg="CreateContainer within sandbox 
\"4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 19:24:37.702089 systemd-networkd[1533]: cali2ff9dd1659f: Link UP Jun 20 19:24:37.702601 systemd-networkd[1533]: cali2ff9dd1659f: Gained carrier Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:36.896 [INFO][5521] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0 calico-apiserver-54f4b499c8- calico-apiserver f482f811-7af4-4d4f-8fe9-be05e12dca20 1076 0 2025-06-20 19:24:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f4b499c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f4b499c8-mbft2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ff9dd1659f [] [] }} ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:36.900 [INFO][5521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.504 [INFO][5543] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" HandleID="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" 
Workload="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.538 [INFO][5543] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" HandleID="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Workload="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003861c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f4b499c8-mbft2", "timestamp":"2025-06-20 19:24:37.504406438 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.538 [INFO][5543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.538 [INFO][5543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.538 [INFO][5543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.572 [INFO][5543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.588 [INFO][5543] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.596 [INFO][5543] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.602 [INFO][5543] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.606 [INFO][5543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.606 [INFO][5543] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.608 [INFO][5543] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980 Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.614 [INFO][5543] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.631 [INFO][5543] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 
handle="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.631 [INFO][5543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" host="localhost" Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.631 [INFO][5543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:37.832787 containerd[1619]: 2025-06-20 19:24:37.631 [INFO][5543] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" HandleID="k8s-pod-network.a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Workload="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" Jun 20 19:24:37.844605 containerd[1619]: 2025-06-20 19:24:37.639 [INFO][5521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0", GenerateName:"calico-apiserver-54f4b499c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f482f811-7af4-4d4f-8fe9-be05e12dca20", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f4b499c8", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f4b499c8-mbft2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff9dd1659f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:37.844605 containerd[1619]: 2025-06-20 19:24:37.639 [INFO][5521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" Jun 20 19:24:37.844605 containerd[1619]: 2025-06-20 19:24:37.639 [INFO][5521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ff9dd1659f ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" Jun 20 19:24:37.844605 containerd[1619]: 2025-06-20 19:24:37.721 [INFO][5521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" Jun 20 19:24:37.844605 containerd[1619]: 2025-06-20 19:24:37.722 [INFO][5521] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0", GenerateName:"calico-apiserver-54f4b499c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"f482f811-7af4-4d4f-8fe9-be05e12dca20", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f4b499c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980", Pod:"calico-apiserver-54f4b499c8-mbft2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ff9dd1659f", MAC:"f2:86:ad:4f:14:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:37.844605 containerd[1619]: 2025-06-20 19:24:37.781 [INFO][5521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" Namespace="calico-apiserver" Pod="calico-apiserver-54f4b499c8-mbft2" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f4b499c8--mbft2-eth0" Jun 20 19:24:37.952025 containerd[1619]: time="2025-06-20T19:24:37.951953087Z" level=info msg="Container aaebc9140a3b13b8902e54caa9ddce3c97a11010a1480b995ecfc504450625ff: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:37.987804 systemd[1]: cri-containerd-de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80.scope: Deactivated successfully. Jun 20 19:24:37.989637 systemd[1]: cri-containerd-de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80.scope: Consumed 580ms CPU time, 49.7M memory peak, 1.3M read from disk. Jun 20 19:24:38.038675 containerd[1619]: time="2025-06-20T19:24:38.038484073Z" level=info msg="CreateContainer within sandbox \"4b1d6822f4f62f344cfec31695059b9a38adba0152341513a56634a71b1d9757\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"aaebc9140a3b13b8902e54caa9ddce3c97a11010a1480b995ecfc504450625ff\"" Jun 20 19:24:38.046249 kubelet[2909]: I0620 19:24:38.019589 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57c64d78d5-dz2fv" podStartSLOduration=37.228208991 podStartE2EDuration="52.997865165s" podCreationTimestamp="2025-06-20 19:23:45 +0000 UTC" firstStartedPulling="2025-06-20 19:24:18.50424488 +0000 UTC m=+44.932483549" lastFinishedPulling="2025-06-20 19:24:34.273901052 +0000 UTC m=+60.702139723" observedRunningTime="2025-06-20 19:24:35.618773175 +0000 UTC m=+62.047011850" watchObservedRunningTime="2025-06-20 19:24:37.997865165 +0000 UTC m=+64.426103835" Jun 20 19:24:38.071152 containerd[1619]: time="2025-06-20T19:24:38.071118942Z" level=info msg="StartContainer for \"aaebc9140a3b13b8902e54caa9ddce3c97a11010a1480b995ecfc504450625ff\"" Jun 20 19:24:38.072766 containerd[1619]: 
time="2025-06-20T19:24:38.072650194Z" level=info msg="received exit event container_id:\"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" id:\"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" pid:5460 exit_status:1 exited_at:{seconds:1750447478 nanos:48074053}" Jun 20 19:24:38.103300 containerd[1619]: time="2025-06-20T19:24:38.103261909Z" level=info msg="connecting to shim aaebc9140a3b13b8902e54caa9ddce3c97a11010a1480b995ecfc504450625ff" address="unix:///run/containerd/s/0a3928ad89d71fa2ed4f8ffc9b0546314e2f204211848e4221d6b58eb1b48bce" protocol=ttrpc version=3 Jun 20 19:24:38.123462 containerd[1619]: time="2025-06-20T19:24:38.123437772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" id:\"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" pid:5460 exit_status:1 exited_at:{seconds:1750447478 nanos:48074053}" Jun 20 19:24:38.136844 systemd[1]: Started cri-containerd-aaebc9140a3b13b8902e54caa9ddce3c97a11010a1480b995ecfc504450625ff.scope - libcontainer container aaebc9140a3b13b8902e54caa9ddce3c97a11010a1480b995ecfc504450625ff. Jun 20 19:24:38.243514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80-rootfs.mount: Deactivated successfully. 
Jun 20 19:24:38.283153 containerd[1619]: time="2025-06-20T19:24:38.283124043Z" level=info msg="StopContainer for \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" returns successfully" Jun 20 19:24:38.292334 containerd[1619]: time="2025-06-20T19:24:38.292313276Z" level=info msg="StartContainer for \"aaebc9140a3b13b8902e54caa9ddce3c97a11010a1480b995ecfc504450625ff\" returns successfully" Jun 20 19:24:38.359837 containerd[1619]: time="2025-06-20T19:24:38.359421772Z" level=info msg="connecting to shim a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980" address="unix:///run/containerd/s/3a96ed9181a8a8f1d4c87ba5b142571f95872d6205fc7c8b0a937be1e557cb06" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:38.366191 containerd[1619]: time="2025-06-20T19:24:38.366170002Z" level=info msg="StopPodSandbox for \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\"" Jun 20 19:24:38.366440 containerd[1619]: time="2025-06-20T19:24:38.366428087Z" level=info msg="Container to stop \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:24:38.377971 systemd[1]: cri-containerd-ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3.scope: Deactivated successfully. Jun 20 19:24:38.379242 containerd[1619]: time="2025-06-20T19:24:38.379197924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" id:\"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" pid:5144 exit_status:137 exited_at:{seconds:1750447478 nanos:378957170}" Jun 20 19:24:38.385089 systemd[1]: Started cri-containerd-a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980.scope - libcontainer container a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980. 
Jun 20 19:24:38.423104 containerd[1619]: time="2025-06-20T19:24:38.423080600Z" level=info msg="shim disconnected" id=ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3 namespace=k8s.io Jun 20 19:24:38.423216 containerd[1619]: time="2025-06-20T19:24:38.423206897Z" level=warning msg="cleaning up after shim disconnected" id=ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3 namespace=k8s.io Jun 20 19:24:38.428453 containerd[1619]: time="2025-06-20T19:24:38.423250731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:24:38.438623 systemd-resolved[1480]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:38.477422 containerd[1619]: time="2025-06-20T19:24:38.476956511Z" level=info msg="received exit event sandbox_id:\"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" exit_status:137 exited_at:{seconds:1750447478 nanos:378957170}" Jun 20 19:24:38.544757 containerd[1619]: time="2025-06-20T19:24:38.544729728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f4b499c8-mbft2,Uid:f482f811-7af4-4d4f-8fe9-be05e12dca20,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980\"" Jun 20 19:24:38.587459 containerd[1619]: time="2025-06-20T19:24:38.587348835Z" level=info msg="CreateContainer within sandbox \"a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:24:38.600651 containerd[1619]: time="2025-06-20T19:24:38.600622508Z" level=info msg="Container b661b8d8ebb11ccda9f48c32a0ca3935340ce22b458a17dcff65d2e05ff76cb6: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:38.605125 containerd[1619]: time="2025-06-20T19:24:38.605097674Z" level=info msg="CreateContainer within sandbox \"a4240094175d2e2fbc8ad3775bbe0600731ede1ef50541a86644d0d49d3a4980\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b661b8d8ebb11ccda9f48c32a0ca3935340ce22b458a17dcff65d2e05ff76cb6\"" Jun 20 19:24:38.605630 containerd[1619]: time="2025-06-20T19:24:38.605607692Z" level=info msg="StartContainer for \"b661b8d8ebb11ccda9f48c32a0ca3935340ce22b458a17dcff65d2e05ff76cb6\"" Jun 20 19:24:38.606450 containerd[1619]: time="2025-06-20T19:24:38.606431126Z" level=info msg="connecting to shim b661b8d8ebb11ccda9f48c32a0ca3935340ce22b458a17dcff65d2e05ff76cb6" address="unix:///run/containerd/s/3a96ed9181a8a8f1d4c87ba5b142571f95872d6205fc7c8b0a937be1e557cb06" protocol=ttrpc version=3 Jun 20 19:24:38.628858 systemd[1]: Started cri-containerd-b661b8d8ebb11ccda9f48c32a0ca3935340ce22b458a17dcff65d2e05ff76cb6.scope - libcontainer container b661b8d8ebb11ccda9f48c32a0ca3935340ce22b458a17dcff65d2e05ff76cb6. Jun 20 19:24:38.689780 containerd[1619]: time="2025-06-20T19:24:38.689754316Z" level=info msg="StartContainer for \"b661b8d8ebb11ccda9f48c32a0ca3935340ce22b458a17dcff65d2e05ff76cb6\" returns successfully" Jun 20 19:24:38.813933 systemd-networkd[1533]: cali72cc1832895: Link DOWN Jun 20 19:24:38.813938 systemd-networkd[1533]: cali72cc1832895: Lost carrier Jun 20 19:24:38.961108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3-rootfs.mount: Deactivated successfully. Jun 20 19:24:38.961193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3-shm.mount: Deactivated successfully. 
Jun 20 19:24:38.966847 kubelet[2909]: I0620 19:24:38.966717 2909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Jun 20 19:24:38.970774 kubelet[2909]: I0620 19:24:38.969410 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-54f4b499c8-mbft2" podStartSLOduration=3.969395071 podStartE2EDuration="3.969395071s" podCreationTimestamp="2025-06-20 19:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:24:38.968351492 +0000 UTC m=+65.396590168" watchObservedRunningTime="2025-06-20 19:24:38.969395071 +0000 UTC m=+65.397633747" Jun 20 19:24:38.980696 kubelet[2909]: I0620 19:24:38.980639 2909 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zbzl5" podStartSLOduration=28.594334706 podStartE2EDuration="50.980625595s" podCreationTimestamp="2025-06-20 19:23:48 +0000 UTC" firstStartedPulling="2025-06-20 19:24:14.980824794 +0000 UTC m=+41.409063460" lastFinishedPulling="2025-06-20 19:24:37.367115682 +0000 UTC m=+63.795354349" observedRunningTime="2025-06-20 19:24:38.980458967 +0000 UTC m=+65.408697643" watchObservedRunningTime="2025-06-20 19:24:38.980625595 +0000 UTC m=+65.408864265" Jun 20 19:24:39.182772 systemd-networkd[1533]: cali2ff9dd1659f: Gained IPv6LL Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:38.809 [INFO][5703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:38.812 [INFO][5703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" iface="eth0" netns="/var/run/netns/cni-74f7710d-2eda-78eb-884d-d6a246cb8f27" Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:38.813 [INFO][5703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" iface="eth0" netns="/var/run/netns/cni-74f7710d-2eda-78eb-884d-d6a246cb8f27" Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:38.817 [INFO][5703] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" after=4.754785ms iface="eth0" netns="/var/run/netns/cni-74f7710d-2eda-78eb-884d-d6a246cb8f27" Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:38.817 [INFO][5703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:38.817 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:39.261 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0" Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:39.269 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:39.270 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:39.411 [INFO][5740] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:39.411 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:39.413 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:24:39.418075 containerd[1619]: 2025-06-20 19:24:39.415 [INFO][5703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:24:39.420984 systemd[1]: run-netns-cni\x2d74f7710d\x2d2eda\x2d78eb\x2d884d\x2dd6a246cb8f27.mount: Deactivated successfully.
Jun 20 19:24:39.426664 containerd[1619]: time="2025-06-20T19:24:39.425972026Z" level=info msg="TearDown network for sandbox \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" successfully"
Jun 20 19:24:39.426664 containerd[1619]: time="2025-06-20T19:24:39.426011476Z" level=info msg="StopPodSandbox for \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" returns successfully"
Jun 20 19:24:39.562104 kubelet[2909]: I0620 19:24:39.553900 2909 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jun 20 19:24:39.567935 kubelet[2909]: I0620 19:24:39.567912 2909 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jun 20 19:24:39.899067 kubelet[2909]: I0620 19:24:39.898713 2909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b168f456-8c10-4805-a1f9-c9eafb698c4f-calico-apiserver-certs\") pod \"b168f456-8c10-4805-a1f9-c9eafb698c4f\" (UID: \"b168f456-8c10-4805-a1f9-c9eafb698c4f\") "
Jun 20 19:24:39.899067 kubelet[2909]: I0620 19:24:39.898790 2909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmrxk\" (UniqueName: \"kubernetes.io/projected/b168f456-8c10-4805-a1f9-c9eafb698c4f-kube-api-access-mmrxk\") pod \"b168f456-8c10-4805-a1f9-c9eafb698c4f\" (UID: \"b168f456-8c10-4805-a1f9-c9eafb698c4f\") "
Jun 20 19:24:39.957049 systemd[1]: var-lib-kubelet-pods-b168f456\x2d8c10\x2d4805\x2da1f9\x2dc9eafb698c4f-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jun 20 19:24:39.957122 systemd[1]: var-lib-kubelet-pods-b168f456\x2d8c10\x2d4805\x2da1f9\x2dc9eafb698c4f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmrxk.mount: Deactivated successfully.
Jun 20 19:24:40.002459 kubelet[2909]: I0620 19:24:39.989084 2909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b168f456-8c10-4805-a1f9-c9eafb698c4f-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "b168f456-8c10-4805-a1f9-c9eafb698c4f" (UID: "b168f456-8c10-4805-a1f9-c9eafb698c4f"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 20 19:24:40.002459 kubelet[2909]: I0620 19:24:39.990068 2909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b168f456-8c10-4805-a1f9-c9eafb698c4f-kube-api-access-mmrxk" (OuterVolumeSpecName: "kube-api-access-mmrxk") pod "b168f456-8c10-4805-a1f9-c9eafb698c4f" (UID: "b168f456-8c10-4805-a1f9-c9eafb698c4f"). InnerVolumeSpecName "kube-api-access-mmrxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 20 19:24:40.005206 kubelet[2909]: I0620 19:24:40.005191 2909 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b168f456-8c10-4805-a1f9-c9eafb698c4f-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jun 20 19:24:40.029131 systemd[1]: Removed slice kubepods-besteffort-podb168f456_8c10_4805_a1f9_c9eafb698c4f.slice - libcontainer container kubepods-besteffort-podb168f456_8c10_4805_a1f9_c9eafb698c4f.slice.
Jun 20 19:24:40.029797 systemd[1]: kubepods-besteffort-podb168f456_8c10_4805_a1f9_c9eafb698c4f.slice: Consumed 603ms CPU time, 50.3M memory peak, 1.3M read from disk.
Jun 20 19:24:40.106329 kubelet[2909]: I0620 19:24:40.106142 2909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmrxk\" (UniqueName: \"kubernetes.io/projected/b168f456-8c10-4805-a1f9-c9eafb698c4f-kube-api-access-mmrxk\") on node \"localhost\" DevicePath \"\""
Jun 20 19:24:42.023179 kubelet[2909]: I0620 19:24:42.020463 2909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b168f456-8c10-4805-a1f9-c9eafb698c4f" path="/var/lib/kubelet/pods/b168f456-8c10-4805-a1f9-c9eafb698c4f/volumes"
Jun 20 19:24:42.058969 containerd[1619]: time="2025-06-20T19:24:42.058929015Z" level=info msg="StopContainer for \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" with timeout 30 (s)"
Jun 20 19:24:42.193784 containerd[1619]: time="2025-06-20T19:24:42.193754940Z" level=info msg="Stop container \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" with signal terminated"
Jun 20 19:24:42.340285 systemd[1]: cri-containerd-4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a.scope: Deactivated successfully.
Jun 20 19:24:42.340530 systemd[1]: cri-containerd-4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a.scope: Consumed 680ms CPU time, 60M memory peak, 1.3M read from disk.
Jun 20 19:24:42.363860 containerd[1619]: time="2025-06-20T19:24:42.363794564Z" level=info msg="received exit event container_id:\"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" id:\"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" pid:5309 exit_status:1 exited_at:{seconds:1750447482 nanos:339282852}"
Jun 20 19:24:42.365393 containerd[1619]: time="2025-06-20T19:24:42.364181276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" id:\"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" pid:5309 exit_status:1 exited_at:{seconds:1750447482 nanos:339282852}"
Jun 20 19:24:42.400157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a-rootfs.mount: Deactivated successfully.
Jun 20 19:24:42.429959 containerd[1619]: time="2025-06-20T19:24:42.429927548Z" level=info msg="StopContainer for \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" returns successfully"
Jun 20 19:24:42.442627 containerd[1619]: time="2025-06-20T19:24:42.442599398Z" level=info msg="StopPodSandbox for \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\""
Jun 20 19:24:42.442882 containerd[1619]: time="2025-06-20T19:24:42.442728126Z" level=info msg="Container to stop \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:24:42.449050 systemd[1]: cri-containerd-eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf.scope: Deactivated successfully.
Jun 20 19:24:42.450573 containerd[1619]: time="2025-06-20T19:24:42.450445632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" id:\"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" pid:4877 exit_status:137 exited_at:{seconds:1750447482 nanos:450246356}"
Jun 20 19:24:42.474101 containerd[1619]: time="2025-06-20T19:24:42.474073493Z" level=info msg="shim disconnected" id=eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf namespace=k8s.io
Jun 20 19:24:42.474101 containerd[1619]: time="2025-06-20T19:24:42.474095885Z" level=warning msg="cleaning up after shim disconnected" id=eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf namespace=k8s.io
Jun 20 19:24:42.475093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf-rootfs.mount: Deactivated successfully.
Jun 20 19:24:42.477383 containerd[1619]: time="2025-06-20T19:24:42.474101371Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:24:42.504429 containerd[1619]: time="2025-06-20T19:24:42.504227633Z" level=info msg="received exit event sandbox_id:\"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" exit_status:137 exited_at:{seconds:1750447482 nanos:450246356}"
Jun 20 19:24:42.509372 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf-shm.mount: Deactivated successfully.
Jun 20 19:24:42.556428 kubelet[2909]: I0620 19:24:42.556371 2909 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:24:42.806882 systemd-networkd[1533]: cali4620a6951ff: Link DOWN
Jun 20 19:24:42.806888 systemd-networkd[1533]: cali4620a6951ff: Lost carrier
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:42.796 [INFO][5836] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:42.800 [INFO][5836] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" iface="eth0" netns="/var/run/netns/cni-b14743bf-3810-dd18-bfdf-7c1b8acc9594"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:42.800 [INFO][5836] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" iface="eth0" netns="/var/run/netns/cni-b14743bf-3810-dd18-bfdf-7c1b8acc9594"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:42.809 [INFO][5836] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" after=9.256243ms iface="eth0" netns="/var/run/netns/cni-b14743bf-3810-dd18-bfdf-7c1b8acc9594"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:42.809 [INFO][5836] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:42.810 [INFO][5836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:43.154 [INFO][5844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:43.158 [INFO][5844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:43.160 [INFO][5844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:43.211 [INFO][5844] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:43.211 [INFO][5844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:43.212 [INFO][5844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:24:43.216803 containerd[1619]: 2025-06-20 19:24:43.214 [INFO][5836] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:24:43.220343 containerd[1619]: time="2025-06-20T19:24:43.218752814Z" level=info msg="TearDown network for sandbox \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" successfully"
Jun 20 19:24:43.220343 containerd[1619]: time="2025-06-20T19:24:43.218779899Z" level=info msg="StopPodSandbox for \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" returns successfully"
Jun 20 19:24:43.220298 systemd[1]: run-netns-cni\x2db14743bf\x2d3810\x2ddd18\x2dbfdf\x2d7c1b8acc9594.mount: Deactivated successfully.
Jun 20 19:24:43.394124 kubelet[2909]: I0620 19:24:43.394096 2909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7bg9\" (UniqueName: \"kubernetes.io/projected/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-kube-api-access-d7bg9\") pod \"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8\" (UID: \"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8\") "
Jun 20 19:24:43.394396 kubelet[2909]: I0620 19:24:43.394146 2909 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-calico-apiserver-certs\") pod \"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8\" (UID: \"2e0b2d2c-4387-4f0a-acd2-67ee85d925c8\") "
Jun 20 19:24:43.433501 systemd[1]: var-lib-kubelet-pods-2e0b2d2c\x2d4387\x2d4f0a\x2dacd2\x2d67ee85d925c8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jun 20 19:24:43.433563 systemd[1]: var-lib-kubelet-pods-2e0b2d2c\x2d4387\x2d4f0a\x2dacd2\x2d67ee85d925c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd7bg9.mount: Deactivated successfully.
Jun 20 19:24:43.442169 kubelet[2909]: I0620 19:24:43.442141 2909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-kube-api-access-d7bg9" (OuterVolumeSpecName: "kube-api-access-d7bg9") pod "2e0b2d2c-4387-4f0a-acd2-67ee85d925c8" (UID: "2e0b2d2c-4387-4f0a-acd2-67ee85d925c8"). InnerVolumeSpecName "kube-api-access-d7bg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 20 19:24:43.442480 kubelet[2909]: I0620 19:24:43.442412 2909 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2e0b2d2c-4387-4f0a-acd2-67ee85d925c8" (UID: "2e0b2d2c-4387-4f0a-acd2-67ee85d925c8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 20 19:24:43.494852 kubelet[2909]: I0620 19:24:43.494693 2909 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jun 20 19:24:43.494968 kubelet[2909]: I0620 19:24:43.494954 2909 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7bg9\" (UniqueName: \"kubernetes.io/projected/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8-kube-api-access-d7bg9\") on node \"localhost\" DevicePath \"\""
Jun 20 19:24:43.603982 systemd[1]: Removed slice kubepods-besteffort-pod2e0b2d2c_4387_4f0a_acd2_67ee85d925c8.slice - libcontainer container kubepods-besteffort-pod2e0b2d2c_4387_4f0a_acd2_67ee85d925c8.slice.
Jun 20 19:24:43.604415 systemd[1]: kubepods-besteffort-pod2e0b2d2c_4387_4f0a_acd2_67ee85d925c8.slice: Consumed 705ms CPU time, 60.7M memory peak, 1.3M read from disk.
Jun 20 19:24:43.674829 kubelet[2909]: I0620 19:24:43.674802 2909 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0b2d2c-4387-4f0a-acd2-67ee85d925c8" path="/var/lib/kubelet/pods/2e0b2d2c-4387-4f0a-acd2-67ee85d925c8/volumes"
Jun 20 19:24:45.983790 containerd[1619]: time="2025-06-20T19:24:45.983757032Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\" id:\"6dfa9c8e7463df0deec5aeac0f8f23a24c430d537de949095c1370bde97ad0f5\" pid:5887 exited_at:{seconds:1750447485 nanos:983369792}"
Jun 20 19:24:49.831629 containerd[1619]: time="2025-06-20T19:24:49.831458676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"82a74263f1a6c4362b43baa2c4f814bd3d21b5550ba4fd60177090f0c6c5031b\" pid:5911 exited_at:{seconds:1750447489 nanos:830910301}"
Jun 20 19:25:02.031600 systemd[1]: Started sshd@7-139.178.70.108:22-147.75.109.163:54064.service - OpenSSH per-connection server daemon (147.75.109.163:54064).
Jun 20 19:25:02.264118 sshd[5931]: Accepted publickey for core from 147.75.109.163 port 54064 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:02.271297 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:02.280447 systemd-logind[1576]: New session 10 of user core.
Jun 20 19:25:02.286770 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 19:25:02.953146 sshd[5937]: Connection closed by 147.75.109.163 port 54064
Jun 20 19:25:02.952613 sshd-session[5931]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:02.959203 systemd[1]: sshd@7-139.178.70.108:22-147.75.109.163:54064.service: Deactivated successfully.
Jun 20 19:25:02.960943 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 19:25:02.962046 systemd-logind[1576]: Session 10 logged out. Waiting for processes to exit.
Jun 20 19:25:02.966835 systemd-logind[1576]: Removed session 10.
Jun 20 19:25:06.069639 containerd[1619]: time="2025-06-20T19:25:06.063167335Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\" id:\"028ca2ed415a1ff7c9e57f5b121365e91f430d368f116bca8afed3aa370ec257\" pid:5969 exited_at:{seconds:1750447506 nanos:24514138}"
Jun 20 19:25:06.127047 systemd[1]: Started sshd@8-139.178.70.108:22-20.168.120.250:55584.service - OpenSSH per-connection server daemon (20.168.120.250:55584).
Jun 20 19:25:06.266792 sshd[5981]: banner exchange: Connection from 20.168.120.250 port 55584: invalid format
Jun 20 19:25:06.267441 systemd[1]: sshd@8-139.178.70.108:22-20.168.120.250:55584.service: Deactivated successfully.
Jun 20 19:25:07.971270 systemd[1]: Started sshd@9-139.178.70.108:22-147.75.109.163:42806.service - OpenSSH per-connection server daemon (147.75.109.163:42806).
Jun 20 19:25:08.028857 sshd[5988]: Accepted publickey for core from 147.75.109.163 port 42806 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:08.030924 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:08.034317 systemd-logind[1576]: New session 11 of user core.
Jun 20 19:25:08.038750 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:25:08.416177 sshd[5990]: Connection closed by 147.75.109.163 port 42806
Jun 20 19:25:08.416594 sshd-session[5988]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:08.419046 systemd[1]: sshd@9-139.178.70.108:22-147.75.109.163:42806.service: Deactivated successfully.
Jun 20 19:25:08.420690 systemd[1]: session-11.scope: Deactivated successfully.
Jun 20 19:25:08.421365 systemd-logind[1576]: Session 11 logged out. Waiting for processes to exit.
Jun 20 19:25:08.422788 systemd-logind[1576]: Removed session 11.
Jun 20 19:25:13.427130 systemd[1]: Started sshd@10-139.178.70.108:22-147.75.109.163:42814.service - OpenSSH per-connection server daemon (147.75.109.163:42814).
Jun 20 19:25:13.881990 sshd[6004]: Accepted publickey for core from 147.75.109.163 port 42814 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:13.901233 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:13.914366 systemd-logind[1576]: New session 12 of user core.
Jun 20 19:25:13.921810 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:25:14.433108 sshd[6006]: Connection closed by 147.75.109.163 port 42814
Jun 20 19:25:14.439960 systemd[1]: Started sshd@11-139.178.70.108:22-147.75.109.163:42828.service - OpenSSH per-connection server daemon (147.75.109.163:42828).
Jun 20 19:25:14.481737 sshd-session[6004]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:14.508398 systemd[1]: sshd@10-139.178.70.108:22-147.75.109.163:42814.service: Deactivated successfully.
Jun 20 19:25:14.509904 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:25:14.511317 systemd-logind[1576]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:25:14.512292 systemd-logind[1576]: Removed session 12.
Jun 20 19:25:14.601248 sshd[6016]: Accepted publickey for core from 147.75.109.163 port 42828 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:14.602430 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:14.605758 systemd-logind[1576]: New session 13 of user core.
Jun 20 19:25:14.612782 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:25:14.738798 sshd[6021]: Connection closed by 147.75.109.163 port 42828
Jun 20 19:25:14.740781 sshd-session[6016]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:14.749006 systemd[1]: sshd@11-139.178.70.108:22-147.75.109.163:42828.service: Deactivated successfully.
Jun 20 19:25:14.751390 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:25:14.752536 systemd-logind[1576]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:25:14.757319 systemd[1]: Started sshd@12-139.178.70.108:22-147.75.109.163:42834.service - OpenSSH per-connection server daemon (147.75.109.163:42834).
Jun 20 19:25:14.759568 systemd-logind[1576]: Removed session 13.
Jun 20 19:25:14.806623 sshd[6030]: Accepted publickey for core from 147.75.109.163 port 42834 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:14.808481 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:14.816315 systemd-logind[1576]: New session 14 of user core.
Jun 20 19:25:14.822963 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:25:14.932511 sshd[6032]: Connection closed by 147.75.109.163 port 42834
Jun 20 19:25:14.933834 sshd-session[6030]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:14.938131 systemd[1]: sshd@12-139.178.70.108:22-147.75.109.163:42834.service: Deactivated successfully.
Jun 20 19:25:14.941401 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:25:14.943992 systemd-logind[1576]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:25:14.946776 systemd-logind[1576]: Removed session 14.
Jun 20 19:25:15.817083 containerd[1619]: time="2025-06-20T19:25:15.817035944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\" id:\"66061750f802e2132061624ae32f9ce80fd67593912380670cb4a75fd89dc892\" pid:6055 exited_at:{seconds:1750447515 nanos:816199658}"
Jun 20 19:25:19.944008 systemd[1]: Started sshd@13-139.178.70.108:22-147.75.109.163:52456.service - OpenSSH per-connection server daemon (147.75.109.163:52456).
Jun 20 19:25:20.571385 sshd[6086]: Accepted publickey for core from 147.75.109.163 port 52456 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:20.624512 sshd-session[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:20.646669 systemd-logind[1576]: New session 15 of user core.
Jun 20 19:25:20.651817 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:25:22.173970 containerd[1619]: time="2025-06-20T19:25:22.173898198Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"331c5c5cbf6e5f67361866f27d806e1962e42bb16ea40b431e11b9f3a2cb5695\" pid:6076 exited_at:{seconds:1750447522 nanos:173059208}"
Jun 20 19:25:22.302998 sshd[6088]: Connection closed by 147.75.109.163 port 52456
Jun 20 19:25:22.304975 sshd-session[6086]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:22.313315 systemd-logind[1576]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:25:22.313442 systemd[1]: sshd@13-139.178.70.108:22-147.75.109.163:52456.service: Deactivated successfully.
Jun 20 19:25:22.318235 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:25:22.320261 systemd-logind[1576]: Removed session 15.
Jun 20 19:25:24.638041 containerd[1619]: time="2025-06-20T19:25:24.638014296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"a0251d8737a36f99f7207df302920fa5626c7077ac07d6cfcf2ba9d2bfeae53c\" pid:6129 exited_at:{seconds:1750447524 nanos:626874979}"
Jun 20 19:25:27.320480 systemd[1]: Started sshd@14-139.178.70.108:22-147.75.109.163:56474.service - OpenSSH per-connection server daemon (147.75.109.163:56474).
Jun 20 19:25:27.512303 sshd[6140]: Accepted publickey for core from 147.75.109.163 port 56474 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:27.513545 sshd-session[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:27.518350 systemd-logind[1576]: New session 16 of user core.
Jun 20 19:25:27.525862 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:25:28.180383 sshd[6142]: Connection closed by 147.75.109.163 port 56474
Jun 20 19:25:28.192454 sshd-session[6140]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:28.201229 systemd[1]: sshd@14-139.178.70.108:22-147.75.109.163:56474.service: Deactivated successfully.
Jun 20 19:25:28.202705 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:25:28.204196 systemd-logind[1576]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:25:28.206591 systemd-logind[1576]: Removed session 16.
Jun 20 19:25:33.194337 systemd[1]: Started sshd@15-139.178.70.108:22-147.75.109.163:56482.service - OpenSSH per-connection server daemon (147.75.109.163:56482).
Jun 20 19:25:33.313975 sshd[6161]: Accepted publickey for core from 147.75.109.163 port 56482 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:33.316677 sshd-session[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:33.322936 systemd-logind[1576]: New session 17 of user core.
Jun 20 19:25:33.326757 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:25:33.728285 sshd[6164]: Connection closed by 147.75.109.163 port 56482
Jun 20 19:25:33.732117 systemd[1]: sshd@15-139.178.70.108:22-147.75.109.163:56482.service: Deactivated successfully.
Jun 20 19:25:33.728529 sshd-session[6161]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:33.736346 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:25:33.738368 systemd-logind[1576]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:25:33.740337 systemd-logind[1576]: Removed session 17.
Jun 20 19:25:35.251224 kubelet[2909]: I0620 19:25:35.241250 2909 scope.go:117] "RemoveContainer" containerID="4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a"
Jun 20 19:25:36.040559 containerd[1619]: time="2025-06-20T19:25:36.040519758Z" level=info msg="RemoveContainer for \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\""
Jun 20 19:25:36.340672 containerd[1619]: time="2025-06-20T19:25:36.340626346Z" level=info msg="RemoveContainer for \"4a04728d0a05918d5d8ae089b61ab1c97f7c5941a762d4eea9f9c8424ae6a98a\" returns successfully"
Jun 20 19:25:36.544793 kubelet[2909]: I0620 19:25:36.542622 2909 scope.go:117] "RemoveContainer" containerID="de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80"
Jun 20 19:25:36.551620 containerd[1619]: time="2025-06-20T19:25:36.551577186Z" level=info msg="RemoveContainer for \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\""
Jun 20 19:25:36.557060 containerd[1619]: time="2025-06-20T19:25:36.556844274Z" level=info msg="RemoveContainer for \"de9113f23710efb86c1fea03661b6df26537c01fa869462d6f199886e0d25f80\" returns successfully"
Jun 20 19:25:36.568026 containerd[1619]: time="2025-06-20T19:25:36.567995425Z" level=info msg="StopPodSandbox for \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\""
Jun 20 19:25:37.386464 containerd[1619]: time="2025-06-20T19:25:37.385590038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\" id:\"b5cc6fffa0225b57f77988bea35a5e702b9dfc063d6bd151344c322a4e9fedcf\" pid:6226 exited_at:{seconds:1750447537 nanos:363390576}"
Jun 20 19:25:37.668774 containerd[1619]: time="2025-06-20T19:25:37.668422234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\" id:\"8e4c4e49ac72dcbadc4d6ea4855b04b76cdb1836a26c2198fb7a4b31620e6bab\" pid:6191 exited_at:{seconds:1750447537 nanos:668031703}"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:37.501 [WARNING][6209] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:37.505 [INFO][6209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:37.505 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" iface="eth0" netns=""
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:37.505 [INFO][6209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:37.505 [INFO][6209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:38.069 [INFO][6239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:38.073 [INFO][6239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:38.074 [INFO][6239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:38.089 [WARNING][6239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:38.089 [INFO][6239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:38.092 [INFO][6239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:25:38.106675 containerd[1619]: 2025-06-20 19:25:38.095 [INFO][6209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.149700 containerd[1619]: time="2025-06-20T19:25:38.111823813Z" level=info msg="TearDown network for sandbox \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" successfully"
Jun 20 19:25:38.149700 containerd[1619]: time="2025-06-20T19:25:38.111857867Z" level=info msg="StopPodSandbox for \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" returns successfully"
Jun 20 19:25:38.238049 containerd[1619]: time="2025-06-20T19:25:38.238015192Z" level=info msg="RemovePodSandbox for \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\""
Jun 20 19:25:38.238049 containerd[1619]: time="2025-06-20T19:25:38.238054726Z" level=info msg="Forcibly stopping sandbox \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\""
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.406 [WARNING][6263] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.406 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.406 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" iface="eth0" netns=""
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.406 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.406 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.438 [INFO][6270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.438 [INFO][6270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.438 [INFO][6270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.442 [WARNING][6270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.442 [INFO][6270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" HandleID="k8s-pod-network.eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf" Workload="localhost-k8s-calico--apiserver--57c64d78d5--28z6n-eth0"
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.443 [INFO][6270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:25:38.448389 containerd[1619]: 2025-06-20 19:25:38.446 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf"
Jun 20 19:25:38.457451 containerd[1619]: time="2025-06-20T19:25:38.448648315Z" level=info msg="TearDown network for sandbox \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" successfully"
Jun 20 19:25:38.457451 containerd[1619]: time="2025-06-20T19:25:38.454916947Z" level=info msg="Ensure that sandbox eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf in task-service has been cleanup successfully"
Jun 20 19:25:38.467006 containerd[1619]: time="2025-06-20T19:25:38.466904839Z" level=info msg="RemovePodSandbox \"eeecc893980b6404f0632b6e64f7b4499d82f223a8d169cb46d103935460e0cf\" returns successfully"
Jun 20 19:25:38.467390 containerd[1619]: time="2025-06-20T19:25:38.467368596Z" level=info msg="StopPodSandbox for \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\""
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.497 [WARNING][6284] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.497 [INFO][6284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.497 [INFO][6284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" iface="eth0" netns=""
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.497 [INFO][6284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.497 [INFO][6284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.516 [INFO][6291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.516 [INFO][6291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.516 [INFO][6291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.522 [WARNING][6291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.522 [INFO][6291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.523 [INFO][6291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:25:38.526199 containerd[1619]: 2025-06-20 19:25:38.524 [INFO][6284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.526592 containerd[1619]: time="2025-06-20T19:25:38.526503222Z" level=info msg="TearDown network for sandbox \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" successfully"
Jun 20 19:25:38.526592 containerd[1619]: time="2025-06-20T19:25:38.526519698Z" level=info msg="StopPodSandbox for \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" returns successfully"
Jun 20 19:25:38.533111 containerd[1619]: time="2025-06-20T19:25:38.533085305Z" level=info msg="RemovePodSandbox for \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\""
Jun 20 19:25:38.540177 containerd[1619]: time="2025-06-20T19:25:38.533115745Z" level=info msg="Forcibly stopping sandbox \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\""
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.569 [WARNING][6305] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" WorkloadEndpoint="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.569 [INFO][6305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.569 [INFO][6305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" iface="eth0" netns=""
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.569 [INFO][6305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.569 [INFO][6305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.584 [INFO][6312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.585 [INFO][6312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.585 [INFO][6312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.589 [WARNING][6312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.589 [INFO][6312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" HandleID="k8s-pod-network.ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3" Workload="localhost-k8s-calico--apiserver--57c64d78d5--dz2fv-eth0"
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.590 [INFO][6312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jun 20 19:25:38.593945 containerd[1619]: 2025-06-20 19:25:38.592 [INFO][6305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3"
Jun 20 19:25:38.599719 containerd[1619]: time="2025-06-20T19:25:38.593977144Z" level=info msg="TearDown network for sandbox \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" successfully"
Jun 20 19:25:38.599719 containerd[1619]: time="2025-06-20T19:25:38.597409664Z" level=info msg="Ensure that sandbox ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3 in task-service has been cleanup successfully"
Jun 20 19:25:38.600379 containerd[1619]: time="2025-06-20T19:25:38.600344213Z" level=info msg="RemovePodSandbox \"ddc607951b2fe3c139eedb1a5ecfa569322de577f28c965e0f98bb6db60dabe3\" returns successfully"
Jun 20 19:25:38.775070 systemd[1]: Started sshd@16-139.178.70.108:22-147.75.109.163:49382.service - OpenSSH per-connection server daemon (147.75.109.163:49382).
Jun 20 19:25:39.025367 sshd[6319]: Accepted publickey for core from 147.75.109.163 port 49382 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:39.028438 sshd-session[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:39.032855 systemd-logind[1576]: New session 18 of user core.
Jun 20 19:25:39.042753 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:25:40.201975 sshd[6321]: Connection closed by 147.75.109.163 port 49382
Jun 20 19:25:40.202767 sshd-session[6319]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:40.219328 systemd[1]: Started sshd@17-139.178.70.108:22-147.75.109.163:49390.service - OpenSSH per-connection server daemon (147.75.109.163:49390).
Jun 20 19:25:40.219990 systemd[1]: sshd@16-139.178.70.108:22-147.75.109.163:49382.service: Deactivated successfully.
Jun 20 19:25:40.224341 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:25:40.232704 systemd-logind[1576]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:25:40.234186 systemd-logind[1576]: Removed session 18.
Jun 20 19:25:40.382176 sshd[6330]: Accepted publickey for core from 147.75.109.163 port 49390 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:40.383266 sshd-session[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:40.386985 systemd-logind[1576]: New session 19 of user core.
Jun 20 19:25:40.394828 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:25:41.268632 sshd[6335]: Connection closed by 147.75.109.163 port 49390
Jun 20 19:25:41.280549 systemd[1]: Started sshd@18-139.178.70.108:22-147.75.109.163:49396.service - OpenSSH per-connection server daemon (147.75.109.163:49396).
Jun 20 19:25:41.318837 sshd-session[6330]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:41.341066 systemd[1]: sshd@17-139.178.70.108:22-147.75.109.163:49390.service: Deactivated successfully.
Jun 20 19:25:41.344117 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:25:41.345819 systemd-logind[1576]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:25:41.346840 systemd-logind[1576]: Removed session 19.
Jun 20 19:25:41.993851 sshd[6341]: Accepted publickey for core from 147.75.109.163 port 49396 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:42.000503 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:42.008709 systemd-logind[1576]: New session 20 of user core.
Jun 20 19:25:42.012801 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:25:47.076554 sshd[6346]: Connection closed by 147.75.109.163 port 49396
Jun 20 19:25:47.200853 systemd[1]: Started sshd@19-139.178.70.108:22-147.75.109.163:42140.service - OpenSSH per-connection server daemon (147.75.109.163:42140).
Jun 20 19:25:47.085081 sshd-session[6341]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:47.209900 systemd[1]: sshd@18-139.178.70.108:22-147.75.109.163:49396.service: Deactivated successfully.
Jun 20 19:25:47.211515 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:25:47.211651 systemd[1]: session-20.scope: Consumed 424ms CPU time, 77.7M memory peak.
Jun 20 19:25:47.213597 systemd-logind[1576]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:25:47.217205 systemd-logind[1576]: Removed session 20.
Jun 20 19:25:47.406987 containerd[1619]: time="2025-06-20T19:25:47.406924526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f45d2f50b4ff4fcea6d83fd47a710976c9b7317c1c1419e16f4d90f2803ac03\" id:\"764c62a6d11eeb4fda97f0b2370ff48f940556f4cf13aafd06c92cf539aa2281\" pid:6392 exited_at:{seconds:1750447547 nanos:360862028}"
Jun 20 19:25:47.651223 sshd[6390]: Accepted publickey for core from 147.75.109.163 port 42140 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:47.682796 sshd-session[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:47.722025 systemd-logind[1576]: New session 21 of user core.
Jun 20 19:25:47.728779 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:25:47.994945 kubelet[2909]: E0620 19:25:47.771391 2909 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.486s"
Jun 20 19:25:51.236610 containerd[1619]: time="2025-06-20T19:25:51.235086227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6519052d887fb5790a210931f627c1324ddcf4e64eb6707f7803af69c1f3c8d\" id:\"c703aee3a20fb715b77755bd7f10506ff3913f0a8d0f7b5e0d59c50720366e58\" pid:6427 exited_at:{seconds:1750447551 nanos:120037009}"
Jun 20 19:25:52.166040 sshd[6410]: Connection closed by 147.75.109.163 port 42140
Jun 20 19:25:52.195597 sshd-session[6390]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:52.257369 systemd[1]: sshd@19-139.178.70.108:22-147.75.109.163:42140.service: Deactivated successfully.
Jun 20 19:25:52.259438 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:25:52.259601 systemd[1]: session-21.scope: Consumed 806ms CPU time, 65.7M memory peak.
Jun 20 19:25:52.260031 systemd-logind[1576]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:25:52.268854 systemd[1]: Started sshd@20-139.178.70.108:22-147.75.109.163:42144.service - OpenSSH per-connection server daemon (147.75.109.163:42144).
Jun 20 19:25:52.270622 systemd-logind[1576]: Removed session 21.
Jun 20 19:25:52.461770 sshd[6442]: Accepted publickey for core from 147.75.109.163 port 42144 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:52.466602 sshd-session[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:52.481390 systemd-logind[1576]: New session 22 of user core.
Jun 20 19:25:52.488822 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:25:53.435393 sshd[6445]: Connection closed by 147.75.109.163 port 42144
Jun 20 19:25:53.437706 sshd-session[6442]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:53.441404 systemd[1]: sshd@20-139.178.70.108:22-147.75.109.163:42144.service: Deactivated successfully.
Jun 20 19:25:53.446905 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:25:53.448493 systemd-logind[1576]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:25:53.452938 systemd-logind[1576]: Removed session 22.
Jun 20 19:25:58.449901 systemd[1]: Started sshd@21-139.178.70.108:22-147.75.109.163:47816.service - OpenSSH per-connection server daemon (147.75.109.163:47816).
Jun 20 19:25:58.598723 sshd[6459]: Accepted publickey for core from 147.75.109.163 port 47816 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:25:58.601208 sshd-session[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:25:58.606814 systemd-logind[1576]: New session 23 of user core.
Jun 20 19:25:58.609919 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:25:59.033267 sshd[6461]: Connection closed by 147.75.109.163 port 47816
Jun 20 19:25:59.033169 sshd-session[6459]: pam_unix(sshd:session): session closed for user core
Jun 20 19:25:59.035630 systemd[1]: sshd@21-139.178.70.108:22-147.75.109.163:47816.service: Deactivated successfully.
Jun 20 19:25:59.037052 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:25:59.037560 systemd-logind[1576]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:25:59.039021 systemd-logind[1576]: Removed session 23.
Jun 20 19:26:04.047216 systemd[1]: Started sshd@22-139.178.70.108:22-147.75.109.163:47822.service - OpenSSH per-connection server daemon (147.75.109.163:47822).
Jun 20 19:26:04.160298 sshd[6477]: Accepted publickey for core from 147.75.109.163 port 47822 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ
Jun 20 19:26:04.166318 sshd-session[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:26:04.171666 systemd-logind[1576]: New session 24 of user core.
Jun 20 19:26:04.177783 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:26:05.653018 sshd[6479]: Connection closed by 147.75.109.163 port 47822
Jun 20 19:26:05.657189 systemd[1]: sshd@22-139.178.70.108:22-147.75.109.163:47822.service: Deactivated successfully.
Jun 20 19:26:05.654073 sshd-session[6477]: pam_unix(sshd:session): session closed for user core
Jun 20 19:26:05.657409 systemd-logind[1576]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:26:05.658463 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:26:05.659735 systemd-logind[1576]: Removed session 24.
Jun 20 19:26:07.736436 containerd[1619]: time="2025-06-20T19:26:07.733628736Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7e5149af51caf084bf97c04562e5331d256d9b819504a23a8a27279d4f450f0\" id:\"31e7cd4a5b363f38c12173bbb8e945639d315399fa4c325d567245c69a62ae0f\" pid:6501 exited_at:{seconds:1750447567 nanos:628981959}"