Jun 20 19:21:00.709711 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:21:00.709728 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:21:00.709735 kernel: Disabled fast string operations Jun 20 19:21:00.709739 kernel: BIOS-provided physical RAM map: Jun 20 19:21:00.709743 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jun 20 19:21:00.709757 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jun 20 19:21:00.709764 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jun 20 19:21:00.709768 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jun 20 19:21:00.709772 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jun 20 19:21:00.709776 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jun 20 19:21:00.709781 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jun 20 19:21:00.709785 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jun 20 19:21:00.709789 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jun 20 19:21:00.709793 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jun 20 19:21:00.709800 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jun 20 19:21:00.709804 kernel: NX (Execute Disable) protection: active Jun 20 19:21:00.709809 kernel: APIC: Static calls initialized Jun 20 19:21:00.709814 kernel: SMBIOS 2.7 present. Jun 20 19:21:00.709819 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jun 20 19:21:00.709824 kernel: DMI: Memory slots populated: 1/128 Jun 20 19:21:00.709830 kernel: vmware: hypercall mode: 0x00 Jun 20 19:21:00.709834 kernel: Hypervisor detected: VMware Jun 20 19:21:00.709839 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jun 20 19:21:00.709844 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jun 20 19:21:00.709848 kernel: vmware: using clock offset of 3505403808 ns Jun 20 19:21:00.709853 kernel: tsc: Detected 3408.000 MHz processor Jun 20 19:21:00.709858 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:21:00.709864 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:21:00.709869 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jun 20 19:21:00.709874 kernel: total RAM covered: 3072M Jun 20 19:21:00.709879 kernel: Found optimal setting for mtrr clean up Jun 20 19:21:00.709885 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jun 20 19:21:00.709890 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jun 20 19:21:00.709895 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:21:00.709900 kernel: Using GB pages for direct mapping Jun 20 19:21:00.709905 kernel: ACPI: Early table checksum verification disabled Jun 20 19:21:00.709910 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jun 20 19:21:00.709915 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jun 20 19:21:00.709920 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jun 20 19:21:00.709926 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jun 20 19:21:00.709933 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 20 19:21:00.709938 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 20 19:21:00.709943 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jun 20 19:21:00.709948 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jun 20 19:21:00.709953 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jun 20 19:21:00.709960 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jun 20 19:21:00.709965 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jun 20 19:21:00.709970 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jun 20 19:21:00.709976 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jun 20 19:21:00.709981 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jun 20 19:21:00.709986 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 20 19:21:00.709991 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 20 19:21:00.709996 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jun 20 19:21:00.710001 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jun 20 19:21:00.710007 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jun 20 19:21:00.710012 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jun 20 19:21:00.710017 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jun 20 19:21:00.710022 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jun 20 19:21:00.710027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 20 19:21:00.710032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 20 19:21:00.710038 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jun 20 19:21:00.710043 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Jun 20 19:21:00.710048 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Jun 20 19:21:00.710054 kernel: Zone ranges: Jun 20 19:21:00.710060 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:21:00.710065 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jun 20 19:21:00.710070 kernel: Normal empty Jun 20 19:21:00.710075 kernel: Device empty Jun 20 19:21:00.710080 kernel: Movable zone start for each node Jun 20 19:21:00.710085 kernel: Early memory node ranges Jun 20 19:21:00.710090 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jun 20 19:21:00.710095 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jun 20 19:21:00.710100 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jun 20 19:21:00.710107 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jun 20 19:21:00.710112 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:21:00.710117 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jun 20 19:21:00.710122 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jun 20 19:21:00.710127 kernel: ACPI: PM-Timer IO Port: 0x1008 Jun 20 19:21:00.710132 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jun 20 19:21:00.710137 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jun 20 19:21:00.710142 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jun 20 19:21:00.710147 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jun 20 19:21:00.710153 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jun 20 19:21:00.710158 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jun 20 19:21:00.710163 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Jun 20 19:21:00.710168 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jun 20 19:21:00.710173 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jun 20 19:21:00.710179 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jun 20 19:21:00.710184 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jun 20 19:21:00.710189 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jun 20 19:21:00.710194 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jun 20 19:21:00.710199 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jun 20 19:21:00.710205 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jun 20 19:21:00.710210 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jun 20 19:21:00.710215 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jun 20 19:21:00.710220 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jun 20 19:21:00.710225 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jun 20 19:21:00.710230 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jun 20 19:21:00.710235 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jun 20 19:21:00.710240 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jun 20 19:21:00.710245 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jun 20 19:21:00.710250 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jun 20 19:21:00.710256 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jun 20 19:21:00.710261 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jun 20 19:21:00.710266 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jun 20 19:21:00.710271 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jun 20 19:21:00.710276 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jun 20 19:21:00.710281 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jun 20 19:21:00.710286 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jun 20 19:21:00.710291 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jun 20 19:21:00.710296 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jun 20 19:21:00.710301 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jun 20 19:21:00.710307 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jun 20 19:21:00.710312 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jun 20 19:21:00.710317 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jun 20 19:21:00.710322 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jun 20 19:21:00.710327 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jun 20 19:21:00.710333 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jun 20 19:21:00.710342 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jun 20 19:21:00.710348 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jun 20 19:21:00.710353 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jun 20 19:21:00.710358 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jun 20 19:21:00.710365 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jun 20 19:21:00.710370 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jun 20 19:21:00.710376 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jun 20 19:21:00.710381 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jun 20 19:21:00.710386 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jun 20 19:21:00.710391 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Jun 20 19:21:00.710397 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jun 20 19:21:00.710402 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jun 20 19:21:00.710408 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jun 20 19:21:00.710414 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jun 20 19:21:00.710419 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jun 20 19:21:00.710424 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jun 20 19:21:00.710430 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jun 20 19:21:00.710435 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jun 20 19:21:00.710440 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jun 20 19:21:00.710445 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jun 20 19:21:00.710451 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jun 20 19:21:00.710456 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jun 20 19:21:00.710462 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jun 20 19:21:00.710468 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jun 20 19:21:00.710473 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jun 20 19:21:00.710478 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jun 20 19:21:00.710484 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jun 20 19:21:00.710489 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jun 20 19:21:00.710494 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jun 20 19:21:00.710500 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jun 20 19:21:00.710505 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jun 20 19:21:00.710511 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jun 20 19:21:00.710517 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jun 20 19:21:00.710522 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jun 20 19:21:00.710527 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jun 20 19:21:00.710532 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jun 20 19:21:00.710538 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jun 20 19:21:00.710543 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jun 20 19:21:00.710548 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jun 20 19:21:00.710554 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jun 20 19:21:00.710559 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jun 20 19:21:00.710566 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jun 20 19:21:00.710571 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jun 20 19:21:00.710576 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jun 20 19:21:00.710582 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jun 20 19:21:00.710587 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jun 20 19:21:00.710592 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jun 20 19:21:00.710598 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jun 20 19:21:00.710603 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jun 20 19:21:00.710608 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jun 20 19:21:00.710613 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jun 20 19:21:00.710620 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jun 20 19:21:00.710625 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jun 20 19:21:00.710631 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jun 20 19:21:00.710636 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jun 20 19:21:00.710641 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jun 20 19:21:00.710647 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jun 20 19:21:00.710652 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jun 20 19:21:00.710657 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jun 20 19:21:00.710663 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jun 20 19:21:00.710668 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jun 20 19:21:00.710674 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jun 20 19:21:00.710680 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jun 20 19:21:00.710685 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jun 20 19:21:00.710690 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jun 20 19:21:00.710696 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jun 20 19:21:00.710701 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jun 20 19:21:00.710706 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jun 20 19:21:00.710712 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jun 20 19:21:00.710717 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jun 20 19:21:00.710722 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jun 20 19:21:00.710728 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jun 20 19:21:00.710734 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jun 20 19:21:00.710739 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jun 20 19:21:00.710782 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jun 20 19:21:00.710789 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jun 20 19:21:00.710794 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jun 20 19:21:00.710800 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jun 20 19:21:00.710805 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jun 20 19:21:00.710810 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jun 20 19:21:00.710816 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jun 20 19:21:00.710823 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jun 20 19:21:00.710829 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jun 20 19:21:00.710834 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jun 20 19:21:00.710839 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jun 20 19:21:00.710845 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jun 20 19:21:00.710850 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jun 20 19:21:00.710856 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jun 20 19:21:00.710861 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:21:00.710867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jun 20 19:21:00.710873 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:21:00.710879 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jun 20 19:21:00.710884 kernel: TSC deadline timer available Jun 20 19:21:00.710889 kernel: CPU topo: Max. logical packages: 128 Jun 20 19:21:00.710895 kernel: CPU topo: Max. logical dies: 128 Jun 20 19:21:00.710900 kernel: CPU topo: Max. 
dies per package: 1 Jun 20 19:21:00.710905 kernel: CPU topo: Max. threads per core: 1 Jun 20 19:21:00.710911 kernel: CPU topo: Num. cores per package: 1 Jun 20 19:21:00.710916 kernel: CPU topo: Num. threads per package: 1 Jun 20 19:21:00.710921 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Jun 20 19:21:00.710928 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jun 20 19:21:00.710933 kernel: Booting paravirtualized kernel on VMware hypervisor Jun 20 19:21:00.710939 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:21:00.710945 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jun 20 19:21:00.710951 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jun 20 19:21:00.710956 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jun 20 19:21:00.710962 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jun 20 19:21:00.710967 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jun 20 19:21:00.710972 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jun 20 19:21:00.710979 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jun 20 19:21:00.710984 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jun 20 19:21:00.710990 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jun 20 19:21:00.710995 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jun 20 19:21:00.711001 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jun 20 19:21:00.711006 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jun 20 19:21:00.711011 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jun 20 19:21:00.711017 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jun 20 19:21:00.711022 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jun 20 19:21:00.711029 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jun 20 19:21:00.711034 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jun 20 19:21:00.711040 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jun 20 19:21:00.711045 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jun 20 19:21:00.711052 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:21:00.711057 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:21:00.711063 kernel: random: crng init done Jun 20 19:21:00.711068 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jun 20 19:21:00.711074 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jun 20 19:21:00.711080 kernel: printk: log_buf_len min size: 262144 bytes Jun 20 19:21:00.711085 kernel: printk: log_buf_len: 1048576 bytes Jun 20 19:21:00.711091 kernel: printk: early log buf free: 245576(93%) Jun 20 19:21:00.711096 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:21:00.711102 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:21:00.711107 kernel: Fallback order for Node 0: 0 Jun 20 19:21:00.711113 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Jun 20 19:21:00.711118 kernel: Policy zone: DMA32 Jun 20 19:21:00.711125 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:21:00.711130 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jun 20 19:21:00.711136 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:21:00.711141 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:21:00.711147 kernel: Dynamic Preempt: voluntary Jun 20 19:21:00.711152 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:21:00.711158 kernel: rcu: RCU event tracing is enabled. Jun 20 19:21:00.711164 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jun 20 19:21:00.711169 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:21:00.711176 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:21:00.711181 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:21:00.711187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:21:00.711192 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jun 20 19:21:00.711197 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:21:00.711203 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:21:00.711208 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:21:00.711214 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jun 20 19:21:00.711219 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jun 20 19:21:00.711226 kernel: Console: colour VGA+ 80x25 Jun 20 19:21:00.711231 kernel: printk: legacy console [tty0] enabled Jun 20 19:21:00.711237 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:21:00.711242 kernel: ACPI: Core revision 20240827 Jun 20 19:21:00.711248 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jun 20 19:21:00.711253 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:21:00.711259 kernel: x2apic enabled Jun 20 19:21:00.711264 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:21:00.711270 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 19:21:00.711277 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 20 19:21:00.711282 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jun 20 19:21:00.711288 kernel: Disabled fast string operations Jun 20 19:21:00.711293 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 20 19:21:00.711299 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jun 20 19:21:00.711304 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:21:00.711310 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jun 20 19:21:00.711315 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jun 20 19:21:00.711321 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jun 20 19:21:00.711327 kernel: RETBleed: Mitigation: Enhanced IBRS Jun 20 19:21:00.711333 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 20 19:21:00.711338 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 19:21:00.711344 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 19:21:00.711349 kernel: SRBDS: Unknown: Dependent on hypervisor status Jun 20 19:21:00.711355 kernel: GDS: Unknown: Dependent on hypervisor status Jun 20 19:21:00.711360 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:21:00.711366 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:21:00.711371 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:21:00.711378 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:21:00.711384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:21:00.711389 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 20 19:21:00.711395 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:21:00.711400 kernel: pid_max: default: 131072 minimum: 1024 Jun 20 19:21:00.711406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:21:00.711411 kernel: landlock: Up and running. Jun 20 19:21:00.711417 kernel: SELinux: Initializing. Jun 20 19:21:00.711422 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:21:00.711429 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:21:00.711434 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jun 20 19:21:00.711440 kernel: Performance Events: Skylake events, core PMU driver. Jun 20 19:21:00.711445 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jun 20 19:21:00.711451 kernel: core: CPUID marked event: 'instructions' unavailable Jun 20 19:21:00.711456 kernel: core: CPUID marked event: 'bus cycles' unavailable Jun 20 19:21:00.711462 kernel: core: CPUID marked event: 'cache references' unavailable Jun 20 19:21:00.711467 kernel: core: CPUID marked event: 'cache misses' unavailable Jun 20 19:21:00.711472 kernel: core: CPUID marked event: 'branch instructions' unavailable Jun 20 19:21:00.711479 kernel: core: CPUID marked event: 'branch misses' unavailable Jun 20 19:21:00.711484 kernel: ... version: 1 Jun 20 19:21:00.711490 kernel: ... bit width: 48 Jun 20 19:21:00.711495 kernel: ... generic registers: 4 Jun 20 19:21:00.711501 kernel: ... value mask: 0000ffffffffffff Jun 20 19:21:00.711506 kernel: ... max period: 000000007fffffff Jun 20 19:21:00.711511 kernel: ... fixed-purpose events: 0 Jun 20 19:21:00.711517 kernel: ... 
event mask: 000000000000000f Jun 20 19:21:00.711522 kernel: signal: max sigframe size: 1776 Jun 20 19:21:00.711529 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:21:00.711534 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:21:00.711540 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jun 20 19:21:00.711546 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:21:00.711551 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:21:00.711557 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:21:00.711562 kernel: .... node #0, CPUs: #1 Jun 20 19:21:00.711567 kernel: Disabled fast string operations Jun 20 19:21:00.711573 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:21:00.711579 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jun 20 19:21:00.711585 kernel: Memory: 1924264K/2096628K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 160980K reserved, 0K cma-reserved) Jun 20 19:21:00.711591 kernel: devtmpfs: initialized Jun 20 19:21:00.711596 kernel: x86/mm: Memory block size: 128MB Jun 20 19:21:00.711602 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jun 20 19:21:00.711607 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:21:00.711613 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jun 20 19:21:00.711618 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:21:00.711624 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:21:00.711630 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:21:00.711636 kernel: audit: type=2000 audit(1750447257.283:1): state=initialized audit_enabled=0 res=1 Jun 20 19:21:00.711641 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:21:00.711647 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:21:00.711652 kernel: cpuidle: using governor menu Jun 20 19:21:00.711658 kernel: Simple Boot Flag at 0x36 set to 0x80 Jun 20 19:21:00.711663 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:21:00.711669 kernel: dca service started, version 1.12.1 Jun 20 19:21:00.711675 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Jun 20 19:21:00.711687 kernel: PCI: Using configuration type 1 for base access Jun 20 19:21:00.711694 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 19:21:00.711699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:21:00.711705 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:21:00.711711 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:21:00.711717 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:21:00.711723 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:21:00.711728 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:21:00.711734 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:21:00.711741 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:21:00.711758 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jun 20 19:21:00.711765 kernel: ACPI: Interpreter enabled Jun 20 19:21:00.711772 kernel: ACPI: PM: (supports S0 S1 S5) Jun 20 19:21:00.711779 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:21:00.711785 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:21:00.711791 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 19:21:00.711797 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jun 20 19:21:00.711804 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jun 20 19:21:00.711885 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:21:00.711944 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jun 20 19:21:00.711992 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jun 20 19:21:00.712001 kernel: PCI host bridge to bus 0000:00 Jun 20 19:21:00.712050 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 19:21:00.712252 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jun 20 19:21:00.712302 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 20 19:21:00.712344 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:21:00.712387 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jun 20 19:21:00.712429 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jun 20 19:21:00.712488 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:21:00.712546 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Jun 20 19:21:00.712600 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:21:00.712654 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jun 20 19:21:00.712708 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Jun 20 19:21:00.712774 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Jun 20 19:21:00.712828 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jun 20 19:21:00.712876 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jun 20 19:21:00.712925 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jun 20 19:21:00.712972 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Jun 20 19:21:00.713025 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 20 19:21:00.713073 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jun 20 19:21:00.713121 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jun 20 19:21:00.716817 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Jun 20 19:21:00.716883 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Jun 20 19:21:00.716935 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Jun 20 19:21:00.716993 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Jun 20 19:21:00.717043 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Jun 20 19:21:00.717091 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Jun 20 19:21:00.717143 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Jun 20 19:21:00.717192 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Jun 20 19:21:00.717240 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 19:21:00.717294 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Jun 20 19:21:00.717343 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jun 20 19:21:00.717391 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 20 19:21:00.717440 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 20 19:21:00.717491 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:21:00.717545 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.717597 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 20 19:21:00.717647 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 20 19:21:00.717697 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 20 19:21:00.717754 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.717817 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.717870 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 20 19:21:00.717926 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 20 19:21:00.717975 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 20 19:21:00.718024 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 20 19:21:00.718122 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.718177 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.718226 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 20 19:21:00.718278 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 20 19:21:00.718327 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 20 19:21:00.718376 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:21:00.718425 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.718479 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.718529 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 20 19:21:00.718580 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 20 19:21:00.718629 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:21:00.718677 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.718730 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.718835 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 20 19:21:00.718890 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 20 19:21:00.718954 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:21:00.720815 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.720880 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.720934 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 20 19:21:00.720984 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 20 19:21:00.721034 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:21:00.721084 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.721140 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.721195 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 20 19:21:00.721246 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 20 19:21:00.721295 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:21:00.721345 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.721398 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.721448 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 20 19:21:00.721497 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 20 19:21:00.721545 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:21:00.721596 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.721648 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.721697 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 20 19:21:00.722767 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 20 19:21:00.722836 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 20 19:21:00.722891 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.722947 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.723002 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 20 19:21:00.723051 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 20 19:21:00.723101 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 20 19:21:00.723149 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 20 19:21:00.723197 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.723260 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.723320 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 20 19:21:00.723381 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 20 19:21:00.723446 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 20 19:21:00.723508 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:21:00.723566 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.723622 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.723671 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 20 19:21:00.723720 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 20 19:21:00.723790 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:21:00.723840 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.723894 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.724117 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 20 19:21:00.724171 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 20 19:21:00.724221 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:21:00.724283 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.724341 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.724391 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 20 19:21:00.724440 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 20 19:21:00.724488 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:21:00.724536 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.724590 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.724639 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 20 19:21:00.724691 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 20 19:21:00.724740 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:21:00.725644 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.725717 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.725778 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 20 19:21:00.725829 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 20 19:21:00.725879 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:21:00.725931 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.725987 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.726037 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 20 19:21:00.726086 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 20 19:21:00.726135 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 20 19:21:00.726185 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:21:00.726234 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.726287 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.726340 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 20 19:21:00.726388 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 20 19:21:00.726446 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 20 19:21:00.726509 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:21:00.726571 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.726633 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.726685 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 20 19:21:00.726739 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 20 19:21:00.727598 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 20 19:21:00.728822 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 19:21:00.728887 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.728958 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.729012 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 20 19:21:00.729064 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 20 19:21:00.729116 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 
19:21:00.729167 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.729222 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.729278 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 20 19:21:00.729329 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 20 19:21:00.729380 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:21:00.729431 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.729486 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.729538 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 20 19:21:00.729589 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 20 19:21:00.729642 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:21:00.729693 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.729755 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.729809 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 20 19:21:00.729859 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 20 19:21:00.729910 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 20 19:21:00.729960 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.730017 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.730073 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 20 19:21:00.730124 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 20 19:21:00.730176 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:21:00.730227 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.730281 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.730333 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 20 19:21:00.730383 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 20 19:21:00.730437 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 20 19:21:00.730487 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:21:00.730538 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.730594 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.730645 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 20 19:21:00.730696 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 20 19:21:00.730789 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 20 19:21:00.730971 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:21:00.731025 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.731082 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.731134 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 20 19:21:00.731184 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 20 19:21:00.731235 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:21:00.731286 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.731348 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.731400 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Jun 20 19:21:00.731451 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 20 19:21:00.731502 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:21:00.731553 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.731609 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.731660 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 20 19:21:00.731714 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 20 19:21:00.734794 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 20 19:21:00.734865 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.734926 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.734979 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 20 19:21:00.735030 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 20 19:21:00.735081 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:21:00.735134 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.735187 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.735237 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 20 19:21:00.735287 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 20 19:21:00.735336 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:21:00.735384 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.735438 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:21:00.735490 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 20 19:21:00.735540 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 20 19:21:00.735590 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:21:00.735639 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.735693 kernel: pci_bus 0000:01: extended config space not accessible Jun 20 19:21:00.736883 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:21:00.736961 kernel: pci_bus 0000:02: extended config space not accessible Jun 20 19:21:00.736971 kernel: acpiphp: Slot [32] registered Jun 20 19:21:00.736980 kernel: acpiphp: Slot [33] registered Jun 20 19:21:00.736986 kernel: acpiphp: Slot [34] registered Jun 20 19:21:00.736992 kernel: acpiphp: Slot [35] registered Jun 20 19:21:00.736998 kernel: acpiphp: Slot [36] registered Jun 20 19:21:00.737004 kernel: acpiphp: Slot [37] registered Jun 20 19:21:00.737009 kernel: acpiphp: Slot [38] registered Jun 20 19:21:00.737015 kernel: acpiphp: Slot [39] registered Jun 20 19:21:00.737021 kernel: acpiphp: Slot [40] registered Jun 20 19:21:00.737027 kernel: acpiphp: Slot [41] registered Jun 20 19:21:00.737034 kernel: acpiphp: Slot [42] registered Jun 20 19:21:00.737040 kernel: acpiphp: Slot [43] registered Jun 20 19:21:00.737046 kernel: acpiphp: Slot [44] registered Jun 20 19:21:00.737052 kernel: acpiphp: Slot [45] registered Jun 20 19:21:00.737058 kernel: acpiphp: Slot [46] registered Jun 20 19:21:00.737064 kernel: acpiphp: Slot [47] registered Jun 20 19:21:00.737070 kernel: acpiphp: Slot [48] registered Jun 20 19:21:00.737076 kernel: acpiphp: Slot [49] registered Jun 20 19:21:00.737082 kernel: acpiphp: Slot [50] registered Jun 20 19:21:00.737087 kernel: acpiphp: Slot [51] registered Jun 20 
19:21:00.737094 kernel: acpiphp: Slot [52] registered Jun 20 19:21:00.737100 kernel: acpiphp: Slot [53] registered Jun 20 19:21:00.737106 kernel: acpiphp: Slot [54] registered Jun 20 19:21:00.737112 kernel: acpiphp: Slot [55] registered Jun 20 19:21:00.737118 kernel: acpiphp: Slot [56] registered Jun 20 19:21:00.737124 kernel: acpiphp: Slot [57] registered Jun 20 19:21:00.737130 kernel: acpiphp: Slot [58] registered Jun 20 19:21:00.737136 kernel: acpiphp: Slot [59] registered Jun 20 19:21:00.737142 kernel: acpiphp: Slot [60] registered Jun 20 19:21:00.737149 kernel: acpiphp: Slot [61] registered Jun 20 19:21:00.737154 kernel: acpiphp: Slot [62] registered Jun 20 19:21:00.737161 kernel: acpiphp: Slot [63] registered Jun 20 19:21:00.737214 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jun 20 19:21:00.737267 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jun 20 19:21:00.737316 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jun 20 19:21:00.737368 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jun 20 19:21:00.737417 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jun 20 19:21:00.737468 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jun 20 19:21:00.737525 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Jun 20 19:21:00.737576 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Jun 20 19:21:00.737627 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Jun 20 19:21:00.737677 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jun 20 19:21:00.737726 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jun 20 19:21:00.737785 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jun 20 19:21:00.737840 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 20 19:21:00.737892 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 20 19:21:00.737943 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 20 19:21:00.737994 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 20 19:21:00.738045 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 20 19:21:00.738096 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 20 19:21:00.738146 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 20 19:21:00.738197 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 20 19:21:00.738255 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Jun 20 19:21:00.738306 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Jun 20 19:21:00.738355 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Jun 20 19:21:00.738404 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Jun 20 19:21:00.738453 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Jun 20 19:21:00.738502 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jun 20 19:21:00.738551 kernel: pci 0000:0b:00.0: supports D1 D2 Jun 20 19:21:00.738603 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 19:21:00.738652 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jun 20 19:21:00.738702 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 20 19:21:00.742180 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 20 19:21:00.742249 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 20 19:21:00.742303 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 20 19:21:00.742356 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 20 19:21:00.742410 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 20 19:21:00.742462 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 20 19:21:00.742514 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 20 19:21:00.742566 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 20 19:21:00.742617 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 20 19:21:00.742668 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 20 19:21:00.742721 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 20 19:21:00.742781 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 20 19:21:00.742836 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 20 19:21:00.742887 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 20 19:21:00.742945 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 20 19:21:00.743001 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 20 19:21:00.743051 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 20 19:21:00.743102 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 20 19:21:00.743153 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 20 19:21:00.743207 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 20 19:21:00.743258 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 20 19:21:00.743308 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 20 19:21:00.743358 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 20 19:21:00.743368 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jun 20 19:21:00.743374 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jun 20 19:21:00.743380 kernel: ACPI: PCI: Interrupt link LNKB disabled Jun 20 19:21:00.743386 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:21:00.743394 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jun 20 19:21:00.743400 kernel: iommu: Default domain type: Translated Jun 20 19:21:00.743406 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:21:00.743412 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:21:00.743418 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:21:00.743424 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jun 20 19:21:00.743429 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jun 20 19:21:00.743478 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jun 20 19:21:00.743527 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jun 20 19:21:00.743578 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 19:21:00.743587 kernel: vgaarb: loaded Jun 20 19:21:00.743593 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jun 20 19:21:00.743599 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jun 20 19:21:00.743605 kernel: clocksource: Switched to clocksource tsc-early Jun 20 19:21:00.743611 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:21:00.743617 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:21:00.743623 kernel: pnp: PnP ACPI init Jun 20 19:21:00.743677 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jun 20 19:21:00.743726 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Jun 20 19:21:00.743788 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jun 20 19:21:00.743838 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jun 20 19:21:00.743887 kernel: pnp 00:06: [dma 2] Jun 20 19:21:00.743936 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jun 20 19:21:00.743982 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jun 20 19:21:00.744029 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jun 20 19:21:00.744037 kernel: pnp: PnP ACPI: found 8 devices Jun 20 19:21:00.744044 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:21:00.744050 kernel: NET: Registered PF_INET protocol family Jun 20 19:21:00.744056 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 19:21:00.744062 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 19:21:00.744068 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:21:00.744074 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:21:00.744081 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 19:21:00.744087 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 19:21:00.744093 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:21:00.744099 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:21:00.744105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:21:00.744111 kernel: NET: Registered PF_XDP protocol family Jun 20 19:21:00.744162 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:21:00.744213 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jun 20 19:21:00.744264 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 20 19:21:00.744319 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 20 19:21:00.744370 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 20 19:21:00.744420 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jun 20 19:21:00.744470 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jun 20 19:21:00.744520 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jun 20 19:21:00.744569 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jun 20 19:21:00.744618 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jun 20 19:21:00.744669 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jun 20 19:21:00.744719 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jun 20 19:21:00.744784 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jun 20 19:21:00.744835 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jun 20 19:21:00.744884 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jun 20 19:21:00.744934 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jun 20 
19:21:00.744983 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jun 20 19:21:00.745033 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jun 20 19:21:00.745085 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jun 20 19:21:00.745135 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jun 20 19:21:00.745184 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jun 20 19:21:00.745233 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jun 20 19:21:00.745282 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jun 20 19:21:00.745332 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Jun 20 19:21:00.745383 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Jun 20 19:21:00.745432 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.745483 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.745534 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.745583 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.745632 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.745681 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.745731 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.745790 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.745840 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.745892 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.745944 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.745997 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746046 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746094 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746142 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746190 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746242 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746290 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746338 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746386 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746434 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746482 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746531 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746580 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746631 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746679 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746727 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746784 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746833 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746882 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.746930 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.746978 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747030 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747079 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747127 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747176 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747225 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747274 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747322 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747371 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747422 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747471 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747520 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747569 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747619 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747667 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747718 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747774 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747822 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747874 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.747922 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.747990 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748043 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748093 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748142 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748190 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748238 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748288 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748339 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748388 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748436 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Jun 20 19:21:00.748485 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748534 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748583 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748631 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748681 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748729 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748794 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748847 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.748901 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.748978 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.749026 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.749073 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.749151 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.749213 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.749272 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.749335 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.749399 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.749455 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.749511 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.749568 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.749625 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.749686 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:21:00.750112 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:21:00.750174 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:21:00.750225 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jun 20 19:21:00.750272 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 20 19:21:00.750319 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 20 19:21:00.750366 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:21:00.750416 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Jun 20 19:21:00.750464 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 20 19:21:00.750511 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 20 19:21:00.750560 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 20 19:21:00.750607 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jun 20 19:21:00.750655 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 20 19:21:00.752822 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 20 19:21:00.752910 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 20 19:21:00.752962 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Jun 20 19:21:00.753012 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 20 19:21:00.753061 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 20 19:21:00.753109 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 20 19:21:00.753159 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:21:00.753206 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 20 19:21:00.753253 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 20 19:21:00.753300 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:21:00.753346 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 20 19:21:00.753393 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 20 19:21:00.753440 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:21:00.753488 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 20 19:21:00.753536 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 20 19:21:00.753583 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:21:00.753631 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 20 19:21:00.753678 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 20 19:21:00.753725 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:21:00.753980 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 20 19:21:00.754030 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 20 19:21:00.754080 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:21:00.754132 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Jun 20 19:21:00.754180 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 20 19:21:00.754228 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 20 19:21:00.754275 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 20 19:21:00.754323 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jun 20 19:21:00.754371 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 20 19:21:00.754418 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 20 19:21:00.754465 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 20 19:21:00.754514 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 20 19:21:00.754562 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 20 19:21:00.754609 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 20 19:21:00.754656 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 20 19:21:00.754704 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:21:00.754759 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 20 19:21:00.754828 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 20 19:21:00.754891 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:21:00.754963 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 20 19:21:00.755014 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 20 19:21:00.755061 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:21:00.755109 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 20 19:21:00.755156 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 20 19:21:00.755202 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:21:00.755249 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 20 19:21:00.755296 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 20 19:21:00.755345 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:21:00.755393 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 20 19:21:00.755439 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 20 19:21:00.755486 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:21:00.755534 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 20 19:21:00.755581 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 20 19:21:00.755628 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 20 19:21:00.755674 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:21:00.755726 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 20 19:21:00.756286 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 20 19:21:00.756340 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 20 19:21:00.756389 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:21:00.756437 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 20 19:21:00.756485 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 20 19:21:00.756533 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 20 19:21:00.756581 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 19:21:00.756628 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 20 19:21:00.756674 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 20 19:21:00.756725 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 19:21:00.756895 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 20 19:21:00.757132 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 20 19:21:00.757210 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:21:00.757336 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 20 19:21:00.757440 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 20 19:21:00.757491 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:21:00.757543 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 20 19:21:00.757591 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 20 19:21:00.757638 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 20 19:21:00.757685 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 20 19:21:00.757732 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 20 19:21:00.757787 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:21:00.757836 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 20 19:21:00.757886 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 20 19:21:00.757940 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 20 19:21:00.757989 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:21:00.758038 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 20 19:21:00.758086 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 20 19:21:00.758133 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Jun 20 19:21:00.758179 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:21:00.758227 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 20 19:21:00.758274 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 20 19:21:00.758321 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:21:00.758371 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 20 19:21:00.758418 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 20 19:21:00.758464 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:21:00.758512 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 20 19:21:00.758559 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 20 19:21:00.758605 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 20 19:21:00.758655 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 20 19:21:00.758702 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 20 19:21:00.758756 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:21:00.758807 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 20 19:21:00.758853 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 20 19:21:00.758950 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:21:00.759027 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 20 19:21:00.759085 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 20 19:21:00.759145 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:21:00.759197 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jun 20 19:21:00.759242 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 20 19:21:00.759286 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 20 19:21:00.759329 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jun 20 19:21:00.759372 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jun 20 19:21:00.759419 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jun 20 19:21:00.759468 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jun 20 19:21:00.759513 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:21:00.759557 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jun 20 19:21:00.759602 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 20 19:21:00.759650 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 20 19:21:00.759695 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jun 20 19:21:00.759740 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jun 20 19:21:00.759821 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jun 20 19:21:00.759883 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jun 20 19:21:00.759926 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jun 20 19:21:00.759977 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jun 20 19:21:00.760021 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jun 20 19:21:00.760064 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jun 20 19:21:00.760111 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jun 20 19:21:00.760158 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jun 
20 19:21:00.760201 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:21:00.760249 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jun 20 19:21:00.760293 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:21:00.760341 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jun 20 19:21:00.760384 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:21:00.760434 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jun 20 19:21:00.760478 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:21:00.760527 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jun 20 19:21:00.760571 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:21:00.760619 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jun 20 19:21:00.760663 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:21:00.760713 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jun 20 19:21:00.760775 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jun 20 19:21:00.760825 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jun 20 19:21:00.760882 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jun 20 19:21:00.760947 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jun 20 19:21:00.761007 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jun 20 19:21:00.761065 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jun 20 19:21:00.761109 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jun 20 19:21:00.761158 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:21:00.761218 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jun 20 19:21:00.761264 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:21:00.761311 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jun 20 19:21:00.761355 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:21:00.761405 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jun 20 19:21:00.761449 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:21:00.761499 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jun 20 19:21:00.761542 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:21:00.761590 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jun 20 19:21:00.761634 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:21:00.761683 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jun 20 19:21:00.761727 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jun 20 19:21:00.761789 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:21:00.761849 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jun 20 19:21:00.761899 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jun 20 19:21:00.761944 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:21:00.761992 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jun 20 19:21:00.762044 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jun 20 19:21:00.762088 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 19:21:00.762139 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jun 20 19:21:00.762184 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 19:21:00.762231 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jun 20 19:21:00.762276 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:21:00.762326 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jun 20 19:21:00.762370 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:21:00.762418 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jun 20 19:21:00.762462 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jun 20 19:21:00.762511 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jun 20 19:21:00.762556 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:21:00.762605 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jun 20 19:21:00.762649 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jun 20 19:21:00.762692 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:21:00.762740 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jun 20 19:21:00.762800 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jun 20 19:21:00.762844 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:21:00.762891 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jun 20 19:21:00.762938 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:21:00.762986 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jun 20 19:21:00.763031 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:21:00.763078 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jun 20 19:21:00.763140 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jun 20 19:21:00.763205 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jun 20 19:21:00.763251 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:21:00.763298 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jun 20 19:21:00.763342 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:21:00.763390 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jun 20 19:21:00.763434 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:21:00.763485 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 19:21:00.763494 kernel: PCI: CLS 32 bytes, default 64 Jun 20 19:21:00.763502 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 19:21:00.763508 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 20 19:21:00.763514 kernel: clocksource: Switched to clocksource tsc Jun 20 19:21:00.763520 kernel: Initialise system trusted keyrings Jun 20 19:21:00.763526 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 19:21:00.763533 kernel: Key type asymmetric registered Jun 20 19:21:00.763538 kernel: Asymmetric key parser 'x509' registered Jun 20 19:21:00.763544 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:21:00.763550 kernel: io scheduler mq-deadline registered Jun 20 19:21:00.763557 kernel: io scheduler kyber registered Jun 20 19:21:00.763563 kernel: io scheduler bfq 
registered Jun 20 19:21:00.763612 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jun 20 19:21:00.763663 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.763713 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jun 20 19:21:00.763847 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.763898 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jun 20 19:21:00.763951 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764000 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jun 20 19:21:00.764050 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764099 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jun 20 19:21:00.764147 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764196 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jun 20 19:21:00.764244 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764296 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jun 20 19:21:00.764344 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764392 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jun 20 19:21:00.764440 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764489 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jun 20 19:21:00.764537 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764586 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jun 20 19:21:00.764636 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764685 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jun 20 19:21:00.764733 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764810 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jun 20 19:21:00.764861 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.764924 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jun 20 19:21:00.764977 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765027 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jun 20 19:21:00.765081 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Jun 20 19:21:00.765130 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jun 20 19:21:00.765182 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765231 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jun 20 19:21:00.765281 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765330 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jun 20 19:21:00.765378 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765428 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jun 20 19:21:00.765477 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765525 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jun 20 19:21:00.765573 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765621 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jun 20 19:21:00.765670 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765718 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jun 20 19:21:00.765782 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765836 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jun 20 19:21:00.765885 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.765934 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jun 20 19:21:00.765983 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766032 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jun 20 19:21:00.766080 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766138 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jun 20 19:21:00.766199 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766250 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jun 20 19:21:00.766299 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766348 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jun 20 19:21:00.766397 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766446 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jun 20 19:21:00.766494 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 
19:21:00.766545 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jun 20 19:21:00.766597 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766646 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jun 20 19:21:00.766695 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766750 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jun 20 19:21:00.766801 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766851 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jun 20 19:21:00.766900 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:21:00.766911 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:21:00.766919 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:21:00.766925 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:21:00.766932 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jun 20 19:21:00.766938 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:21:00.766944 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:21:00.766996 kernel: rtc_cmos 00:01: registered as rtc0 Jun 20 19:21:00.767044 kernel: rtc_cmos 00:01: setting system clock to 2025-06-20T19:21:00 UTC (1750447260) Jun 20 19:21:00.767053 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 19:21:00.767095 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jun 20 19:21:00.767104 kernel: intel_pstate: CPU model not supported Jun 20 19:21:00.767110 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:21:00.767116 kernel: Segment Routing with IPv6 Jun 20 19:21:00.767141 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:21:00.767147 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:21:00.767153 kernel: Key type dns_resolver registered Jun 20 19:21:00.767161 kernel: IPI shorthand broadcast: enabled Jun 20 19:21:00.767167 kernel: sched_clock: Marking stable (2688039052, 168656695)->(2872788496, -16092749) Jun 20 19:21:00.767174 kernel: registered taskstats version 1 Jun 20 19:21:00.767181 kernel: Loading compiled-in X.509 certificates Jun 20 19:21:00.767187 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:21:00.767193 kernel: Demotion targets for Node 0: null Jun 20 19:21:00.767199 kernel: Key type .fscrypt registered Jun 20 19:21:00.767205 kernel: Key type fscrypt-provisioning registered Jun 20 19:21:00.767211 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:21:00.767219 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:21:00.767226 kernel: ima: No architecture policies found Jun 20 19:21:00.767233 kernel: clk: Disabling unused clocks Jun 20 19:21:00.767239 kernel: Warning: unable to open an initial console. 
Jun 20 19:21:00.767246 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:21:00.767252 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:21:00.767258 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:21:00.767265 kernel: Run /init as init process Jun 20 19:21:00.767271 kernel: with arguments: Jun 20 19:21:00.767279 kernel: /init Jun 20 19:21:00.767285 kernel: with environment: Jun 20 19:21:00.767291 kernel: HOME=/ Jun 20 19:21:00.767297 kernel: TERM=linux Jun 20 19:21:00.767303 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:21:00.767310 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:21:00.767319 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:21:00.767326 systemd[1]: Detected virtualization vmware. Jun 20 19:21:00.767334 systemd[1]: Detected architecture x86-64. Jun 20 19:21:00.767340 systemd[1]: Running in initrd. Jun 20 19:21:00.767346 systemd[1]: No hostname configured, using default hostname. Jun 20 19:21:00.767353 systemd[1]: Hostname set to . Jun 20 19:21:00.767359 systemd[1]: Initializing machine ID from random generator. Jun 20 19:21:00.767366 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:21:00.767372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:21:00.767378 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:21:00.767386 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:21:00.767393 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:21:00.767399 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:21:00.767406 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:21:00.767413 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:21:00.767420 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:21:00.767428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:21:00.767434 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:21:00.767441 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:21:00.767447 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:21:00.767453 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:21:00.767460 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:21:00.767466 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:21:00.767472 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:21:00.767479 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:21:00.767487 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:21:00.767494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jun 20 19:21:00.767500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:21:00.767506 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:21:00.767513 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:21:00.767519 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:21:00.767526 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:21:00.767532 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:21:00.767539 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:21:00.767546 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:21:00.767553 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:21:00.767559 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:21:00.767566 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:21:00.767572 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:21:00.767592 systemd-journald[240]: Collecting audit messages is disabled. Jun 20 19:21:00.767609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:21:00.767616 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:21:00.767624 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:21:00.767630 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:21:00.767637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:21:00.767644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:21:00.767651 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:21:00.767657 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:21:00.767664 kernel: Bridge firewalling registered Jun 20 19:21:00.767670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:21:00.767678 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:21:00.767685 systemd-journald[240]: Journal started Jun 20 19:21:00.767701 systemd-journald[240]: Runtime Journal (/run/log/journal/fb40f32aa17b419da41ec45e63fcb2ce) is 4.8M, max 38.8M, 34M free. Jun 20 19:21:00.725674 systemd-modules-load[243]: Inserted module 'overlay' Jun 20 19:21:00.769836 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:21:00.769848 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:21:00.761833 systemd-modules-load[243]: Inserted module 'br_netfilter' Jun 20 19:21:00.775073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:21:00.777816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:21:00.779373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:21:00.779736 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 20 19:21:00.782025 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:21:00.783661 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:21:00.785770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:21:00.789771 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:21:00.812210 systemd-resolved[285]: Positive Trust Anchors: Jun 20 19:21:00.812414 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:21:00.812437 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:21:00.814913 systemd-resolved[285]: Defaulting to hostname 'linux'. Jun 20 19:21:00.815532 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:21:00.815677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:21:00.840759 kernel: SCSI subsystem initialized Jun 20 19:21:00.856756 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:21:00.864759 kernel: iscsi: registered transport (tcp) Jun 20 19:21:00.886762 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:21:00.886785 kernel: QLogic iSCSI HBA Driver Jun 20 19:21:00.896680 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:21:00.907633 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:21:00.908630 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:21:00.930874 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:21:00.931882 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:21:00.967763 kernel: raid6: avx2x4 gen() 45852 MB/s Jun 20 19:21:00.984759 kernel: raid6: avx2x2 gen() 52596 MB/s Jun 20 19:21:01.002035 kernel: raid6: avx2x1 gen() 44666 MB/s Jun 20 19:21:01.002057 kernel: raid6: using algorithm avx2x2 gen() 52596 MB/s Jun 20 19:21:01.019991 kernel: raid6: .... xor() 29894 MB/s, rmw enabled Jun 20 19:21:01.020031 kernel: raid6: using avx2x2 recovery algorithm Jun 20 19:21:01.034761 kernel: xor: automatically using best checksumming function avx Jun 20 19:21:01.139773 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:21:01.143101 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:21:01.144136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 20 19:21:01.161742 systemd-udevd[492]: Using default interface naming scheme 'v255'. Jun 20 19:21:01.165123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:21:01.166395 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:21:01.188109 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jun 20 19:21:01.201858 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:21:01.202804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:21:01.281697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:21:01.283254 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:21:01.359783 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jun 20 19:21:01.359815 kernel: vmw_pvscsi: using 64bit dma Jun 20 19:21:01.359824 kernel: vmw_pvscsi: max_id: 16 Jun 20 19:21:01.360756 kernel: vmw_pvscsi: setting ring_pages to 8 Jun 20 19:21:01.365052 kernel: vmw_pvscsi: enabling reqCallThreshold Jun 20 19:21:01.365072 kernel: vmw_pvscsi: driver-based request coalescing enabled Jun 20 19:21:01.365080 kernel: vmw_pvscsi: using MSI-X Jun 20 19:21:01.366290 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jun 20 19:21:01.367226 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jun 20 19:21:01.370786 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jun 20 19:21:01.386789 (udev-worker)[556]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 20 19:21:01.389206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:21:01.390292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:21:01.390638 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:21:01.390783 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Jun 20 19:21:01.393932 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jun 20 19:21:01.394046 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jun 20 19:21:01.393682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:21:01.398764 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jun 20 19:21:01.401756 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jun 20 19:21:01.401864 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 19:21:01.401937 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:21:01.401946 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jun 20 19:21:01.405320 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jun 20 19:21:01.405537 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jun 20 19:21:01.412762 kernel: libata version 3.00 loaded. Jun 20 19:21:01.419799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:21:01.422808 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:01.422833 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 19:21:01.430843 kernel: ata_piix 0000:00:07.1: version 2.13 Jun 20 19:21:01.432868 kernel: scsi host1: ata_piix Jun 20 19:21:01.432965 kernel: scsi host2: ata_piix Jun 20 19:21:01.433027 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Jun 20 19:21:01.433956 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Jun 20 19:21:01.438869 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jun 20 19:21:01.438898 kernel: AES CTR mode by8 optimization enabled Jun 20 19:21:01.481277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jun 20 19:21:01.486761 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jun 20 19:21:01.492230 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 20 19:21:01.496664 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jun 20 19:21:01.496972 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jun 20 19:21:01.497730 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:21:01.541761 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:01.554759 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:01.606772 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jun 20 19:21:01.615770 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jun 20 19:21:01.638346 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jun 20 19:21:01.638475 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:21:01.657770 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 20 19:21:01.945691 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:21:01.946268 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:21:01.946529 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:21:01.946778 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:21:01.947398 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:21:01.956846 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:21:02.553773 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:21:02.553891 disk-uuid[644]: The operation has completed successfully. Jun 20 19:21:02.595732 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:21:02.595812 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:21:02.606176 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:21:02.618800 sh[685]: Success Jun 20 19:21:02.632766 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jun 20 19:21:02.632830 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:21:02.632842 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:21:02.640760 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 20 19:21:02.711118 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:21:02.712287 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:21:02.720134 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 20 19:21:02.751767 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:21:02.751804 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (697) Jun 20 19:21:02.754666 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:21:02.754692 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:02.756258 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:21:02.763946 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:21:02.764270 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:21:02.765104 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jun 20 19:21:02.766807 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:21:02.794431 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (720) Jun 20 19:21:02.794465 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:02.794477 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:02.796086 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:21:02.804795 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:02.805729 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:21:02.809853 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:21:02.858902 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 20 19:21:02.859558 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:21:02.927082 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:21:02.928224 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jun 20 19:21:02.941263 ignition[739]: Ignition 2.21.0 Jun 20 19:21:02.941272 ignition[739]: Stage: fetch-offline Jun 20 19:21:02.941290 ignition[739]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:02.941295 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:02.941342 ignition[739]: parsed url from cmdline: "" Jun 20 19:21:02.941344 ignition[739]: no config URL provided Jun 20 19:21:02.941347 ignition[739]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:21:02.941351 ignition[739]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:21:02.941703 ignition[739]: config successfully fetched Jun 20 19:21:02.941721 ignition[739]: parsing config with SHA512: 927c6f9ec2957c505a8dda1113e103fca6de1ddd767afcb51120e35544c376b5a05b58969093a13c466e2d46e9aa1c802c0d762136155e17faffba5b5c6bdd28 Jun 20 19:21:02.946699 unknown[739]: fetched base config from "system" Jun 20 19:21:02.946706 unknown[739]: fetched user config from "vmware" Jun 20 19:21:02.946943 ignition[739]: fetch-offline: fetch-offline passed Jun 20 19:21:02.946983 ignition[739]: Ignition finished successfully Jun 20 19:21:02.947979 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:21:02.952500 systemd-networkd[876]: lo: Link UP Jun 20 19:21:02.952506 systemd-networkd[876]: lo: Gained carrier Jun 20 19:21:02.953264 systemd-networkd[876]: Enumeration completed Jun 20 19:21:02.953499 systemd-networkd[876]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jun 20 19:21:02.953566 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:21:02.953852 systemd[1]: Reached target network.target - Network. Jun 20 19:21:02.954302 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 20 19:21:02.956429 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 20 19:21:02.956540 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 20 19:21:02.956610 systemd-networkd[876]: ens192: Link UP Jun 20 19:21:02.956615 systemd-networkd[876]: ens192: Gained carrier Jun 20 19:21:02.956829 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:21:02.972826 ignition[881]: Ignition 2.21.0 Jun 20 19:21:02.973104 ignition[881]: Stage: kargs Jun 20 19:21:02.973300 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:02.973427 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:02.974070 ignition[881]: kargs: kargs passed Jun 20 19:21:02.974097 ignition[881]: Ignition finished successfully Jun 20 19:21:02.975279 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:21:02.976110 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:21:02.988431 ignition[888]: Ignition 2.21.0 Jun 20 19:21:02.988641 ignition[888]: Stage: disks Jun 20 19:21:02.988734 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:02.988741 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:02.989769 ignition[888]: disks: disks passed Jun 20 19:21:02.989803 ignition[888]: Ignition finished successfully Jun 20 19:21:02.991094 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:21:02.991416 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jun 20 19:21:02.991646 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:21:02.991892 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:21:02.992100 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:21:02.992309 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:21:02.993025 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:21:03.007791 systemd-fsck[896]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jun 20 19:21:03.008808 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:21:03.009916 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:21:03.086755 kernel: EXT4-fs (sda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:21:03.086811 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:21:03.087147 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:21:03.087979 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:21:03.088564 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:21:03.089970 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 20 19:21:03.090161 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:21:03.090363 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:21:03.099275 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:21:03.100280 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:21:03.106428 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (904) Jun 20 19:21:03.106449 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:03.107843 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:03.107862 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:21:03.113809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:21:03.132497 initrd-setup-root[928]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:21:03.134756 initrd-setup-root[935]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:21:03.137031 initrd-setup-root[942]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:21:03.139238 initrd-setup-root[949]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:21:03.192654 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:21:03.193505 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:21:03.194005 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:21:03.207762 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:03.222976 ignition[1020]: INFO : Ignition 2.21.0 Jun 20 19:21:03.223302 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 20 19:21:03.223516 ignition[1020]: INFO : Stage: mount Jun 20 19:21:03.223684 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:03.223813 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:03.224422 ignition[1020]: INFO : mount: mount passed Jun 20 19:21:03.224422 ignition[1020]: INFO : Ignition finished successfully Jun 20 19:21:03.225264 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:21:03.225898 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:21:03.305974 systemd-resolved[285]: Detected conflict on linux IN A 139.178.70.105 Jun 20 19:21:03.305997 systemd-resolved[285]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Jun 20 19:21:03.751430 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:21:03.752356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:21:03.768371 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1031) Jun 20 19:21:03.768403 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:21:03.768412 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:21:03.769327 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:21:03.773057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:21:03.793117 ignition[1048]: INFO : Ignition 2.21.0 Jun 20 19:21:03.793117 ignition[1048]: INFO : Stage: files Jun 20 19:21:03.793461 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:03.793461 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:03.793713 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:21:03.794325 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:21:03.794325 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:21:03.795714 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:21:03.795862 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:21:03.795995 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:21:03.795949 unknown[1048]: wrote ssh authorized keys file for user: core Jun 20 19:21:03.797440 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:21:03.797671 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 20 19:21:03.827703 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:21:03.943786 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:21:03.944043 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:21:03.944043 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:21:03.944043 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing 
file "/sysroot/home/core/nginx.yaml" Jun 20 19:21:03.944530 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:21:03.944530 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:21:03.944530 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:21:03.944530 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:21:03.944530 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:21:03.945691 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:21:03.945885 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:21:03.945885 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:21:03.948049 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:21:03.948398 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:21:03.948398 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 20 19:21:04.656098 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 20 19:21:04.830942 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:21:04.831361 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 20 19:21:04.831705 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 20 19:21:04.831914 ignition[1048]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 19:21:04.832223 ignition[1048]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:21:04.832575 ignition[1048]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:21:04.832769 ignition[1048]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 19:21:04.832769 ignition[1048]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 20 19:21:04.832769 ignition[1048]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 20 19:21:04.832769 ignition[1048]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Jun 20 19:21:04.832769 ignition[1048]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 20 19:21:04.832769 ignition[1048]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 20 19:21:04.983872 systemd-networkd[876]: ens192: Gained IPv6LL Jun 20 19:21:05.173414 ignition[1048]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 20 19:21:05.175504 ignition[1048]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 20 19:21:05.175691 ignition[1048]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 20 19:21:05.175691 ignition[1048]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:21:05.175691 ignition[1048]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:21:05.175691 ignition[1048]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:21:05.177001 ignition[1048]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:21:05.177001 ignition[1048]: INFO : files: files passed Jun 20 19:21:05.177001 ignition[1048]: INFO : Ignition finished successfully Jun 20 19:21:05.176905 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:21:05.178845 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:21:05.179523 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:21:05.191485 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:21:05.191565 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:21:05.199671 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:21:05.199671 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:21:05.200689 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:21:05.201826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:21:05.202122 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:21:05.202663 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:21:05.227205 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:21:05.227283 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:21:05.227705 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:21:05.227859 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:21:05.228056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:21:05.228520 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:21:05.237973 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:21:05.238785 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jun 20 19:21:05.249652 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:21:05.249959 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:21:05.250289 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:21:05.250546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:21:05.250726 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:21:05.251125 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:21:05.251353 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:21:05.251607 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:21:05.251895 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:21:05.252142 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:21:05.252418 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:21:05.252680 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:21:05.252915 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:21:05.253231 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:21:05.253488 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:21:05.253735 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:21:05.253950 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:21:05.254117 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:21:05.254467 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:21:05.254734 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:21:05.255001 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:21:05.255154 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:21:05.255421 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:21:05.255493 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:21:05.255990 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:21:05.256159 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:21:05.256456 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:21:05.256686 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:21:05.260770 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:21:05.260937 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:21:05.261214 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:21:05.261406 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:21:05.261463 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:21:05.261611 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:21:05.261658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:21:05.261835 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:21:05.261909 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:21:05.262160 systemd[1]: ignition-files.service: Deactivated successfully. 
Jun 20 19:21:05.262219 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:21:05.262965 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:21:05.264832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:21:05.264946 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:21:05.265023 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:21:05.265190 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:21:05.265248 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:21:05.267439 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:21:05.267819 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:21:05.276243 ignition[1104]: INFO : Ignition 2.21.0 Jun 20 19:21:05.276243 ignition[1104]: INFO : Stage: umount Jun 20 19:21:05.276612 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:21:05.276612 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:21:05.277315 ignition[1104]: INFO : umount: umount passed Jun 20 19:21:05.277315 ignition[1104]: INFO : Ignition finished successfully Jun 20 19:21:05.277732 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:21:05.277896 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:21:05.278144 systemd[1]: Stopped target network.target - Network. Jun 20 19:21:05.278245 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:21:05.278272 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:21:05.278411 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:21:05.278434 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:21:05.278579 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:21:05.278599 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:21:05.278754 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:21:05.278775 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:21:05.278986 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:21:05.279267 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:21:05.285924 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:21:05.285995 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:21:05.287438 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:21:05.287591 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:21:05.287616 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:21:05.288535 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:21:05.290660 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:21:05.290729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:21:05.291652 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:21:05.291741 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 19:21:05.291869 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jun 20 19:21:05.291888 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:21:05.292816 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:21:05.292920 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:21:05.292947 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:21:05.293139 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jun 20 19:21:05.293161 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 20 19:21:05.293902 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:21:05.293927 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:21:05.294227 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:21:05.294251 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:21:05.294471 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:21:05.295529 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:21:05.304142 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:21:05.304235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:21:05.304756 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:21:05.304807 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:21:05.305128 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:21:05.305146 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:21:05.305299 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:21:05.305323 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:21:05.305595 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:21:05.305619 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:21:05.306854 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:21:05.306878 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:21:05.307580 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:21:05.308828 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:21:05.308863 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:21:05.309221 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:21:05.309248 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:21:05.309581 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 20 19:21:05.309603 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:21:05.309949 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:21:05.309972 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:21:05.310183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:21:05.310205 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:21:05.311535 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 19:21:05.311573 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jun 20 19:21:05.311595 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:21:05.311619 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:21:05.315026 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:21:05.315088 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:21:05.316893 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:21:05.316953 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:21:05.637417 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:21:05.637490 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:21:05.637812 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:21:05.637932 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:21:05.637960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:21:05.638629 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:21:05.655627 systemd[1]: Switching root. Jun 20 19:21:05.692591 systemd-journald[240]: Journal stopped Jun 20 19:21:07.266251 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). Jun 20 19:21:07.266274 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:21:07.266283 kernel: SELinux: policy capability open_perms=1 Jun 20 19:21:07.266289 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:21:07.266294 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:21:07.266302 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:21:07.266308 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:21:07.266314 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:21:07.266320 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:21:07.266326 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:21:07.266332 systemd[1]: Successfully loaded SELinux policy in 32.461ms. Jun 20 19:21:07.266339 kernel: audit: type=1403 audit(1750447266.654:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:21:07.266346 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.424ms. Jun 20 19:21:07.266354 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:21:07.266361 systemd[1]: Detected virtualization vmware. Jun 20 19:21:07.266368 systemd[1]: Detected architecture x86-64. Jun 20 19:21:07.266375 systemd[1]: Detected first boot. Jun 20 19:21:07.266382 systemd[1]: Initializing machine ID from random generator. Jun 20 19:21:07.266389 zram_generator::config[1147]: No configuration found. 
Jun 20 19:21:07.266467 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jun 20 19:21:07.266479 kernel: Guest personality initialized and is active Jun 20 19:21:07.266486 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 19:21:07.266492 kernel: Initialized host personality Jun 20 19:21:07.266500 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:21:07.266507 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:21:07.266514 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:21:07.266522 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jun 20 19:21:07.266529 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:21:07.266536 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:21:07.266542 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:21:07.266550 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:21:07.266558 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:21:07.266565 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:21:07.266572 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:21:07.266579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:21:07.266586 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:21:07.266593 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:21:07.266601 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:21:07.266607 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:21:07.266615 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:21:07.266623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:21:07.266630 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:21:07.266637 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:21:07.266644 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:21:07.266651 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:21:07.266659 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:21:07.266667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:21:07.266673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:21:07.266681 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:21:07.266688 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:21:07.266695 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:21:07.266702 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:21:07.266709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jun 20 19:21:07.266717 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:21:07.266724 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:21:07.266730 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:21:07.266737 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:21:07.268757 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:21:07.268772 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:21:07.268780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:21:07.268787 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:21:07.268795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:21:07.268802 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:21:07.268809 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:21:07.268816 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:21:07.268823 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:21:07.268832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:07.268840 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:21:07.268847 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:21:07.268854 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:21:07.268861 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:21:07.268868 systemd[1]: Reached target machines.target - Containers. Jun 20 19:21:07.268876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:21:07.268883 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jun 20 19:21:07.268891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:21:07.268903 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:21:07.268911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:21:07.268918 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:21:07.268925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:21:07.268936 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:21:07.268944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:21:07.268952 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:21:07.268960 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:21:07.268968 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:21:07.268975 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:21:07.268982 systemd[1]: Stopped systemd-fsck-usr.service. 
Jun 20 19:21:07.268993 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:21:07.269001 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:21:07.269008 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:21:07.269015 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:21:07.269022 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:21:07.269031 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:21:07.269038 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:21:07.269045 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:21:07.269052 systemd[1]: Stopped verity-setup.service. Jun 20 19:21:07.269060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:07.269067 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:21:07.269075 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:21:07.269082 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:21:07.269090 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:21:07.269097 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:21:07.269105 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:21:07.269112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:21:07.269119 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:21:07.269126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:21:07.269133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:21:07.269141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:21:07.269148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:21:07.269169 systemd-journald[1233]: Collecting audit messages is disabled. Jun 20 19:21:07.269187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:21:07.269195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:21:07.269202 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:21:07.269211 systemd-journald[1233]: Journal started Jun 20 19:21:07.269226 systemd-journald[1233]: Runtime Journal (/run/log/journal/1331d8a44aea472593b7e0d2bb566c7c) is 4.8M, max 38.8M, 34M free. Jun 20 19:21:07.112531 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:21:07.125082 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 19:21:07.125328 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:21:07.269755 jq[1217]: true Jun 20 19:21:07.275756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:21:07.275797 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 20 19:21:07.280502 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:21:07.281415 jq[1252]: true Jun 20 19:21:07.283330 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:21:07.283785 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:21:07.283808 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:21:07.285481 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:21:07.294892 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:21:07.295088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:21:07.299809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:21:07.304816 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:21:07.304979 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:21:07.306861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:21:07.310814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:21:07.311773 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:21:07.315215 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:21:07.320758 kernel: fuse: init (API version 7.41) Jun 20 19:21:07.324613 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:21:07.324982 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:21:07.325789 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:21:07.326068 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:21:07.330121 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:21:07.335858 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:21:07.342710 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:21:07.352769 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:21:07.352988 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:21:07.355267 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:21:07.355858 kernel: loop: module loaded Jun 20 19:21:07.356339 systemd-journald[1233]: Time spent on flushing to /var/log/journal/1331d8a44aea472593b7e0d2bb566c7c is 53.525ms for 1759 entries. Jun 20 19:21:07.356339 systemd-journald[1233]: System Journal (/var/log/journal/1331d8a44aea472593b7e0d2bb566c7c) is 8M, max 584.8M, 576.8M free. Jun 20 19:21:07.465575 systemd-journald[1233]: Received client request to flush runtime journal. Jun 20 19:21:07.465603 kernel: loop0: detected capacity change from 0 to 146240 Jun 20 19:21:07.465617 kernel: ACPI: bus type drm_connector registered Jun 20 19:21:07.362685 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 20 19:21:07.378195 ignition[1286]: Ignition 2.21.0 Jun 20 19:21:07.362836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:21:07.378341 ignition[1286]: deleting config from guestinfo properties Jun 20 19:21:07.363994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:21:07.397665 ignition[1286]: Successfully deleted config Jun 20 19:21:07.377379 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:21:07.377514 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:21:07.407056 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jun 20 19:21:07.409403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:21:07.433861 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jun 20 19:21:07.433870 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jun 20 19:21:07.434069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:21:07.437688 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:21:07.438824 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:21:07.466256 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:21:07.471564 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:21:07.482262 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:21:07.497779 kernel: loop1: detected capacity change from 0 to 2960 Jun 20 19:21:07.502240 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:21:07.504520 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:21:07.530237 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jun 20 19:21:07.530249 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Jun 20 19:21:07.536620 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:21:07.544761 kernel: loop2: detected capacity change from 0 to 113872 Jun 20 19:21:07.574760 kernel: loop3: detected capacity change from 0 to 224512 Jun 20 19:21:07.610779 kernel: loop4: detected capacity change from 0 to 146240 Jun 20 19:21:07.710764 kernel: loop5: detected capacity change from 0 to 2960 Jun 20 19:21:07.728767 kernel: loop6: detected capacity change from 0 to 113872 Jun 20 19:21:07.813876 kernel: loop7: detected capacity change from 0 to 224512 Jun 20 19:21:07.840408 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jun 20 19:21:07.840704 (sd-merge)[1325]: Merged extensions into '/usr'. Jun 20 19:21:07.844039 systemd[1]: Reload requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:21:07.844275 systemd[1]: Reloading... Jun 20 19:21:07.898801 zram_generator::config[1347]: No configuration found. Jun 20 19:21:07.978597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:21:07.987805 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Jun 20 19:21:08.032250 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:21:08.032414 systemd[1]: Reloading finished in 187 ms. Jun 20 19:21:08.051654 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:21:08.056074 systemd[1]: Starting ensure-sysext.service... Jun 20 19:21:08.059843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:21:08.074830 systemd[1]: Reload requested from client PID 1406 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:21:08.074841 systemd[1]: Reloading... Jun 20 19:21:08.080224 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:21:08.080243 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:21:08.080390 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:21:08.080539 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:21:08.081021 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:21:08.081188 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Jun 20 19:21:08.081231 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Jun 20 19:21:08.086367 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:21:08.086373 systemd-tmpfiles[1407]: Skipping /boot Jun 20 19:21:08.093720 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:21:08.093826 systemd-tmpfiles[1407]: Skipping /boot Jun 20 19:21:08.125895 zram_generator::config[1438]: No configuration found. Jun 20 19:21:08.201832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:21:08.209999 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:21:08.217174 ldconfig[1279]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:21:08.255067 systemd[1]: Reloading finished in 180 ms. Jun 20 19:21:08.268422 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:21:08.268756 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:21:08.271695 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:21:08.277786 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:21:08.279048 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:21:08.280665 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:21:08.287668 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:21:08.290839 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:21:08.292979 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jun 20 19:21:08.296674 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.298928 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:21:08.301054 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:21:08.308265 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:21:08.308444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:21:08.308516 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:21:08.308581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.313808 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:21:08.318013 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:21:08.318524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:21:08.318909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:21:08.319687 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:21:08.323196 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.325111 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:21:08.325357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:21:08.325496 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:21:08.327725 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:21:08.327866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.331166 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.339895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:21:08.340103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:21:08.340175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:21:08.340267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:21:08.340801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:21:08.340916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jun 20 19:21:08.342245 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:21:08.342579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:21:08.343424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:21:08.347394 systemd-udevd[1503]: Using default interface naming scheme 'v255'. Jun 20 19:21:08.348291 systemd[1]: Finished ensure-sysext.service. Jun 20 19:21:08.357536 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:21:08.358778 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:21:08.359107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:21:08.359231 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:21:08.359467 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:21:08.359573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:21:08.360718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:21:08.362220 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:21:08.372312 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:21:08.376106 augenrules[1543]: No rules Jun 20 19:21:08.376405 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:21:08.376888 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:21:08.377550 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:21:08.377855 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:21:08.410387 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:21:08.447880 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:21:08.529661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 20 19:21:08.531492 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:21:08.533359 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:21:08.540791 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 19:21:08.565970 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:21:08.574765 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:21:08.581864 systemd-networkd[1546]: lo: Link UP Jun 20 19:21:08.581870 systemd-networkd[1546]: lo: Gained carrier Jun 20 19:21:08.583001 systemd-networkd[1546]: Enumeration completed Jun 20 19:21:08.583101 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:21:08.583585 systemd-networkd[1546]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Jun 20 19:21:08.586769 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 20 19:21:08.586931 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 20 19:21:08.586831 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:21:08.588088 systemd-networkd[1546]: ens192: Link UP Jun 20 19:21:08.588377 systemd-networkd[1546]: ens192: Gained carrier Jun 20 19:21:08.590064 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:21:08.601821 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:21:08.602237 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:21:08.619991 systemd-resolved[1501]: Positive Trust Anchors: Jun 20 19:21:08.620000 systemd-resolved[1501]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:21:08.620023 systemd-resolved[1501]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:21:08.646474 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:22:33.272997 systemd-timesyncd[1530]: Contacted time server 204.2.134.173:123 (0.flatcar.pool.ntp.org). Jun 20 19:22:33.273307 systemd-timesyncd[1530]: Initial clock synchronization to Fri 2025-06-20 19:22:33.272940 UTC. Jun 20 19:22:33.278570 systemd-resolved[1501]: Defaulting to hostname 'linux'. Jun 20 19:22:33.280580 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:22:33.280778 systemd[1]: Reached target network.target - Network. Jun 20 19:22:33.280991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:22:33.281177 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:22:33.281456 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:22:33.281628 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:22:33.281854 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:22:33.282142 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:22:33.282359 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:22:33.282565 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:22:33.282831 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:22:33.282874 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:22:33.283013 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:22:33.293860 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:22:33.295240 systemd[1]: Starting docker.socket - Docker Socket for the API... 
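The ens192 interface above is configured from /etc/systemd/network/00-vmware.network, whose contents are not part of this log; a minimal hand-written equivalent for a single DHCP-configured vmxnet3 NIC would look roughly like this (the file body below is an assumption, not the shipped file):

cat >/etc/systemd/network/00-vmware.network <<'EOF'
[Match]
Name=ens192

[Network]
DHCP=yes
EOF
# pick up the change without a reboot
systemctl restart systemd-networkd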
Jun 20 19:22:33.298712 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jun 20 19:22:33.299025 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:22:33.299239 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:22:33.299360 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:22:33.303743 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:22:33.304084 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:22:33.304606 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:22:33.305109 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:22:33.305208 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:22:33.305332 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:22:33.305348 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:22:33.306625 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:22:33.307442 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:22:33.309850 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:22:33.310717 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:22:33.312812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:22:33.314784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:22:33.317781 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:22:33.320247 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:22:33.323183 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:22:33.323370 jq[1613]: false Jun 20 19:22:33.325398 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:22:33.327876 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:22:33.333017 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:22:33.333678 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:22:33.334213 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:22:33.337832 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:22:33.346262 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Refreshing passwd entry cache Jun 20 19:22:33.346267 oslogin_cache_refresh[1615]: Refreshing passwd entry cache Jun 20 19:22:33.347208 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:22:33.349928 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jun 20 19:22:33.353340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:22:33.353619 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jun 20 19:22:33.353749 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:22:33.354901 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:22:33.355022 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:22:33.362765 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Failure getting users, quitting Jun 20 19:22:33.362765 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:22:33.362297 oslogin_cache_refresh[1615]: Failure getting users, quitting Jun 20 19:22:33.362309 oslogin_cache_refresh[1615]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:22:33.367429 extend-filesystems[1614]: Found /dev/sda6 Jun 20 19:22:33.369359 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Refreshing group entry cache Jun 20 19:22:33.368776 oslogin_cache_refresh[1615]: Refreshing group entry cache Jun 20 19:22:33.372735 jq[1628]: true Jun 20 19:22:33.375426 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Failure getting groups, quitting Jun 20 19:22:33.375426 google_oslogin_nss_cache[1615]: oslogin_cache_refresh[1615]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:22:33.373711 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:22:33.373023 oslogin_cache_refresh[1615]: Failure getting groups, quitting Jun 20 19:22:33.374934 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 20 19:22:33.373031 oslogin_cache_refresh[1615]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:22:33.378514 update_engine[1621]: I20250620 19:22:33.377383 1621 main.cc:92] Flatcar Update Engine starting Jun 20 19:22:33.378664 extend-filesystems[1614]: Found /dev/sda9 Jun 20 19:22:33.379717 extend-filesystems[1614]: Checking size of /dev/sda9 Jun 20 19:22:33.385519 (ntainerd)[1641]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:22:33.391917 jq[1646]: true Jun 20 19:22:33.395727 (udev-worker)[1550]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 20 19:22:33.401035 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:22:33.401432 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:22:33.406395 tar[1632]: linux-amd64/LICENSE Jun 20 19:22:33.406395 tar[1632]: linux-amd64/helm Jun 20 19:22:33.408547 extend-filesystems[1614]: Old size kept for /dev/sda9 Jun 20 19:22:33.409581 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:22:33.410267 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:22:33.412929 dbus-daemon[1611]: [system] SELinux support is enabled Jun 20 19:22:33.413171 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:22:33.415567 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:22:33.415585 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jun 20 19:22:33.417457 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:22:33.417476 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:22:33.425393 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:22:33.425623 update_engine[1621]: I20250620 19:22:33.425434 1621 update_check_scheduler.cc:74] Next update check in 10m25s Jun 20 19:22:33.429520 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:22:33.474965 bash[1675]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:22:33.476902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:22:33.477592 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:22:33.524000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:22:33.551947 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jun 20 19:22:33.555190 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jun 20 19:22:33.606481 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:22:33.634339 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jun 20 19:22:33.650050 unknown[1683]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jun 20 19:22:33.650578 unknown[1683]: Core dump limit set to -1 Jun 20 19:22:33.651878 systemd-logind[1620]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:22:33.651890 systemd-logind[1620]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:22:33.653181 systemd-logind[1620]: New seat seat0. Jun 20 19:22:33.655448 systemd[1]: Started systemd-logind.service - User Login Management. 
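locksmithd above starts with strategy="reboot"; on Flatcar the reboot strategy is normally chosen in /etc/flatcar/update.conf rather than on the locksmithd command line. An illustrative sketch (values assumed, not read from this machine):

cat >/etc/flatcar/update.conf <<'EOF'
# how locksmithd coordinates reboots after update-engine applies an update:
# reboot | etcd-lock | off
REBOOT_STRATEGY=reboot
EOF
systemctl restart locksmithd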
Jun 20 19:22:33.755131 containerd[1641]: time="2025-06-20T19:22:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:22:33.758447 containerd[1641]: time="2025-06-20T19:22:33.757884746Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:22:33.770254 containerd[1641]: time="2025-06-20T19:22:33.770225725Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.956µs" Jun 20 19:22:33.773618 containerd[1641]: time="2025-06-20T19:22:33.773596277Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:22:33.773656 containerd[1641]: time="2025-06-20T19:22:33.773619756Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:22:33.773747 containerd[1641]: time="2025-06-20T19:22:33.773735167Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:22:33.773775 containerd[1641]: time="2025-06-20T19:22:33.773748160Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:22:33.773775 containerd[1641]: time="2025-06-20T19:22:33.773763890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:22:33.773812 containerd[1641]: time="2025-06-20T19:22:33.773800364Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:22:33.773812 containerd[1641]: time="2025-06-20T19:22:33.773809594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:22:33.773960 containerd[1641]: time="2025-06-20T19:22:33.773947320Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:22:33.773960 containerd[1641]: time="2025-06-20T19:22:33.773958381Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:22:33.773992 containerd[1641]: time="2025-06-20T19:22:33.773964478Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:22:33.773992 containerd[1641]: time="2025-06-20T19:22:33.773968919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:22:33.774018 containerd[1641]: time="2025-06-20T19:22:33.774007287Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:22:33.774237 containerd[1641]: time="2025-06-20T19:22:33.774132516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:22:33.774237 containerd[1641]: time="2025-06-20T19:22:33.774152544Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jun 20 19:22:33.774237 containerd[1641]: time="2025-06-20T19:22:33.774160536Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:22:33.774237 containerd[1641]: time="2025-06-20T19:22:33.774172046Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:22:33.774646 containerd[1641]: time="2025-06-20T19:22:33.774288453Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:22:33.774646 containerd[1641]: time="2025-06-20T19:22:33.774323797Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:22:33.793948 containerd[1641]: time="2025-06-20T19:22:33.793922632Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.793958268Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.793967177Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.793974729Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.793981745Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.793987216Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.793994757Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.794001165Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:22:33.794025 containerd[1641]: time="2025-06-20T19:22:33.794007554Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:22:33.794172 containerd[1641]: time="2025-06-20T19:22:33.794030176Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:22:33.794172 containerd[1641]: time="2025-06-20T19:22:33.794035002Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:22:33.794172 containerd[1641]: time="2025-06-20T19:22:33.794059450Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:22:33.794172 containerd[1641]: time="2025-06-20T19:22:33.794146693Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:22:33.794172 containerd[1641]: time="2025-06-20T19:22:33.794158895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:22:33.794172 containerd[1641]: time="2025-06-20T19:22:33.794167682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794173480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794179206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794188663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794195591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794201361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794208196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794214406Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:22:33.794247 containerd[1641]: time="2025-06-20T19:22:33.794219754Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:22:33.794348 containerd[1641]: time="2025-06-20T19:22:33.794259227Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:22:33.794348 containerd[1641]: time="2025-06-20T19:22:33.794267134Z" level=info msg="Start snapshots syncer" Jun 20 19:22:33.794348 containerd[1641]: time="2025-06-20T19:22:33.794279655Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:22:33.794997 containerd[1641]: time="2025-06-20T19:22:33.794409770Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:22:33.794997 containerd[1641]: time="2025-06-20T19:22:33.794441045Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:22:33.796626 containerd[1641]: time="2025-06-20T19:22:33.796611711Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:22:33.796714 containerd[1641]: time="2025-06-20T19:22:33.796686116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:22:33.796736 containerd[1641]: time="2025-06-20T19:22:33.796717079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:22:33.796736 containerd[1641]: time="2025-06-20T19:22:33.796724448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:22:33.796736 containerd[1641]: time="2025-06-20T19:22:33.796731057Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:22:33.796796 containerd[1641]: time="2025-06-20T19:22:33.796739177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:22:33.796796 containerd[1641]: time="2025-06-20T19:22:33.796748361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:22:33.796796 containerd[1641]: time="2025-06-20T19:22:33.796754808Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:22:33.796796 containerd[1641]: time="2025-06-20T19:22:33.796771559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:22:33.796796 containerd[1641]: 
time="2025-06-20T19:22:33.796780963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:22:33.796796 containerd[1641]: time="2025-06-20T19:22:33.796787554Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:22:33.797061 containerd[1641]: time="2025-06-20T19:22:33.797049932Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:22:33.797086 containerd[1641]: time="2025-06-20T19:22:33.797064989Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:22:33.797086 containerd[1641]: time="2025-06-20T19:22:33.797071284Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:22:33.797086 containerd[1641]: time="2025-06-20T19:22:33.797076689Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:22:33.797086 containerd[1641]: time="2025-06-20T19:22:33.797081048Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:22:33.797086 containerd[1641]: time="2025-06-20T19:22:33.797086212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:22:33.797161 containerd[1641]: time="2025-06-20T19:22:33.797091760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:22:33.797722 containerd[1641]: time="2025-06-20T19:22:33.797710862Z" level=info msg="runtime interface created" Jun 20 19:22:33.797722 containerd[1641]: time="2025-06-20T19:22:33.797720019Z" level=info msg="created NRI interface" Jun 20 19:22:33.797756 containerd[1641]: time="2025-06-20T19:22:33.797726808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:22:33.797756 containerd[1641]: time="2025-06-20T19:22:33.797736561Z" level=info msg="Connect containerd service" Jun 20 19:22:33.797877 containerd[1641]: time="2025-06-20T19:22:33.797755625Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:22:33.800699 containerd[1641]: time="2025-06-20T19:22:33.800157054Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:22:33.811897 sshd_keygen[1652]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:22:33.843234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:22:33.853516 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:22:33.855812 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:22:33.880422 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:22:33.880553 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:22:33.883361 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:22:33.908160 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:22:33.910853 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jun 20 19:22:33.914205 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:22:33.915856 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:22:33.959239 containerd[1641]: time="2025-06-20T19:22:33.959210888Z" level=info msg="Start subscribing containerd event" Jun 20 19:22:33.959320 containerd[1641]: time="2025-06-20T19:22:33.959243706Z" level=info msg="Start recovering state" Jun 20 19:22:33.959320 containerd[1641]: time="2025-06-20T19:22:33.959314555Z" level=info msg="Start event monitor" Jun 20 19:22:33.959403 containerd[1641]: time="2025-06-20T19:22:33.959325945Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:22:33.959403 containerd[1641]: time="2025-06-20T19:22:33.959331089Z" level=info msg="Start streaming server" Jun 20 19:22:33.959403 containerd[1641]: time="2025-06-20T19:22:33.959335978Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:22:33.959403 containerd[1641]: time="2025-06-20T19:22:33.959340427Z" level=info msg="runtime interface starting up..." Jun 20 19:22:33.959403 containerd[1641]: time="2025-06-20T19:22:33.959343207Z" level=info msg="starting plugins..." Jun 20 19:22:33.959950 containerd[1641]: time="2025-06-20T19:22:33.959534603Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:22:33.959950 containerd[1641]: time="2025-06-20T19:22:33.959551699Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:22:33.959950 containerd[1641]: time="2025-06-20T19:22:33.959562198Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:22:33.959681 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:22:33.960067 containerd[1641]: time="2025-06-20T19:22:33.960055290Z" level=info msg="containerd successfully booted in 0.205132s" Jun 20 19:22:34.047974 tar[1632]: linux-amd64/README.md Jun 20 19:22:34.059715 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:22:35.025837 systemd-networkd[1546]: ens192: Gained IPv6LL Jun 20 19:22:35.027563 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:22:35.028257 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:22:35.030094 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jun 20 19:22:35.031871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:22:35.041836 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:22:35.057609 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:22:35.080528 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 20 19:22:35.080722 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jun 20 19:22:35.081356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:22:36.008396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:22:36.009117 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:22:36.009757 systemd[1]: Startup finished in 2.720s (kernel) + 6.082s (initrd) + 4.784s (userspace) = 13.588s. 
Jun 20 19:22:36.014880 (kubelet)[1817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:22:36.039795 login[1777]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:22:36.042003 login[1778]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:22:36.044965 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:22:36.046079 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:22:36.051934 systemd-logind[1620]: New session 2 of user core. Jun 20 19:22:36.056722 systemd-logind[1620]: New session 1 of user core. Jun 20 19:22:36.060183 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:22:36.061932 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 19:22:36.072004 (systemd)[1824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:22:36.074568 systemd-logind[1620]: New session c1 of user core. Jun 20 19:22:36.163762 systemd[1824]: Queued start job for default target default.target. Jun 20 19:22:36.175455 systemd[1824]: Created slice app.slice - User Application Slice. Jun 20 19:22:36.175471 systemd[1824]: Reached target paths.target - Paths. Jun 20 19:22:36.175494 systemd[1824]: Reached target timers.target - Timers. Jun 20 19:22:36.176183 systemd[1824]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:22:36.188418 systemd[1824]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:22:36.188558 systemd[1824]: Reached target sockets.target - Sockets. Jun 20 19:22:36.188648 systemd[1824]: Reached target basic.target - Basic System. Jun 20 19:22:36.188671 systemd[1824]: Reached target default.target - Main User Target. Jun 20 19:22:36.188686 systemd[1824]: Startup finished in 110ms. Jun 20 19:22:36.188832 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:22:36.189752 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:22:36.190281 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:22:36.561066 kubelet[1817]: E0620 19:22:36.561029 1817 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:22:36.562396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:22:36.562487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:22:36.562747 systemd[1]: kubelet.service: Consumed 647ms CPU time, 264M memory peak. Jun 20 19:22:46.813056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:22:46.814384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:22:47.106407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
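The kubelet failure above, open /var/lib/kubelet/config.yaml: no such file or directory, keeps repeating below and is normal on a node that has not been bootstrapped yet: kubeadm is what writes that file. A sketch of the join step that creates it; the endpoint, token and hash are placeholders, not values from this log:

kubeadm join 192.0.2.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash-of-the-cluster-CA>
# once config.yaml exists, the scheduled kubelet restarts stop failing
systemctl status kubelet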
Jun 20 19:22:47.109411 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:22:47.168605 kubelet[1866]: E0620 19:22:47.168563 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:22:47.171038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:22:47.171147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:22:47.171487 systemd[1]: kubelet.service: Consumed 110ms CPU time, 108.9M memory peak. Jun 20 19:22:57.421586 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:22:57.423035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:22:57.682431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:22:57.691004 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:22:57.725683 kubelet[1881]: E0620 19:22:57.725653 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:22:57.727157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:22:57.727247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:22:57.727585 systemd[1]: kubelet.service: Consumed 102ms CPU time, 110.2M memory peak. Jun 20 19:23:03.751472 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:23:03.752965 systemd[1]: Started sshd@0-139.178.70.105:22-147.75.109.163:43118.service - OpenSSH per-connection server daemon (147.75.109.163:43118). Jun 20 19:23:03.801987 sshd[1889]: Accepted publickey for core from 147.75.109.163 port 43118 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:23:03.802786 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:23:03.805430 systemd-logind[1620]: New session 3 of user core. Jun 20 19:23:03.813922 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:23:03.871886 systemd[1]: Started sshd@1-139.178.70.105:22-147.75.109.163:43128.service - OpenSSH per-connection server daemon (147.75.109.163:43128). Jun 20 19:23:03.912610 sshd[1894]: Accepted publickey for core from 147.75.109.163 port 43128 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:23:03.913390 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:23:03.915956 systemd-logind[1620]: New session 4 of user core. Jun 20 19:23:03.923892 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:23:03.972164 sshd[1896]: Connection closed by 147.75.109.163 port 43128 Jun 20 19:23:03.972802 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Jun 20 19:23:03.982522 systemd[1]: sshd@1-139.178.70.105:22-147.75.109.163:43128.service: Deactivated successfully. 
Jun 20 19:23:03.983315 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:23:03.983747 systemd-logind[1620]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:23:03.984833 systemd[1]: Started sshd@2-139.178.70.105:22-147.75.109.163:43144.service - OpenSSH per-connection server daemon (147.75.109.163:43144). Jun 20 19:23:03.987011 systemd-logind[1620]: Removed session 4. Jun 20 19:23:04.026054 sshd[1902]: Accepted publickey for core from 147.75.109.163 port 43144 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:23:04.027069 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:23:04.030193 systemd-logind[1620]: New session 5 of user core. Jun 20 19:23:04.038800 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:23:04.085888 sshd[1904]: Connection closed by 147.75.109.163 port 43144 Jun 20 19:23:04.085819 sshd-session[1902]: pam_unix(sshd:session): session closed for user core Jun 20 19:23:04.092114 systemd[1]: sshd@2-139.178.70.105:22-147.75.109.163:43144.service: Deactivated successfully. Jun 20 19:23:04.093116 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:23:04.093687 systemd-logind[1620]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:23:04.095034 systemd[1]: Started sshd@3-139.178.70.105:22-147.75.109.163:43156.service - OpenSSH per-connection server daemon (147.75.109.163:43156). Jun 20 19:23:04.096744 systemd-logind[1620]: Removed session 5. Jun 20 19:23:04.137383 sshd[1910]: Accepted publickey for core from 147.75.109.163 port 43156 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:23:04.138395 sshd-session[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:23:04.140891 systemd-logind[1620]: New session 6 of user core. Jun 20 19:23:04.151929 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:23:04.200206 sshd[1912]: Connection closed by 147.75.109.163 port 43156 Jun 20 19:23:04.200014 sshd-session[1910]: pam_unix(sshd:session): session closed for user core Jun 20 19:23:04.209332 systemd[1]: sshd@3-139.178.70.105:22-147.75.109.163:43156.service: Deactivated successfully. Jun 20 19:23:04.210468 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:23:04.211095 systemd-logind[1620]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:23:04.213125 systemd[1]: Started sshd@4-139.178.70.105:22-147.75.109.163:43158.service - OpenSSH per-connection server daemon (147.75.109.163:43158). Jun 20 19:23:04.214241 systemd-logind[1620]: Removed session 6. Jun 20 19:23:04.258080 sshd[1918]: Accepted publickey for core from 147.75.109.163 port 43158 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:23:04.258852 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:23:04.261522 systemd-logind[1620]: New session 7 of user core. Jun 20 19:23:04.270800 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 20 19:23:04.331660 sudo[1921]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:23:04.331986 sudo[1921]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:23:04.346287 sudo[1921]: pam_unix(sudo:session): session closed for user root Jun 20 19:23:04.347463 sshd[1920]: Connection closed by 147.75.109.163 port 43158 Jun 20 19:23:04.347856 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Jun 20 19:23:04.356789 systemd[1]: sshd@4-139.178.70.105:22-147.75.109.163:43158.service: Deactivated successfully. Jun 20 19:23:04.357992 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:23:04.358612 systemd-logind[1620]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:23:04.360615 systemd[1]: Started sshd@5-139.178.70.105:22-147.75.109.163:43160.service - OpenSSH per-connection server daemon (147.75.109.163:43160). Jun 20 19:23:04.361646 systemd-logind[1620]: Removed session 7. Jun 20 19:23:04.400325 sshd[1927]: Accepted publickey for core from 147.75.109.163 port 43160 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:23:04.401135 sshd-session[1927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:23:04.403620 systemd-logind[1620]: New session 8 of user core. Jun 20 19:23:04.413777 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:23:04.461006 sudo[1931]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:23:04.461158 sudo[1931]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:23:04.506391 sudo[1931]: pam_unix(sudo:session): session closed for user root Jun 20 19:23:04.510003 sudo[1930]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:23:04.510170 sudo[1930]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:23:04.517578 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:23:04.545325 augenrules[1953]: No rules Jun 20 19:23:04.546012 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:23:04.546237 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:23:04.546936 sudo[1930]: pam_unix(sudo:session): session closed for user root Jun 20 19:23:04.548577 sshd[1929]: Connection closed by 147.75.109.163 port 43160 Jun 20 19:23:04.548848 sshd-session[1927]: pam_unix(sshd:session): session closed for user core Jun 20 19:23:04.553929 systemd[1]: sshd@5-139.178.70.105:22-147.75.109.163:43160.service: Deactivated successfully. Jun 20 19:23:04.554756 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:23:04.555202 systemd-logind[1620]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:23:04.556650 systemd[1]: Started sshd@6-139.178.70.105:22-147.75.109.163:43176.service - OpenSSH per-connection server daemon (147.75.109.163:43176). Jun 20 19:23:04.558947 systemd-logind[1620]: Removed session 8. Jun 20 19:23:04.595404 sshd[1962]: Accepted publickey for core from 147.75.109.163 port 43176 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:23:04.596577 sshd-session[1962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:23:04.599301 systemd-logind[1620]: New session 9 of user core. Jun 20 19:23:04.605807 systemd[1]: Started session-9.scope - Session 9 of User core. 
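After the two files under /etc/audit/rules.d were removed above, augenrules reports "No rules" and audit-rules.service loads an empty set. If rules are wanted again, one drop-in plus a reload is enough; a minimal sketch with an assumed example rule:

cat >/etc/audit/rules.d/10-exec-log.rules <<'EOF'
# log every execve() on 64-bit syscalls under the key exec_log
-a always,exit -F arch=b64 -S execve -k exec_log
EOF
augenrules --load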
Jun 20 19:23:04.655135 sudo[1965]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:23:04.655499 sudo[1965]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:23:05.048199 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:23:05.060998 (dockerd)[1982]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:23:05.264149 dockerd[1982]: time="2025-06-20T19:23:05.263951628Z" level=info msg="Starting up" Jun 20 19:23:05.264938 dockerd[1982]: time="2025-06-20T19:23:05.264915870Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:23:05.284387 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1892019219-merged.mount: Deactivated successfully. Jun 20 19:23:05.303740 dockerd[1982]: time="2025-06-20T19:23:05.303594937Z" level=info msg="Loading containers: start." Jun 20 19:23:05.314711 kernel: Initializing XFRM netlink socket Jun 20 19:23:05.526509 systemd-networkd[1546]: docker0: Link UP Jun 20 19:23:05.527750 dockerd[1982]: time="2025-06-20T19:23:05.527714309Z" level=info msg="Loading containers: done." Jun 20 19:23:05.536143 dockerd[1982]: time="2025-06-20T19:23:05.535922964Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:23:05.536143 dockerd[1982]: time="2025-06-20T19:23:05.535978574Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:23:05.536143 dockerd[1982]: time="2025-06-20T19:23:05.536032001Z" level=info msg="Initializing buildkit" Jun 20 19:23:05.546265 dockerd[1982]: time="2025-06-20T19:23:05.546237114Z" level=info msg="Completed buildkit initialization" Jun 20 19:23:05.550537 dockerd[1982]: time="2025-06-20T19:23:05.550510550Z" level=info msg="Daemon has completed initialization" Jun 20 19:23:05.550721 dockerd[1982]: time="2025-06-20T19:23:05.550629792Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:23:05.550716 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:23:06.210070 containerd[1641]: time="2025-06-20T19:23:06.210044739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:23:06.740364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429517492.mount: Deactivated successfully. 
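The registry.k8s.io image pulls logged from here on are performed by containerd's CRI plugin in the k8s.io namespace it registered earlier. For comparison, a roughly equivalent manual pull of the same image:

ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.6
ctr --namespace k8s.io images ls | grep kube-apiserver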
Jun 20 19:23:07.965565 containerd[1641]: time="2025-06-20T19:23:07.965350122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:07.965835 containerd[1641]: time="2025-06-20T19:23:07.965823003Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jun 20 19:23:07.966115 containerd[1641]: time="2025-06-20T19:23:07.966099976Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:07.968002 containerd[1641]: time="2025-06-20T19:23:07.967373479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:07.968002 containerd[1641]: time="2025-06-20T19:23:07.967918512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.757850965s" Jun 20 19:23:07.968002 containerd[1641]: time="2025-06-20T19:23:07.967935186Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 20 19:23:07.968352 containerd[1641]: time="2025-06-20T19:23:07.968333092Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:23:07.977677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:23:07.978924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:08.254153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:08.262018 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:23:08.300791 kubelet[2245]: E0620 19:23:08.300745 2245 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:23:08.302434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:23:08.302562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:23:08.302907 systemd[1]: kubelet.service: Consumed 105ms CPU time, 110.5M memory peak. 
Jun 20 19:23:09.835358 containerd[1641]: time="2025-06-20T19:23:09.835331746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:09.840488 containerd[1641]: time="2025-06-20T19:23:09.840469993Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jun 20 19:23:09.845919 containerd[1641]: time="2025-06-20T19:23:09.845889024Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:09.853848 containerd[1641]: time="2025-06-20T19:23:09.853810654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:09.854493 containerd[1641]: time="2025-06-20T19:23:09.854224886Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.885815352s" Jun 20 19:23:09.854493 containerd[1641]: time="2025-06-20T19:23:09.854243556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:23:09.854562 containerd[1641]: time="2025-06-20T19:23:09.854544864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:23:11.340360 containerd[1641]: time="2025-06-20T19:23:11.340319986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:11.347670 containerd[1641]: time="2025-06-20T19:23:11.347626278Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jun 20 19:23:11.355846 containerd[1641]: time="2025-06-20T19:23:11.355820639Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:11.366604 containerd[1641]: time="2025-06-20T19:23:11.366582616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:11.367518 containerd[1641]: time="2025-06-20T19:23:11.367501038Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.512938469s" Jun 20 19:23:11.367545 containerd[1641]: time="2025-06-20T19:23:11.367520869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:23:11.367793 
containerd[1641]: time="2025-06-20T19:23:11.367779041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:23:12.653891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938818498.mount: Deactivated successfully. Jun 20 19:23:13.283890 containerd[1641]: time="2025-06-20T19:23:13.283859694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:13.293357 containerd[1641]: time="2025-06-20T19:23:13.293325803Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jun 20 19:23:13.305202 containerd[1641]: time="2025-06-20T19:23:13.305161405Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:13.319919 containerd[1641]: time="2025-06-20T19:23:13.319886233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:13.320280 containerd[1641]: time="2025-06-20T19:23:13.320255654Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.952460599s" Jun 20 19:23:13.320548 containerd[1641]: time="2025-06-20T19:23:13.320348581Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 20 19:23:13.320717 containerd[1641]: time="2025-06-20T19:23:13.320671117Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:23:13.942665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2334736571.mount: Deactivated successfully. 
Jun 20 19:23:15.183809 containerd[1641]: time="2025-06-20T19:23:15.183778088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:15.193545 containerd[1641]: time="2025-06-20T19:23:15.193495879Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 20 19:23:15.199608 containerd[1641]: time="2025-06-20T19:23:15.199539647Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:15.207774 containerd[1641]: time="2025-06-20T19:23:15.207722521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:15.209482 containerd[1641]: time="2025-06-20T19:23:15.209455637Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.888759549s" Jun 20 19:23:15.209737 containerd[1641]: time="2025-06-20T19:23:15.209584459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:23:15.211169 containerd[1641]: time="2025-06-20T19:23:15.211149161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:23:15.820351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038234404.mount: Deactivated successfully. 
Jun 20 19:23:15.823497 containerd[1641]: time="2025-06-20T19:23:15.822897493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:23:15.823889 containerd[1641]: time="2025-06-20T19:23:15.823870909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:23:15.824418 containerd[1641]: time="2025-06-20T19:23:15.824399963Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:23:15.826054 containerd[1641]: time="2025-06-20T19:23:15.826029311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:23:15.826812 containerd[1641]: time="2025-06-20T19:23:15.826791081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 615.618266ms" Jun 20 19:23:15.826894 containerd[1641]: time="2025-06-20T19:23:15.826881700Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:23:15.827282 containerd[1641]: time="2025-06-20T19:23:15.827259846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:23:16.798933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518777606.mount: Deactivated successfully. Jun 20 19:23:18.327007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 19:23:18.328818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:18.434873 update_engine[1621]: I20250620 19:23:18.434825 1621 update_attempter.cc:509] Updating boot flags... Jun 20 19:23:19.477114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:19.479542 (kubelet)[2405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:23:19.656154 kubelet[2405]: E0620 19:23:19.656117 2405 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:23:19.657987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:23:19.658155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:23:19.658625 systemd[1]: kubelet.service: Consumed 115ms CPU time, 111.2M memory peak. 
Jun 20 19:23:23.792637 containerd[1641]: time="2025-06-20T19:23:23.791988598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:23.826486 containerd[1641]: time="2025-06-20T19:23:23.826466902Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jun 20 19:23:23.837993 containerd[1641]: time="2025-06-20T19:23:23.837972568Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:23.862284 containerd[1641]: time="2025-06-20T19:23:23.862265574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:23.862880 containerd[1641]: time="2025-06-20T19:23:23.862863602Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 8.03558439s" Jun 20 19:23:23.862908 containerd[1641]: time="2025-06-20T19:23:23.862881708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:23:26.250996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:26.251095 systemd[1]: kubelet.service: Consumed 115ms CPU time, 111.2M memory peak. Jun 20 19:23:26.252654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:26.274187 systemd[1]: Reload requested from client PID 2445 ('systemctl') (unit session-9.scope)... Jun 20 19:23:26.274203 systemd[1]: Reloading... Jun 20 19:23:26.382711 zram_generator::config[2488]: No configuration found. Jun 20 19:23:26.446337 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:23:26.454338 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:23:26.528204 systemd[1]: Reloading finished in 253 ms. Jun 20 19:23:26.657765 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:23:26.657847 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:23:26.658148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:26.659990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:27.044504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:27.047785 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:23:27.131372 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:23:27.131372 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:23:27.131372 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:23:27.131372 kubelet[2556]: I0620 19:23:27.130856 2556 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:23:27.374030 kubelet[2556]: I0620 19:23:27.374001 2556 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:23:27.374127 kubelet[2556]: I0620 19:23:27.374121 2556 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:23:27.374508 kubelet[2556]: I0620 19:23:27.374498 2556 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:23:27.506154 kubelet[2556]: E0620 19:23:27.506129 2556 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:27.509473 kubelet[2556]: I0620 19:23:27.509349 2556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:23:27.578707 kubelet[2556]: I0620 19:23:27.578667 2556 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:23:27.594083 kubelet[2556]: I0620 19:23:27.594066 2556 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:23:27.637806 kubelet[2556]: I0620 19:23:27.637712 2556 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:23:27.637878 kubelet[2556]: I0620 19:23:27.637761 2556 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:23:27.637946 kubelet[2556]: I0620 19:23:27.637879 2556 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:23:27.637946 kubelet[2556]: I0620 19:23:27.637885 2556 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:23:27.644806 kubelet[2556]: I0620 19:23:27.644786 2556 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:27.695788 kubelet[2556]: I0620 19:23:27.695758 2556 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:23:27.695788 kubelet[2556]: I0620 19:23:27.695790 2556 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:23:27.716438 kubelet[2556]: I0620 19:23:27.716411 2556 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:23:27.716438 kubelet[2556]: I0620 19:23:27.716441 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:23:27.753708 kubelet[2556]: I0620 19:23:27.753571 2556 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:23:27.770138 kubelet[2556]: W0620 19:23:27.769772 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:27.770138 kubelet[2556]: E0620 19:23:27.769843 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:27.770138 kubelet[2556]: I0620 19:23:27.769993 2556 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:23:27.770554 kubelet[2556]: W0620 19:23:27.770544 2556 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:23:27.772133 kubelet[2556]: W0620 19:23:27.772102 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:27.772231 kubelet[2556]: E0620 19:23:27.772218 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:27.773243 kubelet[2556]: I0620 19:23:27.773230 2556 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:23:27.773329 kubelet[2556]: I0620 19:23:27.773321 2556 server.go:1287] "Started kubelet" Jun 20 19:23:27.774440 kubelet[2556]: I0620 19:23:27.774291 2556 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:23:27.776391 kubelet[2556]: I0620 19:23:27.776373 2556 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:23:27.777358 kubelet[2556]: I0620 19:23:27.777127 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:23:27.777358 kubelet[2556]: I0620 19:23:27.777305 2556 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:23:27.779208 kubelet[2556]: I0620 19:23:27.778747 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:23:27.781715 kubelet[2556]: E0620 19:23:27.778227 2556 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad6a4a254c223 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:23:27.773303331 +0000 UTC m=+0.723144920,LastTimestamp:2025-06-20 19:23:27.773303331 +0000 UTC m=+0.723144920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:23:27.781715 kubelet[2556]: I0620 19:23:27.781376 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:23:27.783760 kubelet[2556]: E0620 19:23:27.783214 2556 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:27.783760 kubelet[2556]: 
I0620 19:23:27.783239 2556 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:23:27.783760 kubelet[2556]: I0620 19:23:27.783351 2556 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:23:27.783760 kubelet[2556]: I0620 19:23:27.783386 2556 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:23:27.783760 kubelet[2556]: W0620 19:23:27.783622 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:27.783760 kubelet[2556]: E0620 19:23:27.783649 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:27.783910 kubelet[2556]: E0620 19:23:27.783783 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Jun 20 19:23:27.792962 kubelet[2556]: I0620 19:23:27.792946 2556 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:23:27.793162 kubelet[2556]: I0620 19:23:27.793149 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:23:27.807071 kubelet[2556]: E0620 19:23:27.806772 2556 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:23:27.807339 kubelet[2556]: I0620 19:23:27.806858 2556 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:23:27.812338 kubelet[2556]: I0620 19:23:27.812303 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:23:27.813122 kubelet[2556]: I0620 19:23:27.813080 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:23:27.813122 kubelet[2556]: I0620 19:23:27.813095 2556 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:23:27.813122 kubelet[2556]: I0620 19:23:27.813108 2556 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:23:27.813122 kubelet[2556]: I0620 19:23:27.813112 2556 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:23:27.813259 kubelet[2556]: E0620 19:23:27.813241 2556 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:23:27.827950 kubelet[2556]: W0620 19:23:27.827924 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:27.828108 kubelet[2556]: E0620 19:23:27.828050 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:27.834876 kubelet[2556]: I0620 19:23:27.834855 2556 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:23:27.834876 kubelet[2556]: I0620 19:23:27.834867 2556 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:23:27.834876 kubelet[2556]: I0620 19:23:27.834883 2556 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:27.848764 kubelet[2556]: I0620 19:23:27.848738 2556 policy_none.go:49] "None policy: Start" Jun 20 19:23:27.848764 kubelet[2556]: I0620 19:23:27.848766 2556 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:23:27.848872 kubelet[2556]: I0620 19:23:27.848802 2556 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:23:27.866121 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:23:27.874602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:23:27.876766 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:23:27.883334 kubelet[2556]: E0620 19:23:27.883317 2556 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:23:27.884180 kubelet[2556]: I0620 19:23:27.884166 2556 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:23:27.884296 kubelet[2556]: I0620 19:23:27.884285 2556 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:23:27.884332 kubelet[2556]: I0620 19:23:27.884294 2556 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:23:27.884906 kubelet[2556]: I0620 19:23:27.884825 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:23:27.885747 kubelet[2556]: E0620 19:23:27.885734 2556 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:23:27.885782 kubelet[2556]: E0620 19:23:27.885760 2556 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 20 19:23:27.927747 systemd[1]: Created slice kubepods-burstable-pod588897bca4863f82560dab50f71ce025.slice - libcontainer container kubepods-burstable-pod588897bca4863f82560dab50f71ce025.slice. 
Jun 20 19:23:27.939499 kubelet[2556]: E0620 19:23:27.939415 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:27.942236 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jun 20 19:23:27.953265 kubelet[2556]: E0620 19:23:27.953161 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:27.955191 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jun 20 19:23:27.956683 kubelet[2556]: E0620 19:23:27.956573 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:27.985137 kubelet[2556]: E0620 19:23:27.984992 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Jun 20 19:23:27.985677 kubelet[2556]: I0620 19:23:27.985659 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:23:27.985866 kubelet[2556]: E0620 19:23:27.985851 2556 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 20 19:23:28.084337 kubelet[2556]: I0620 19:23:28.084309 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/588897bca4863f82560dab50f71ce025-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"588897bca4863f82560dab50f71ce025\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:28.084337 kubelet[2556]: I0620 19:23:28.084338 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:28.084459 kubelet[2556]: I0620 19:23:28.084351 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/588897bca4863f82560dab50f71ce025-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"588897bca4863f82560dab50f71ce025\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:28.084459 kubelet[2556]: I0620 19:23:28.084362 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/588897bca4863f82560dab50f71ce025-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"588897bca4863f82560dab50f71ce025\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:28.084459 kubelet[2556]: I0620 19:23:28.084373 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:28.084459 kubelet[2556]: I0620 19:23:28.084382 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:28.084459 kubelet[2556]: I0620 19:23:28.084395 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:28.084541 kubelet[2556]: I0620 19:23:28.084404 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:28.084541 kubelet[2556]: I0620 19:23:28.084413 2556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:28.187315 kubelet[2556]: I0620 19:23:28.187254 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:23:28.187517 kubelet[2556]: E0620 19:23:28.187447 2556 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 20 19:23:28.240457 containerd[1641]: time="2025-06-20T19:23:28.240428613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:588897bca4863f82560dab50f71ce025,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:28.265367 containerd[1641]: time="2025-06-20T19:23:28.265259262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:28.265470 containerd[1641]: time="2025-06-20T19:23:28.265459959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:28.385506 kubelet[2556]: E0620 19:23:28.385478 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Jun 20 19:23:28.589321 kubelet[2556]: I0620 19:23:28.589040 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:23:28.589321 kubelet[2556]: E0620 19:23:28.589234 2556 kubelet_node_status.go:107] "Unable to register 
node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 20 19:23:28.627133 kubelet[2556]: W0620 19:23:28.627036 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:28.627133 kubelet[2556]: E0620 19:23:28.627097 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:28.676619 kubelet[2556]: W0620 19:23:28.676563 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:28.676619 kubelet[2556]: E0620 19:23:28.676604 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:28.694873 containerd[1641]: time="2025-06-20T19:23:28.694726304Z" level=info msg="connecting to shim a28c6370c2db5c625588e903a905fa5c69c3cf409b4d5fd8ffa7aa234cd8466e" address="unix:///run/containerd/s/10eca7470afe229dd4af46e4ee0ff99086ee31cdb080d2c9df0e0adc437253aa" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:28.695089 containerd[1641]: time="2025-06-20T19:23:28.695070942Z" level=info msg="connecting to shim c1f237a8bbf99b58a6209da48530ed4f5994d12d598cf7db174f84d4b9bf6307" address="unix:///run/containerd/s/583568c9f787023a7daec65c3764911546d0fd4a46b06a66d6735d44645ee696" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:28.701095 containerd[1641]: time="2025-06-20T19:23:28.701071040Z" level=info msg="connecting to shim 839b92d3b6beaa6b9878276ba1e880800e3dfa8febf910e78c374d397cdcfac5" address="unix:///run/containerd/s/930008d3840abdb1162ea51a0a893fea84b1f67c06fc4e837508c406e1d80375" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:28.842841 systemd[1]: Started cri-containerd-a28c6370c2db5c625588e903a905fa5c69c3cf409b4d5fd8ffa7aa234cd8466e.scope - libcontainer container a28c6370c2db5c625588e903a905fa5c69c3cf409b4d5fd8ffa7aa234cd8466e. Jun 20 19:23:28.846934 systemd[1]: Started cri-containerd-839b92d3b6beaa6b9878276ba1e880800e3dfa8febf910e78c374d397cdcfac5.scope - libcontainer container 839b92d3b6beaa6b9878276ba1e880800e3dfa8febf910e78c374d397cdcfac5. Jun 20 19:23:28.849052 systemd[1]: Started cri-containerd-c1f237a8bbf99b58a6209da48530ed4f5994d12d598cf7db174f84d4b9bf6307.scope - libcontainer container c1f237a8bbf99b58a6209da48530ed4f5994d12d598cf7db174f84d4b9bf6307. 
Jun 20 19:23:28.902785 containerd[1641]: time="2025-06-20T19:23:28.902736788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a28c6370c2db5c625588e903a905fa5c69c3cf409b4d5fd8ffa7aa234cd8466e\"" Jun 20 19:23:28.905954 containerd[1641]: time="2025-06-20T19:23:28.905921896Z" level=info msg="CreateContainer within sandbox \"a28c6370c2db5c625588e903a905fa5c69c3cf409b4d5fd8ffa7aa234cd8466e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:23:28.915047 containerd[1641]: time="2025-06-20T19:23:28.915018911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:588897bca4863f82560dab50f71ce025,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1f237a8bbf99b58a6209da48530ed4f5994d12d598cf7db174f84d4b9bf6307\"" Jun 20 19:23:28.916680 containerd[1641]: time="2025-06-20T19:23:28.916589768Z" level=info msg="CreateContainer within sandbox \"c1f237a8bbf99b58a6209da48530ed4f5994d12d598cf7db174f84d4b9bf6307\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:23:28.930067 containerd[1641]: time="2025-06-20T19:23:28.930045873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"839b92d3b6beaa6b9878276ba1e880800e3dfa8febf910e78c374d397cdcfac5\"" Jun 20 19:23:28.940466 containerd[1641]: time="2025-06-20T19:23:28.940446379Z" level=info msg="CreateContainer within sandbox \"839b92d3b6beaa6b9878276ba1e880800e3dfa8febf910e78c374d397cdcfac5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:23:28.969498 containerd[1641]: time="2025-06-20T19:23:28.969464412Z" level=info msg="Container 9ac3096d18f3407b679bdb2204d9198728b5083666e2793bf4844ac8dece3c81: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:28.985212 containerd[1641]: time="2025-06-20T19:23:28.985171984Z" level=info msg="Container af04d0c541b8e6481be89d634c9f95f72372938f50bc4175c9eb789bb6ef6e42: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:28.993586 containerd[1641]: time="2025-06-20T19:23:28.993306322Z" level=info msg="Container 96c01f597aa10261ecf931ef311be4724d9d0e029f7b6d2dffe6fdf71eaec0b8: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:28.993586 containerd[1641]: time="2025-06-20T19:23:28.993489678Z" level=info msg="CreateContainer within sandbox \"c1f237a8bbf99b58a6209da48530ed4f5994d12d598cf7db174f84d4b9bf6307\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af04d0c541b8e6481be89d634c9f95f72372938f50bc4175c9eb789bb6ef6e42\"" Jun 20 19:23:28.994187 containerd[1641]: time="2025-06-20T19:23:28.994040947Z" level=info msg="CreateContainer within sandbox \"a28c6370c2db5c625588e903a905fa5c69c3cf409b4d5fd8ffa7aa234cd8466e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ac3096d18f3407b679bdb2204d9198728b5083666e2793bf4844ac8dece3c81\"" Jun 20 19:23:28.994187 containerd[1641]: time="2025-06-20T19:23:28.994148043Z" level=info msg="StartContainer for \"af04d0c541b8e6481be89d634c9f95f72372938f50bc4175c9eb789bb6ef6e42\"" Jun 20 19:23:28.994926 containerd[1641]: time="2025-06-20T19:23:28.994875824Z" level=info msg="connecting to shim af04d0c541b8e6481be89d634c9f95f72372938f50bc4175c9eb789bb6ef6e42" 
address="unix:///run/containerd/s/583568c9f787023a7daec65c3764911546d0fd4a46b06a66d6735d44645ee696" protocol=ttrpc version=3 Jun 20 19:23:28.995319 containerd[1641]: time="2025-06-20T19:23:28.995304378Z" level=info msg="StartContainer for \"9ac3096d18f3407b679bdb2204d9198728b5083666e2793bf4844ac8dece3c81\"" Jun 20 19:23:28.995940 containerd[1641]: time="2025-06-20T19:23:28.995921183Z" level=info msg="connecting to shim 9ac3096d18f3407b679bdb2204d9198728b5083666e2793bf4844ac8dece3c81" address="unix:///run/containerd/s/10eca7470afe229dd4af46e4ee0ff99086ee31cdb080d2c9df0e0adc437253aa" protocol=ttrpc version=3 Jun 20 19:23:28.998879 containerd[1641]: time="2025-06-20T19:23:28.998851173Z" level=info msg="CreateContainer within sandbox \"839b92d3b6beaa6b9878276ba1e880800e3dfa8febf910e78c374d397cdcfac5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"96c01f597aa10261ecf931ef311be4724d9d0e029f7b6d2dffe6fdf71eaec0b8\"" Jun 20 19:23:29.000149 containerd[1641]: time="2025-06-20T19:23:28.999241472Z" level=info msg="StartContainer for \"96c01f597aa10261ecf931ef311be4724d9d0e029f7b6d2dffe6fdf71eaec0b8\"" Jun 20 19:23:29.000149 containerd[1641]: time="2025-06-20T19:23:28.999856534Z" level=info msg="connecting to shim 96c01f597aa10261ecf931ef311be4724d9d0e029f7b6d2dffe6fdf71eaec0b8" address="unix:///run/containerd/s/930008d3840abdb1162ea51a0a893fea84b1f67c06fc4e837508c406e1d80375" protocol=ttrpc version=3 Jun 20 19:23:29.011843 systemd[1]: Started cri-containerd-9ac3096d18f3407b679bdb2204d9198728b5083666e2793bf4844ac8dece3c81.scope - libcontainer container 9ac3096d18f3407b679bdb2204d9198728b5083666e2793bf4844ac8dece3c81. Jun 20 19:23:29.035889 systemd[1]: Started cri-containerd-af04d0c541b8e6481be89d634c9f95f72372938f50bc4175c9eb789bb6ef6e42.scope - libcontainer container af04d0c541b8e6481be89d634c9f95f72372938f50bc4175c9eb789bb6ef6e42. Jun 20 19:23:29.043844 systemd[1]: Started cri-containerd-96c01f597aa10261ecf931ef311be4724d9d0e029f7b6d2dffe6fdf71eaec0b8.scope - libcontainer container 96c01f597aa10261ecf931ef311be4724d9d0e029f7b6d2dffe6fdf71eaec0b8. 
Jun 20 19:23:29.095023 containerd[1641]: time="2025-06-20T19:23:29.094748742Z" level=info msg="StartContainer for \"9ac3096d18f3407b679bdb2204d9198728b5083666e2793bf4844ac8dece3c81\" returns successfully" Jun 20 19:23:29.095584 containerd[1641]: time="2025-06-20T19:23:29.094877378Z" level=info msg="StartContainer for \"af04d0c541b8e6481be89d634c9f95f72372938f50bc4175c9eb789bb6ef6e42\" returns successfully" Jun 20 19:23:29.119720 containerd[1641]: time="2025-06-20T19:23:29.119036916Z" level=info msg="StartContainer for \"96c01f597aa10261ecf931ef311be4724d9d0e029f7b6d2dffe6fdf71eaec0b8\" returns successfully" Jun 20 19:23:29.127476 kubelet[2556]: W0620 19:23:29.127429 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:29.127581 kubelet[2556]: E0620 19:23:29.127480 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:29.186028 kubelet[2556]: E0620 19:23:29.186001 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Jun 20 19:23:29.228738 kubelet[2556]: W0620 19:23:29.228675 2556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jun 20 19:23:29.228970 kubelet[2556]: E0620 19:23:29.228745 2556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:29.392041 kubelet[2556]: I0620 19:23:29.391207 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:23:29.392169 kubelet[2556]: E0620 19:23:29.392156 2556 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jun 20 19:23:29.618690 kubelet[2556]: E0620 19:23:29.618625 2556 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:23:29.840022 kubelet[2556]: E0620 19:23:29.839949 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:29.841706 kubelet[2556]: E0620 19:23:29.840944 2556 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:29.842201 kubelet[2556]: E0620 19:23:29.842194 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:30.787918 kubelet[2556]: E0620 19:23:30.787895 2556 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jun 20 19:23:30.791316 kubelet[2556]: E0620 19:23:30.791301 2556 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jun 20 19:23:30.843847 kubelet[2556]: E0620 19:23:30.843825 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:30.844082 kubelet[2556]: E0620 19:23:30.843958 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:30.844082 kubelet[2556]: E0620 19:23:30.844021 2556 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:23:30.993569 kubelet[2556]: I0620 19:23:30.993516 2556 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:23:30.998496 kubelet[2556]: I0620 19:23:30.998470 2556 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 20 19:23:30.998496 kubelet[2556]: E0620 19:23:30.998485 2556 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 20 19:23:31.084202 kubelet[2556]: I0620 19:23:31.084157 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:31.087703 kubelet[2556]: E0620 19:23:31.087657 2556 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:31.087703 kubelet[2556]: I0620 19:23:31.087672 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:31.088715 kubelet[2556]: E0620 19:23:31.088625 2556 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:31.088715 kubelet[2556]: I0620 19:23:31.088636 2556 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:31.089291 kubelet[2556]: E0620 19:23:31.089279 2556 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:31.754932 kubelet[2556]: I0620 19:23:31.754915 2556 apiserver.go:52] "Watching apiserver" Jun 20 19:23:31.784153 kubelet[2556]: I0620 19:23:31.784131 2556 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:23:31.843934 kubelet[2556]: I0620 19:23:31.843784 2556 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:32.262390 systemd[1]: Reload requested from client PID 2829 ('systemctl') (unit session-9.scope)... Jun 20 19:23:32.262611 systemd[1]: Reloading... Jun 20 19:23:32.327813 zram_generator::config[2876]: No configuration found. Jun 20 19:23:32.408799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:23:32.417820 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:23:32.495819 systemd[1]: Reloading finished in 232 ms. Jun 20 19:23:32.518280 kubelet[2556]: I0620 19:23:32.517755 2556 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:23:32.518166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:32.532369 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:23:32.532539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:32.532576 systemd[1]: kubelet.service: Consumed 525ms CPU time, 128.6M memory peak. Jun 20 19:23:32.534063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:23:32.886518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:23:32.893124 (kubelet)[2940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:23:33.081449 kubelet[2940]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:23:33.081449 kubelet[2940]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:23:33.081449 kubelet[2940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:23:33.081725 kubelet[2940]: I0620 19:23:33.081517 2940 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:23:33.105090 kubelet[2940]: I0620 19:23:33.105062 2940 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:23:33.105090 kubelet[2940]: I0620 19:23:33.105082 2940 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:23:33.105316 kubelet[2940]: I0620 19:23:33.105299 2940 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:23:33.110428 kubelet[2940]: I0620 19:23:33.110411 2940 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:23:33.180753 kubelet[2940]: I0620 19:23:33.180395 2940 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:23:33.183501 kubelet[2940]: I0620 19:23:33.183486 2940 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:23:33.186862 kubelet[2940]: I0620 19:23:33.186849 2940 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:23:33.187139 kubelet[2940]: I0620 19:23:33.187109 2940 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:23:33.187322 kubelet[2940]: I0620 19:23:33.187133 2940 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:23:33.187401 kubelet[2940]: I0620 19:23:33.187325 2940 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:23:33.187401 kubelet[2940]: I0620 19:23:33.187334 2940 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:23:33.187401 kubelet[2940]: I0620 19:23:33.187371 2940 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:33.187553 kubelet[2940]: I0620 19:23:33.187536 2940 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:23:33.187586 kubelet[2940]: I0620 19:23:33.187559 2940 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:23:33.188734 kubelet[2940]: I0620 19:23:33.188079 2940 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:23:33.188734 kubelet[2940]: I0620 19:23:33.188126 2940 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:23:33.189861 kubelet[2940]: I0620 19:23:33.189840 2940 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:23:33.190240 kubelet[2940]: I0620 19:23:33.190220 2940 kubelet.go:890] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode" Jun 20 19:23:33.191707 kubelet[2940]: I0620 19:23:33.190669 2940 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:23:33.191707 kubelet[2940]: I0620 19:23:33.190714 2940 server.go:1287] "Started kubelet" Jun 20 19:23:33.192375 kubelet[2940]: I0620 19:23:33.192355 2940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:23:33.206991 kubelet[2940]: I0620 19:23:33.206974 2940 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:23:33.207237 kubelet[2940]: I0620 19:23:33.207227 2940 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:23:33.207539 kubelet[2940]: I0620 19:23:33.207530 2940 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:23:33.208080 kubelet[2940]: I0620 19:23:33.208054 2940 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:23:33.209239 kubelet[2940]: E0620 19:23:33.209219 2940 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:23:33.209414 kubelet[2940]: I0620 19:23:33.209401 2940 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:23:33.209490 kubelet[2940]: I0620 19:23:33.209465 2940 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:23:33.211283 kubelet[2940]: I0620 19:23:33.211270 2940 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:23:33.222054 kubelet[2940]: I0620 19:23:33.222028 2940 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:23:33.223226 kubelet[2940]: I0620 19:23:33.223209 2940 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:23:33.223279 kubelet[2940]: I0620 19:23:33.223229 2940 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:23:33.223279 kubelet[2940]: I0620 19:23:33.223245 2940 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 20 19:23:33.223279 kubelet[2940]: I0620 19:23:33.223265 2940 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:23:33.223326 kubelet[2940]: E0620 19:23:33.223295 2940 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:23:33.223473 kubelet[2940]: I0620 19:23:33.223443 2940 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:23:33.224065 kubelet[2940]: I0620 19:23:33.223755 2940 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:23:33.224065 kubelet[2940]: I0620 19:23:33.223910 2940 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:23:33.225505 kubelet[2940]: I0620 19:23:33.225435 2940 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:23:33.248073 kubelet[2940]: I0620 19:23:33.248050 2940 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:23:33.248073 kubelet[2940]: I0620 19:23:33.248065 2940 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:23:33.248073 kubelet[2940]: I0620 19:23:33.248078 2940 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:23:33.248213 kubelet[2940]: I0620 19:23:33.248184 2940 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:23:33.248213 kubelet[2940]: I0620 19:23:33.248191 2940 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:23:33.248213 kubelet[2940]: I0620 19:23:33.248202 2940 policy_none.go:49] "None policy: Start" Jun 20 19:23:33.248213 kubelet[2940]: I0620 19:23:33.248207 2940 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:23:33.248213 kubelet[2940]: I0620 19:23:33.248213 2940 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:23:33.248309 kubelet[2940]: I0620 19:23:33.248303 2940 state_mem.go:75] "Updated machine memory state" Jun 20 19:23:33.252660 kubelet[2940]: I0620 19:23:33.252647 2940 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:23:33.252848 kubelet[2940]: I0620 19:23:33.252842 2940 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:23:33.252917 kubelet[2940]: I0620 19:23:33.252883 2940 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:23:33.253094 kubelet[2940]: I0620 19:23:33.253088 2940 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:23:33.255183 kubelet[2940]: E0620 19:23:33.255165 2940 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:23:33.323641 kubelet[2940]: I0620 19:23:33.323610 2940 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:33.335427 kubelet[2940]: I0620 19:23:33.335396 2940 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:33.336181 kubelet[2940]: I0620 19:23:33.335733 2940 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.351849 kubelet[2940]: E0620 19:23:33.351711 2940 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:33.358016 kubelet[2940]: I0620 19:23:33.357612 2940 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:23:33.377188 kubelet[2940]: I0620 19:23:33.377156 2940 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jun 20 19:23:33.377301 kubelet[2940]: I0620 19:23:33.377240 2940 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 20 19:23:33.408373 kubelet[2940]: I0620 19:23:33.408346 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.408599 kubelet[2940]: I0620 19:23:33.408504 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.408599 kubelet[2940]: I0620 19:23:33.408519 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/588897bca4863f82560dab50f71ce025-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"588897bca4863f82560dab50f71ce025\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:33.408599 kubelet[2940]: I0620 19:23:33.408549 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/588897bca4863f82560dab50f71ce025-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"588897bca4863f82560dab50f71ce025\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:33.408599 kubelet[2940]: I0620 19:23:33.408566 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.408599 kubelet[2940]: I0620 19:23:33.408576 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.408783 kubelet[2940]: I0620 19:23:33.408589 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:23:33.408783 kubelet[2940]: I0620 19:23:33.408758 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:33.408783 kubelet[2940]: I0620 19:23:33.408767 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/588897bca4863f82560dab50f71ce025-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"588897bca4863f82560dab50f71ce025\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:34.188700 kubelet[2940]: I0620 19:23:34.188666 2940 apiserver.go:52] "Watching apiserver" Jun 20 19:23:34.207972 kubelet[2940]: I0620 19:23:34.207939 2940 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:23:34.236857 kubelet[2940]: I0620 19:23:34.236832 2940 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:34.237097 kubelet[2940]: I0620 19:23:34.237083 2940 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:34.240906 kubelet[2940]: E0620 19:23:34.240879 2940 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 20 19:23:34.241849 kubelet[2940]: E0620 19:23:34.241765 2940 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:23:34.256193 kubelet[2940]: I0620 19:23:34.256154 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.256142119 podStartE2EDuration="1.256142119s" podCreationTimestamp="2025-06-20 19:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:23:34.251960621 +0000 UTC m=+1.199348272" watchObservedRunningTime="2025-06-20 19:23:34.256142119 +0000 UTC m=+1.203529769" Jun 20 19:23:34.261307 kubelet[2940]: I0620 19:23:34.261036 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2610241850000001 podStartE2EDuration="1.261024185s" podCreationTimestamp="2025-06-20 19:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:23:34.256536684 +0000 UTC m=+1.203924334" watchObservedRunningTime="2025-06-20 19:23:34.261024185 +0000 UTC m=+1.208411826" Jun 20 19:23:34.261307 kubelet[2940]: I0620 19:23:34.261121 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.26111729 podStartE2EDuration="3.26111729s" podCreationTimestamp="2025-06-20 19:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:23:34.260987315 +0000 UTC m=+1.208374966" watchObservedRunningTime="2025-06-20 19:23:34.26111729 +0000 UTC m=+1.208504934" Jun 20 19:23:37.982263 kubelet[2940]: I0620 19:23:37.982237 2940 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:23:37.983229 containerd[1641]: time="2025-06-20T19:23:37.982947358Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:23:37.983431 kubelet[2940]: I0620 19:23:37.983092 2940 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:23:38.576347 systemd[1]: Created slice kubepods-besteffort-podea0df039_0c13_4107_a588_b9701a386b74.slice - libcontainer container kubepods-besteffort-podea0df039_0c13_4107_a588_b9701a386b74.slice. Jun 20 19:23:38.645049 kubelet[2940]: I0620 19:23:38.644960 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea0df039-0c13-4107-a588-b9701a386b74-lib-modules\") pod \"kube-proxy-v488g\" (UID: \"ea0df039-0c13-4107-a588-b9701a386b74\") " pod="kube-system/kube-proxy-v488g" Jun 20 19:23:38.645049 kubelet[2940]: I0620 19:23:38.644990 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v8fk\" (UniqueName: \"kubernetes.io/projected/ea0df039-0c13-4107-a588-b9701a386b74-kube-api-access-9v8fk\") pod \"kube-proxy-v488g\" (UID: \"ea0df039-0c13-4107-a588-b9701a386b74\") " pod="kube-system/kube-proxy-v488g" Jun 20 19:23:38.645049 kubelet[2940]: I0620 19:23:38.645002 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea0df039-0c13-4107-a588-b9701a386b74-xtables-lock\") pod \"kube-proxy-v488g\" (UID: \"ea0df039-0c13-4107-a588-b9701a386b74\") " pod="kube-system/kube-proxy-v488g" Jun 20 19:23:38.645049 kubelet[2940]: I0620 19:23:38.645013 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea0df039-0c13-4107-a588-b9701a386b74-kube-proxy\") pod \"kube-proxy-v488g\" (UID: \"ea0df039-0c13-4107-a588-b9701a386b74\") " pod="kube-system/kube-proxy-v488g" Jun 20 19:23:38.750732 kubelet[2940]: E0620 19:23:38.750631 2940 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jun 20 19:23:38.750732 kubelet[2940]: E0620 19:23:38.750651 2940 projected.go:194] Error preparing data for projected volume kube-api-access-9v8fk for pod kube-system/kube-proxy-v488g: configmap "kube-root-ca.crt" not found Jun 20 19:23:38.750732 kubelet[2940]: E0620 19:23:38.750685 2940 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea0df039-0c13-4107-a588-b9701a386b74-kube-api-access-9v8fk podName:ea0df039-0c13-4107-a588-b9701a386b74 nodeName:}" failed. No retries permitted until 2025-06-20 19:23:39.250673672 +0000 UTC m=+6.198061314 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9v8fk" (UniqueName: "kubernetes.io/projected/ea0df039-0c13-4107-a588-b9701a386b74-kube-api-access-9v8fk") pod "kube-proxy-v488g" (UID: "ea0df039-0c13-4107-a588-b9701a386b74") : configmap "kube-root-ca.crt" not found Jun 20 19:23:39.021073 kubelet[2940]: I0620 19:23:39.020925 2940 status_manager.go:890] "Failed to get status for pod" podUID="3ef8f1c0-f80e-4af5-ac8c-9abd36bc6153" pod="tigera-operator/tigera-operator-68f7c7984d-vzz8g" err="pods \"tigera-operator-68f7c7984d-vzz8g\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" Jun 20 19:23:39.025133 systemd[1]: Created slice kubepods-besteffort-pod3ef8f1c0_f80e_4af5_ac8c_9abd36bc6153.slice - libcontainer container kubepods-besteffort-pod3ef8f1c0_f80e_4af5_ac8c_9abd36bc6153.slice. Jun 20 19:23:39.047733 kubelet[2940]: I0620 19:23:39.047660 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3ef8f1c0-f80e-4af5-ac8c-9abd36bc6153-var-lib-calico\") pod \"tigera-operator-68f7c7984d-vzz8g\" (UID: \"3ef8f1c0-f80e-4af5-ac8c-9abd36bc6153\") " pod="tigera-operator/tigera-operator-68f7c7984d-vzz8g" Jun 20 19:23:39.047998 kubelet[2940]: I0620 19:23:39.047911 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txtr4\" (UniqueName: \"kubernetes.io/projected/3ef8f1c0-f80e-4af5-ac8c-9abd36bc6153-kube-api-access-txtr4\") pod \"tigera-operator-68f7c7984d-vzz8g\" (UID: \"3ef8f1c0-f80e-4af5-ac8c-9abd36bc6153\") " pod="tigera-operator/tigera-operator-68f7c7984d-vzz8g" Jun 20 19:23:39.327946 containerd[1641]: time="2025-06-20T19:23:39.327916418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-vzz8g,Uid:3ef8f1c0-f80e-4af5-ac8c-9abd36bc6153,Namespace:tigera-operator,Attempt:0,}" Jun 20 19:23:39.341192 containerd[1641]: time="2025-06-20T19:23:39.341149216Z" level=info msg="connecting to shim a05fca1c83a65738e84ed54dac3436bf5f1fcce68e28a12ca19dd63cdb2ac1ea" address="unix:///run/containerd/s/3e5eba178e42695d8e6749ebd5fe10f550bef4d35169182321828ad1e8fac62f" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:39.365859 systemd[1]: Started cri-containerd-a05fca1c83a65738e84ed54dac3436bf5f1fcce68e28a12ca19dd63cdb2ac1ea.scope - libcontainer container a05fca1c83a65738e84ed54dac3436bf5f1fcce68e28a12ca19dd63cdb2ac1ea. 
Jun 20 19:23:39.403729 containerd[1641]: time="2025-06-20T19:23:39.403687514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-68f7c7984d-vzz8g,Uid:3ef8f1c0-f80e-4af5-ac8c-9abd36bc6153,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a05fca1c83a65738e84ed54dac3436bf5f1fcce68e28a12ca19dd63cdb2ac1ea\"" Jun 20 19:23:39.405308 containerd[1641]: time="2025-06-20T19:23:39.405286121Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\"" Jun 20 19:23:39.484447 containerd[1641]: time="2025-06-20T19:23:39.484405057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v488g,Uid:ea0df039-0c13-4107-a588-b9701a386b74,Namespace:kube-system,Attempt:0,}" Jun 20 19:23:39.494259 containerd[1641]: time="2025-06-20T19:23:39.494041060Z" level=info msg="connecting to shim 60d345751d3d9c3eab3ca737be4e75c72ec5ce968646e0b32cab86596661e241" address="unix:///run/containerd/s/fba7d96fdb5a7889cfdabc8aba1d18942234e675ef83f7a02dfb089768e11b68" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:39.511876 systemd[1]: Started cri-containerd-60d345751d3d9c3eab3ca737be4e75c72ec5ce968646e0b32cab86596661e241.scope - libcontainer container 60d345751d3d9c3eab3ca737be4e75c72ec5ce968646e0b32cab86596661e241. Jun 20 19:23:39.526247 containerd[1641]: time="2025-06-20T19:23:39.526226108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v488g,Uid:ea0df039-0c13-4107-a588-b9701a386b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"60d345751d3d9c3eab3ca737be4e75c72ec5ce968646e0b32cab86596661e241\"" Jun 20 19:23:39.529096 containerd[1641]: time="2025-06-20T19:23:39.529065517Z" level=info msg="CreateContainer within sandbox \"60d345751d3d9c3eab3ca737be4e75c72ec5ce968646e0b32cab86596661e241\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:23:39.534536 containerd[1641]: time="2025-06-20T19:23:39.534241187Z" level=info msg="Container a23b76cc3a6e61879e65cbb76c8b93b90e065067d3fc6766c4a666cad9df4eac: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:39.537662 containerd[1641]: time="2025-06-20T19:23:39.537641436Z" level=info msg="CreateContainer within sandbox \"60d345751d3d9c3eab3ca737be4e75c72ec5ce968646e0b32cab86596661e241\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a23b76cc3a6e61879e65cbb76c8b93b90e065067d3fc6766c4a666cad9df4eac\"" Jun 20 19:23:39.538908 containerd[1641]: time="2025-06-20T19:23:39.538892746Z" level=info msg="StartContainer for \"a23b76cc3a6e61879e65cbb76c8b93b90e065067d3fc6766c4a666cad9df4eac\"" Jun 20 19:23:39.540328 containerd[1641]: time="2025-06-20T19:23:39.540307926Z" level=info msg="connecting to shim a23b76cc3a6e61879e65cbb76c8b93b90e065067d3fc6766c4a666cad9df4eac" address="unix:///run/containerd/s/fba7d96fdb5a7889cfdabc8aba1d18942234e675ef83f7a02dfb089768e11b68" protocol=ttrpc version=3 Jun 20 19:23:39.554816 systemd[1]: Started cri-containerd-a23b76cc3a6e61879e65cbb76c8b93b90e065067d3fc6766c4a666cad9df4eac.scope - libcontainer container a23b76cc3a6e61879e65cbb76c8b93b90e065067d3fc6766c4a666cad9df4eac. 
Jun 20 19:23:39.577836 containerd[1641]: time="2025-06-20T19:23:39.577808887Z" level=info msg="StartContainer for \"a23b76cc3a6e61879e65cbb76c8b93b90e065067d3fc6766c4a666cad9df4eac\" returns successfully" Jun 20 19:23:40.260324 kubelet[2940]: I0620 19:23:40.260273 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v488g" podStartSLOduration=2.260260536 podStartE2EDuration="2.260260536s" podCreationTimestamp="2025-06-20 19:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:23:40.253754767 +0000 UTC m=+7.201142428" watchObservedRunningTime="2025-06-20 19:23:40.260260536 +0000 UTC m=+7.207648185" Jun 20 19:23:40.573918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301554126.mount: Deactivated successfully. Jun 20 19:23:41.034173 containerd[1641]: time="2025-06-20T19:23:41.034142904Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:41.037740 containerd[1641]: time="2025-06-20T19:23:41.037708999Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.1: active requests=0, bytes read=25059858" Jun 20 19:23:41.038323 containerd[1641]: time="2025-06-20T19:23:41.038303614Z" level=info msg="ImageCreate event name:\"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:41.039690 containerd[1641]: time="2025-06-20T19:23:41.039666959Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:41.040173 containerd[1641]: time="2025-06-20T19:23:41.040153859Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.1\" with image id \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\", repo tag \"quay.io/tigera/operator:v1.38.1\", repo digest \"quay.io/tigera/operator@sha256:a2a468d1ac1b6a7049c1c2505cd933461fcadb127b5c3f98f03bd8e402bce456\", size \"25055853\" in 1.634847168s" Jun 20 19:23:41.040208 containerd[1641]: time="2025-06-20T19:23:41.040173815Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.1\" returns image reference \"sha256:9fe1a04a0e6c440395d63018f1a72bb1ed07d81ed81be41e9b8adcc35a64164c\"" Jun 20 19:23:41.041725 containerd[1641]: time="2025-06-20T19:23:41.041648338Z" level=info msg="CreateContainer within sandbox \"a05fca1c83a65738e84ed54dac3436bf5f1fcce68e28a12ca19dd63cdb2ac1ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 20 19:23:41.051089 containerd[1641]: time="2025-06-20T19:23:41.050802429Z" level=info msg="Container 2927f442be388dff3cd792485761928192c3b6ee0914cdee544784111665e2d6: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:41.053223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215927107.mount: Deactivated successfully. 
Jun 20 19:23:41.054900 containerd[1641]: time="2025-06-20T19:23:41.054860314Z" level=info msg="CreateContainer within sandbox \"a05fca1c83a65738e84ed54dac3436bf5f1fcce68e28a12ca19dd63cdb2ac1ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2927f442be388dff3cd792485761928192c3b6ee0914cdee544784111665e2d6\"" Jun 20 19:23:41.055891 containerd[1641]: time="2025-06-20T19:23:41.055869598Z" level=info msg="StartContainer for \"2927f442be388dff3cd792485761928192c3b6ee0914cdee544784111665e2d6\"" Jun 20 19:23:41.057001 containerd[1641]: time="2025-06-20T19:23:41.056973587Z" level=info msg="connecting to shim 2927f442be388dff3cd792485761928192c3b6ee0914cdee544784111665e2d6" address="unix:///run/containerd/s/3e5eba178e42695d8e6749ebd5fe10f550bef4d35169182321828ad1e8fac62f" protocol=ttrpc version=3 Jun 20 19:23:41.071841 systemd[1]: Started cri-containerd-2927f442be388dff3cd792485761928192c3b6ee0914cdee544784111665e2d6.scope - libcontainer container 2927f442be388dff3cd792485761928192c3b6ee0914cdee544784111665e2d6. Jun 20 19:23:41.089739 containerd[1641]: time="2025-06-20T19:23:41.089673070Z" level=info msg="StartContainer for \"2927f442be388dff3cd792485761928192c3b6ee0914cdee544784111665e2d6\" returns successfully" Jun 20 19:23:41.263163 kubelet[2940]: I0620 19:23:41.263107 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-68f7c7984d-vzz8g" podStartSLOduration=1.625061322 podStartE2EDuration="3.261015366s" podCreationTimestamp="2025-06-20 19:23:38 +0000 UTC" firstStartedPulling="2025-06-20 19:23:39.40478572 +0000 UTC m=+6.352173363" lastFinishedPulling="2025-06-20 19:23:41.040739766 +0000 UTC m=+7.988127407" observedRunningTime="2025-06-20 19:23:41.260868366 +0000 UTC m=+8.208256017" watchObservedRunningTime="2025-06-20 19:23:41.261015366 +0000 UTC m=+8.208403017" Jun 20 19:23:46.621319 sudo[1965]: pam_unix(sudo:session): session closed for user root Jun 20 19:23:46.623957 sshd[1964]: Connection closed by 147.75.109.163 port 43176 Jun 20 19:23:46.624689 sshd-session[1962]: pam_unix(sshd:session): session closed for user core Jun 20 19:23:46.628139 systemd[1]: sshd@6-139.178.70.105:22-147.75.109.163:43176.service: Deactivated successfully. Jun 20 19:23:46.631719 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:23:46.632083 systemd[1]: session-9.scope: Consumed 3.119s CPU time, 150.6M memory peak. Jun 20 19:23:46.635929 systemd-logind[1620]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:23:46.638206 systemd-logind[1620]: Removed session 9. Jun 20 19:23:49.463764 systemd[1]: Created slice kubepods-besteffort-podc79cad85_d81e_4aa5_bc8b_675f3d325da9.slice - libcontainer container kubepods-besteffort-podc79cad85_d81e_4aa5_bc8b_675f3d325da9.slice. 
Jun 20 19:23:49.508924 kubelet[2940]: I0620 19:23:49.508883 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c79cad85-d81e-4aa5-bc8b-675f3d325da9-tigera-ca-bundle\") pod \"calico-typha-9786c9d7f-tghqs\" (UID: \"c79cad85-d81e-4aa5-bc8b-675f3d325da9\") " pod="calico-system/calico-typha-9786c9d7f-tghqs" Jun 20 19:23:49.509201 kubelet[2940]: I0620 19:23:49.508931 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c79cad85-d81e-4aa5-bc8b-675f3d325da9-typha-certs\") pod \"calico-typha-9786c9d7f-tghqs\" (UID: \"c79cad85-d81e-4aa5-bc8b-675f3d325da9\") " pod="calico-system/calico-typha-9786c9d7f-tghqs" Jun 20 19:23:49.509201 kubelet[2940]: I0620 19:23:49.508953 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sb2r\" (UniqueName: \"kubernetes.io/projected/c79cad85-d81e-4aa5-bc8b-675f3d325da9-kube-api-access-2sb2r\") pod \"calico-typha-9786c9d7f-tghqs\" (UID: \"c79cad85-d81e-4aa5-bc8b-675f3d325da9\") " pod="calico-system/calico-typha-9786c9d7f-tghqs" Jun 20 19:23:49.766757 containerd[1641]: time="2025-06-20T19:23:49.766651391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9786c9d7f-tghqs,Uid:c79cad85-d81e-4aa5-bc8b-675f3d325da9,Namespace:calico-system,Attempt:0,}" Jun 20 19:23:49.846324 systemd[1]: Created slice kubepods-besteffort-pod12a24154_d63e_491b_9796_8c595c4357f8.slice - libcontainer container kubepods-besteffort-pod12a24154_d63e_491b_9796_8c595c4357f8.slice. Jun 20 19:23:49.856509 containerd[1641]: time="2025-06-20T19:23:49.855794894Z" level=info msg="connecting to shim 03f18e07cec9080d1319c620baf57820524c1ff2c9f55032ca0b13160a58c620" address="unix:///run/containerd/s/e66571d359e5fed026ec079c6dcde68980297c766c58692ba3d2b15b67b67b15" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:49.895115 systemd[1]: Started cri-containerd-03f18e07cec9080d1319c620baf57820524c1ff2c9f55032ca0b13160a58c620.scope - libcontainer container 03f18e07cec9080d1319c620baf57820524c1ff2c9f55032ca0b13160a58c620. 
Jun 20 19:23:49.930951 kubelet[2940]: I0620 19:23:49.930685 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/12a24154-d63e-491b-9796-8c595c4357f8-node-certs\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.930951 kubelet[2940]: I0620 19:23:49.930768 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12a24154-d63e-491b-9796-8c595c4357f8-tigera-ca-bundle\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.930951 kubelet[2940]: I0620 19:23:49.930793 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-policysync\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.930951 kubelet[2940]: I0620 19:23:49.930806 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-flexvol-driver-host\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.930951 kubelet[2940]: I0620 19:23:49.930819 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-var-lib-calico\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.931148 kubelet[2940]: I0620 19:23:49.930830 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlh7t\" (UniqueName: \"kubernetes.io/projected/12a24154-d63e-491b-9796-8c595c4357f8-kube-api-access-vlh7t\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.931148 kubelet[2940]: I0620 19:23:49.930848 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-lib-modules\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.931148 kubelet[2940]: I0620 19:23:49.930862 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-var-run-calico\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.931148 kubelet[2940]: I0620 19:23:49.930873 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-xtables-lock\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.931148 kubelet[2940]: I0620 19:23:49.930885 2940 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-cni-net-dir\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.931246 kubelet[2940]: I0620 19:23:49.930899 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-cni-bin-dir\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.931246 kubelet[2940]: I0620 19:23:49.930908 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/12a24154-d63e-491b-9796-8c595c4357f8-cni-log-dir\") pod \"calico-node-vjh79\" (UID: \"12a24154-d63e-491b-9796-8c595c4357f8\") " pod="calico-system/calico-node-vjh79" Jun 20 19:23:49.992750 containerd[1641]: time="2025-06-20T19:23:49.992705198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9786c9d7f-tghqs,Uid:c79cad85-d81e-4aa5-bc8b-675f3d325da9,Namespace:calico-system,Attempt:0,} returns sandbox id \"03f18e07cec9080d1319c620baf57820524c1ff2c9f55032ca0b13160a58c620\"" Jun 20 19:23:49.995816 containerd[1641]: time="2025-06-20T19:23:49.995239232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\"" Jun 20 19:23:50.088616 kubelet[2940]: E0620 19:23:50.088580 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.088616 kubelet[2940]: W0620 19:23:50.088606 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.100198 kubelet[2940]: E0620 19:23:50.100165 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.151513 containerd[1641]: time="2025-06-20T19:23:50.151485427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vjh79,Uid:12a24154-d63e-491b-9796-8c595c4357f8,Namespace:calico-system,Attempt:0,}" Jun 20 19:23:50.196440 kubelet[2940]: E0620 19:23:50.196413 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:23:50.207687 kubelet[2940]: E0620 19:23:50.207665 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.207687 kubelet[2940]: W0620 19:23:50.207680 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.207822 kubelet[2940]: E0620 19:23:50.207713 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.207822 kubelet[2940]: E0620 19:23:50.207807 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.207822 kubelet[2940]: W0620 19:23:50.207811 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.207822 kubelet[2940]: E0620 19:23:50.207816 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.208328 kubelet[2940]: E0620 19:23:50.208071 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.208328 kubelet[2940]: W0620 19:23:50.208077 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.208328 kubelet[2940]: E0620 19:23:50.208082 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.209421 kubelet[2940]: E0620 19:23:50.209408 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.209421 kubelet[2940]: W0620 19:23:50.209417 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.209600 kubelet[2940]: E0620 19:23:50.209424 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.209600 kubelet[2940]: E0620 19:23:50.209597 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.209642 kubelet[2940]: W0620 19:23:50.209601 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.209642 kubelet[2940]: E0620 19:23:50.209607 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.209901 kubelet[2940]: E0620 19:23:50.209697 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.209901 kubelet[2940]: W0620 19:23:50.209704 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.209901 kubelet[2940]: E0620 19:23:50.209709 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.209901 kubelet[2940]: E0620 19:23:50.209882 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.209901 kubelet[2940]: W0620 19:23:50.209887 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.209901 kubelet[2940]: E0620 19:23:50.209892 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.210266 kubelet[2940]: E0620 19:23:50.209978 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.210266 kubelet[2940]: W0620 19:23:50.209983 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.210266 kubelet[2940]: E0620 19:23:50.209988 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.210266 kubelet[2940]: E0620 19:23:50.210075 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.210266 kubelet[2940]: W0620 19:23:50.210080 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.210266 kubelet[2940]: E0620 19:23:50.210084 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.210266 kubelet[2940]: E0620 19:23:50.210158 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.210266 kubelet[2940]: W0620 19:23:50.210163 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.210266 kubelet[2940]: E0620 19:23:50.210167 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.210266 kubelet[2940]: E0620 19:23:50.210247 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.210598 kubelet[2940]: W0620 19:23:50.210252 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.210598 kubelet[2940]: E0620 19:23:50.210256 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.210598 kubelet[2940]: E0620 19:23:50.210405 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.210598 kubelet[2940]: W0620 19:23:50.210411 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.210598 kubelet[2940]: E0620 19:23:50.210416 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.210883 kubelet[2940]: E0620 19:23:50.210860 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.210883 kubelet[2940]: W0620 19:23:50.210868 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.210985 kubelet[2940]: E0620 19:23:50.210888 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.210985 kubelet[2940]: E0620 19:23:50.210971 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.210985 kubelet[2940]: W0620 19:23:50.210975 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.210985 kubelet[2940]: E0620 19:23:50.210980 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.211257 kubelet[2940]: E0620 19:23:50.211248 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.211257 kubelet[2940]: W0620 19:23:50.211255 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.211311 kubelet[2940]: E0620 19:23:50.211261 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.211435 kubelet[2940]: E0620 19:23:50.211356 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.211435 kubelet[2940]: W0620 19:23:50.211360 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.211435 kubelet[2940]: E0620 19:23:50.211365 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.211541 kubelet[2940]: E0620 19:23:50.211478 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.211541 kubelet[2940]: W0620 19:23:50.211483 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.211541 kubelet[2940]: E0620 19:23:50.211488 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.211738 kubelet[2940]: E0620 19:23:50.211579 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.211738 kubelet[2940]: W0620 19:23:50.211583 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.211738 kubelet[2940]: E0620 19:23:50.211588 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.211738 kubelet[2940]: E0620 19:23:50.211666 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.211738 kubelet[2940]: W0620 19:23:50.211670 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.211738 kubelet[2940]: E0620 19:23:50.211674 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.211930 kubelet[2940]: E0620 19:23:50.211764 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.211930 kubelet[2940]: W0620 19:23:50.211781 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.211930 kubelet[2940]: E0620 19:23:50.211788 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.234351 kubelet[2940]: E0620 19:23:50.234306 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.234351 kubelet[2940]: W0620 19:23:50.234321 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.234351 kubelet[2940]: E0620 19:23:50.234334 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.234351 kubelet[2940]: I0620 19:23:50.234354 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/de04d4f2-cb39-4862-aa3d-bec53c847188-varrun\") pod \"csi-node-driver-4w9w9\" (UID: \"de04d4f2-cb39-4862-aa3d-bec53c847188\") " pod="calico-system/csi-node-driver-4w9w9" Jun 20 19:23:50.234885 kubelet[2940]: E0620 19:23:50.234873 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.234885 kubelet[2940]: W0620 19:23:50.234882 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.235393 kubelet[2940]: E0620 19:23:50.234893 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.235393 kubelet[2940]: I0620 19:23:50.234903 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vzw\" (UniqueName: \"kubernetes.io/projected/de04d4f2-cb39-4862-aa3d-bec53c847188-kube-api-access-98vzw\") pod \"csi-node-driver-4w9w9\" (UID: \"de04d4f2-cb39-4862-aa3d-bec53c847188\") " pod="calico-system/csi-node-driver-4w9w9" Jun 20 19:23:50.235530 kubelet[2940]: E0620 19:23:50.235480 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.235530 kubelet[2940]: W0620 19:23:50.235489 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.235530 kubelet[2940]: E0620 19:23:50.235502 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.235700 kubelet[2940]: E0620 19:23:50.235657 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.235700 kubelet[2940]: W0620 19:23:50.235674 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.235700 kubelet[2940]: E0620 19:23:50.235684 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.235868 kubelet[2940]: E0620 19:23:50.235852 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.235868 kubelet[2940]: W0620 19:23:50.235860 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.236021 kubelet[2940]: E0620 19:23:50.235992 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.236021 kubelet[2940]: I0620 19:23:50.236006 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de04d4f2-cb39-4862-aa3d-bec53c847188-kubelet-dir\") pod \"csi-node-driver-4w9w9\" (UID: \"de04d4f2-cb39-4862-aa3d-bec53c847188\") " pod="calico-system/csi-node-driver-4w9w9" Jun 20 19:23:50.236158 kubelet[2940]: E0620 19:23:50.236033 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.236158 kubelet[2940]: W0620 19:23:50.236154 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.236257 kubelet[2940]: E0620 19:23:50.236164 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.236366 kubelet[2940]: E0620 19:23:50.236302 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.236473 kubelet[2940]: W0620 19:23:50.236398 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.236473 kubelet[2940]: E0620 19:23:50.236411 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.237371 kubelet[2940]: E0620 19:23:50.237363 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.237496 kubelet[2940]: W0620 19:23:50.237418 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.237496 kubelet[2940]: E0620 19:23:50.237430 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.237496 kubelet[2940]: I0620 19:23:50.237443 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/de04d4f2-cb39-4862-aa3d-bec53c847188-registration-dir\") pod \"csi-node-driver-4w9w9\" (UID: \"de04d4f2-cb39-4862-aa3d-bec53c847188\") " pod="calico-system/csi-node-driver-4w9w9" Jun 20 19:23:50.237641 kubelet[2940]: E0620 19:23:50.237634 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.237680 kubelet[2940]: W0620 19:23:50.237673 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.237790 kubelet[2940]: E0620 19:23:50.237707 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.237899 kubelet[2940]: E0620 19:23:50.237888 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.237984 kubelet[2940]: W0620 19:23:50.237933 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.237984 kubelet[2940]: E0620 19:23:50.237943 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.238056 kubelet[2940]: E0620 19:23:50.238050 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.238097 kubelet[2940]: W0620 19:23:50.238092 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.238164 kubelet[2940]: E0620 19:23:50.238125 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.238295 kubelet[2940]: E0620 19:23:50.238288 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.238342 kubelet[2940]: W0620 19:23:50.238324 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.238422 kubelet[2940]: E0620 19:23:50.238379 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.238457 kubelet[2940]: E0620 19:23:50.238446 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.238457 kubelet[2940]: W0620 19:23:50.238452 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.238510 kubelet[2940]: E0620 19:23:50.238458 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.238510 kubelet[2940]: I0620 19:23:50.238481 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/de04d4f2-cb39-4862-aa3d-bec53c847188-socket-dir\") pod \"csi-node-driver-4w9w9\" (UID: \"de04d4f2-cb39-4862-aa3d-bec53c847188\") " pod="calico-system/csi-node-driver-4w9w9" Jun 20 19:23:50.238597 kubelet[2940]: E0620 19:23:50.238587 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.238597 kubelet[2940]: W0620 19:23:50.238594 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.238655 kubelet[2940]: E0620 19:23:50.238600 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.239447 kubelet[2940]: E0620 19:23:50.238677 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.239447 kubelet[2940]: W0620 19:23:50.238682 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.239447 kubelet[2940]: E0620 19:23:50.238715 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.265917 containerd[1641]: time="2025-06-20T19:23:50.264516027Z" level=info msg="connecting to shim f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f" address="unix:///run/containerd/s/1e85f8968203a1d39a3139c7fececa93fefa7fea45414fcc5fd3e15a56038b5d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:23:50.340122 kubelet[2940]: E0620 19:23:50.340040 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.340844 kubelet[2940]: W0620 19:23:50.340059 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.340844 kubelet[2940]: E0620 19:23:50.340744 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.341743 kubelet[2940]: E0620 19:23:50.341429 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.341743 kubelet[2940]: W0620 19:23:50.341436 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.341743 kubelet[2940]: E0620 19:23:50.341444 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.343083 kubelet[2940]: E0620 19:23:50.343070 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.343383 kubelet[2940]: W0620 19:23:50.343210 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.343383 kubelet[2940]: E0620 19:23:50.343235 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.343572 kubelet[2940]: E0620 19:23:50.343519 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.343572 kubelet[2940]: W0620 19:23:50.343544 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.343572 kubelet[2940]: E0620 19:23:50.343558 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.343773 kubelet[2940]: E0620 19:23:50.343761 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.343773 kubelet[2940]: W0620 19:23:50.343770 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.343912 kubelet[2940]: E0620 19:23:50.343858 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.343912 kubelet[2940]: E0620 19:23:50.343883 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.343912 kubelet[2940]: W0620 19:23:50.343904 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.344030 kubelet[2940]: E0620 19:23:50.343995 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.344030 kubelet[2940]: W0620 19:23:50.344005 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.344030 kubelet[2940]: E0620 19:23:50.344017 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.344147 kubelet[2940]: E0620 19:23:50.344011 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.344192 kubelet[2940]: E0620 19:23:50.344166 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.344192 kubelet[2940]: W0620 19:23:50.344172 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.344192 kubelet[2940]: E0620 19:23:50.344188 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.344335 kubelet[2940]: E0620 19:23:50.344317 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.344335 kubelet[2940]: W0620 19:23:50.344324 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.344412 kubelet[2940]: E0620 19:23:50.344341 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.344462 kubelet[2940]: E0620 19:23:50.344449 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.344462 kubelet[2940]: W0620 19:23:50.344459 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.344529 kubelet[2940]: E0620 19:23:50.344470 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.344569 kubelet[2940]: E0620 19:23:50.344559 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.344569 kubelet[2940]: W0620 19:23:50.344566 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.344623 kubelet[2940]: E0620 19:23:50.344574 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.344688 kubelet[2940]: E0620 19:23:50.344674 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.344688 kubelet[2940]: W0620 19:23:50.344682 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.344688 kubelet[2940]: E0620 19:23:50.344689 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.345123 kubelet[2940]: E0620 19:23:50.345018 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.345123 kubelet[2940]: W0620 19:23:50.345028 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.345123 kubelet[2940]: E0620 19:23:50.345044 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.345247 kubelet[2940]: E0620 19:23:50.345240 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.345306 kubelet[2940]: W0620 19:23:50.345299 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.345371 kubelet[2940]: E0620 19:23:50.345353 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.345584 kubelet[2940]: E0620 19:23:50.345524 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.345584 kubelet[2940]: W0620 19:23:50.345531 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.345860 systemd[1]: Started cri-containerd-f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f.scope - libcontainer container f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f. Jun 20 19:23:50.346211 kubelet[2940]: E0620 19:23:50.346196 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.346211 kubelet[2940]: E0620 19:23:50.346201 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.346312 kubelet[2940]: W0620 19:23:50.346204 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.346312 kubelet[2940]: E0620 19:23:50.346276 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.346562 kubelet[2940]: E0620 19:23:50.346549 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.346562 kubelet[2940]: W0620 19:23:50.346558 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.347147 kubelet[2940]: E0620 19:23:50.347131 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.347386 kubelet[2940]: E0620 19:23:50.347171 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.347386 kubelet[2940]: W0620 19:23:50.347177 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.347386 kubelet[2940]: E0620 19:23:50.347209 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.348147 kubelet[2940]: E0620 19:23:50.348089 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.348147 kubelet[2940]: W0620 19:23:50.348110 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.348147 kubelet[2940]: E0620 19:23:50.348122 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.348898 kubelet[2940]: E0620 19:23:50.348884 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.348940 kubelet[2940]: W0620 19:23:50.348897 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.348940 kubelet[2940]: E0620 19:23:50.348912 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.349743 kubelet[2940]: E0620 19:23:50.349726 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.349743 kubelet[2940]: W0620 19:23:50.349739 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.349837 kubelet[2940]: E0620 19:23:50.349795 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.349968 kubelet[2940]: E0620 19:23:50.349952 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.349968 kubelet[2940]: W0620 19:23:50.349964 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.350101 kubelet[2940]: E0620 19:23:50.350066 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.350130 kubelet[2940]: E0620 19:23:50.350110 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.350130 kubelet[2940]: W0620 19:23:50.350115 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.350130 kubelet[2940]: E0620 19:23:50.350126 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.350265 kubelet[2940]: E0620 19:23:50.350231 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.350265 kubelet[2940]: W0620 19:23:50.350237 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.350333 kubelet[2940]: E0620 19:23:50.350316 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.350641 kubelet[2940]: E0620 19:23:50.350625 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.350641 kubelet[2940]: W0620 19:23:50.350637 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.350754 kubelet[2940]: E0620 19:23:50.350644 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:50.354948 kubelet[2940]: E0620 19:23:50.354924 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:50.354948 kubelet[2940]: W0620 19:23:50.354939 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:50.354948 kubelet[2940]: E0620 19:23:50.354955 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:50.371945 containerd[1641]: time="2025-06-20T19:23:50.371782794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vjh79,Uid:12a24154-d63e-491b-9796-8c595c4357f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f\"" Jun 20 19:23:51.449081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2659953877.mount: Deactivated successfully. Jun 20 19:23:52.258728 kubelet[2940]: E0620 19:23:52.258678 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:23:52.604680 containerd[1641]: time="2025-06-20T19:23:52.604562527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:52.610845 containerd[1641]: time="2025-06-20T19:23:52.610803952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.1: active requests=0, bytes read=35227888" Jun 20 19:23:52.617377 containerd[1641]: time="2025-06-20T19:23:52.615477088Z" level=info msg="ImageCreate event name:\"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:52.628791 containerd[1641]: time="2025-06-20T19:23:52.628162815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:52.628791 containerd[1641]: time="2025-06-20T19:23:52.628671886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.1\" with image id \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f1edaa4eaa6349a958c409e0dab2d6ee7d1234e5f0eeefc9f508d0b1c9d7d0d1\", size \"35227742\" in 2.633407769s" Jun 20 19:23:52.628791 containerd[1641]: time="2025-06-20T19:23:52.628715026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.1\" returns image reference \"sha256:11d920cd1d8c935bdf3cb40dd9e67f22c3624df627bdd58cf6d0e503230688d7\"" Jun 20 19:23:52.629846 containerd[1641]: time="2025-06-20T19:23:52.629826970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\"" Jun 20 19:23:52.649921 containerd[1641]: time="2025-06-20T19:23:52.649899092Z" level=info msg="CreateContainer within sandbox \"03f18e07cec9080d1319c620baf57820524c1ff2c9f55032ca0b13160a58c620\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 20 19:23:52.723201 containerd[1641]: time="2025-06-20T19:23:52.723176985Z" level=info msg="Container f8e43bc8cc0abf1f49ba0cdb330aab6199911624dea5e98298b31742ecdc1712: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:52.828215 containerd[1641]: time="2025-06-20T19:23:52.828177706Z" level=info msg="CreateContainer within sandbox \"03f18e07cec9080d1319c620baf57820524c1ff2c9f55032ca0b13160a58c620\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f8e43bc8cc0abf1f49ba0cdb330aab6199911624dea5e98298b31742ecdc1712\"" Jun 20 19:23:52.828876 containerd[1641]: 
time="2025-06-20T19:23:52.828855908Z" level=info msg="StartContainer for \"f8e43bc8cc0abf1f49ba0cdb330aab6199911624dea5e98298b31742ecdc1712\"" Jun 20 19:23:52.838379 containerd[1641]: time="2025-06-20T19:23:52.838347169Z" level=info msg="connecting to shim f8e43bc8cc0abf1f49ba0cdb330aab6199911624dea5e98298b31742ecdc1712" address="unix:///run/containerd/s/e66571d359e5fed026ec079c6dcde68980297c766c58692ba3d2b15b67b67b15" protocol=ttrpc version=3 Jun 20 19:23:52.859112 systemd[1]: Started cri-containerd-f8e43bc8cc0abf1f49ba0cdb330aab6199911624dea5e98298b31742ecdc1712.scope - libcontainer container f8e43bc8cc0abf1f49ba0cdb330aab6199911624dea5e98298b31742ecdc1712. Jun 20 19:23:52.916875 containerd[1641]: time="2025-06-20T19:23:52.916496560Z" level=info msg="StartContainer for \"f8e43bc8cc0abf1f49ba0cdb330aab6199911624dea5e98298b31742ecdc1712\" returns successfully" Jun 20 19:23:53.339412 kubelet[2940]: E0620 19:23:53.339356 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.339412 kubelet[2940]: W0620 19:23:53.339369 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.339412 kubelet[2940]: E0620 19:23:53.339384 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.340412 kubelet[2940]: E0620 19:23:53.339652 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.340412 kubelet[2940]: W0620 19:23:53.339658 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.340412 kubelet[2940]: E0620 19:23:53.339664 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.340412 kubelet[2940]: E0620 19:23:53.340137 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.340412 kubelet[2940]: W0620 19:23:53.340296 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.340412 kubelet[2940]: E0620 19:23:53.340304 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.340907 kubelet[2940]: E0620 19:23:53.340830 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.340907 kubelet[2940]: W0620 19:23:53.340836 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.340907 kubelet[2940]: E0620 19:23:53.340843 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:53.341120 kubelet[2940]: E0620 19:23:53.341113 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.341206 kubelet[2940]: W0620 19:23:53.341156 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.341206 kubelet[2940]: E0620 19:23:53.341164 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.341405 kubelet[2940]: E0620 19:23:53.341371 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.341405 kubelet[2940]: W0620 19:23:53.341377 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.341405 kubelet[2940]: E0620 19:23:53.341385 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.341629 kubelet[2940]: E0620 19:23:53.341570 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.341629 kubelet[2940]: W0620 19:23:53.341598 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.341629 kubelet[2940]: E0620 19:23:53.341606 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.342004 kubelet[2940]: E0620 19:23:53.341959 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.342004 kubelet[2940]: W0620 19:23:53.341967 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.342004 kubelet[2940]: E0620 19:23:53.341973 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.342340 kubelet[2940]: E0620 19:23:53.342320 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.342462 kubelet[2940]: W0620 19:23:53.342372 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.342462 kubelet[2940]: E0620 19:23:53.342380 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:53.342714 kubelet[2940]: E0620 19:23:53.342683 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.342714 kubelet[2940]: W0620 19:23:53.342701 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.342923 kubelet[2940]: E0620 19:23:53.342851 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.343168 kubelet[2940]: E0620 19:23:53.343076 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.343168 kubelet[2940]: W0620 19:23:53.343085 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.343168 kubelet[2940]: E0620 19:23:53.343092 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.343424 kubelet[2940]: E0620 19:23:53.343405 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.343536 kubelet[2940]: W0620 19:23:53.343496 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.343536 kubelet[2940]: E0620 19:23:53.343506 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.343885 kubelet[2940]: E0620 19:23:53.343814 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.343885 kubelet[2940]: W0620 19:23:53.343822 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.343885 kubelet[2940]: E0620 19:23:53.343828 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.344170 kubelet[2940]: E0620 19:23:53.344154 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.344214 kubelet[2940]: W0620 19:23:53.344207 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.344285 kubelet[2940]: E0620 19:23:53.344246 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:53.344442 kubelet[2940]: E0620 19:23:53.344423 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.344530 kubelet[2940]: W0620 19:23:53.344464 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.344530 kubelet[2940]: E0620 19:23:53.344472 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.365239 kubelet[2940]: E0620 19:23:53.365146 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.365239 kubelet[2940]: W0620 19:23:53.365163 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.365239 kubelet[2940]: E0620 19:23:53.365175 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.365398 kubelet[2940]: E0620 19:23:53.365392 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.365436 kubelet[2940]: W0620 19:23:53.365430 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.365521 kubelet[2940]: E0620 19:23:53.365467 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.365571 kubelet[2940]: E0620 19:23:53.365566 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.365600 kubelet[2940]: W0620 19:23:53.365595 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.365634 kubelet[2940]: E0620 19:23:53.365629 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.365819 kubelet[2940]: E0620 19:23:53.365766 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.365819 kubelet[2940]: W0620 19:23:53.365772 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.365819 kubelet[2940]: E0620 19:23:53.365777 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:53.365909 kubelet[2940]: E0620 19:23:53.365903 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.365941 kubelet[2940]: W0620 19:23:53.365936 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.366007 kubelet[2940]: E0620 19:23:53.365968 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.366056 kubelet[2940]: E0620 19:23:53.366051 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.366088 kubelet[2940]: W0620 19:23:53.366083 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.366123 kubelet[2940]: E0620 19:23:53.366118 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.366469 kubelet[2940]: E0620 19:23:53.366232 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.366469 kubelet[2940]: W0620 19:23:53.366238 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.366469 kubelet[2940]: E0620 19:23:53.366242 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.366560 kubelet[2940]: E0620 19:23:53.366554 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.366593 kubelet[2940]: W0620 19:23:53.366588 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.366629 kubelet[2940]: E0620 19:23:53.366624 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.374826 kubelet[2940]: E0620 19:23:53.366867 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.374826 kubelet[2940]: W0620 19:23:53.366872 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.374826 kubelet[2940]: E0620 19:23:53.366877 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:53.374826 kubelet[2940]: E0620 19:23:53.366953 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.374826 kubelet[2940]: W0620 19:23:53.366957 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.374826 kubelet[2940]: E0620 19:23:53.366962 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.374826 kubelet[2940]: E0620 19:23:53.367030 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.374826 kubelet[2940]: W0620 19:23:53.367034 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.374826 kubelet[2940]: E0620 19:23:53.367039 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.374826 kubelet[2940]: E0620 19:23:53.367116 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.375001 kubelet[2940]: W0620 19:23:53.367121 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.375001 kubelet[2940]: E0620 19:23:53.367126 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.375001 kubelet[2940]: E0620 19:23:53.367744 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.375001 kubelet[2940]: W0620 19:23:53.367749 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.375001 kubelet[2940]: E0620 19:23:53.367755 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.375001 kubelet[2940]: E0620 19:23:53.367845 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.375001 kubelet[2940]: W0620 19:23:53.367851 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.375001 kubelet[2940]: E0620 19:23:53.367857 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:53.375001 kubelet[2940]: E0620 19:23:53.367938 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.375001 kubelet[2940]: W0620 19:23:53.367943 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.388137 kubelet[2940]: E0620 19:23:53.367948 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.388137 kubelet[2940]: E0620 19:23:53.368021 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.388137 kubelet[2940]: W0620 19:23:53.368025 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.388137 kubelet[2940]: E0620 19:23:53.368030 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.388137 kubelet[2940]: E0620 19:23:53.368110 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.388137 kubelet[2940]: W0620 19:23:53.368115 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.388137 kubelet[2940]: E0620 19:23:53.368119 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 20 19:23:53.388137 kubelet[2940]: E0620 19:23:53.368267 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 20 19:23:53.388137 kubelet[2940]: W0620 19:23:53.368272 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 20 19:23:53.388137 kubelet[2940]: E0620 19:23:53.368277 2940 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 20 19:23:53.388303 kubelet[2940]: I0620 19:23:53.383074 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9786c9d7f-tghqs" podStartSLOduration=1.748280203 podStartE2EDuration="4.383063841s" podCreationTimestamp="2025-06-20 19:23:49 +0000 UTC" firstStartedPulling="2025-06-20 19:23:49.994688729 +0000 UTC m=+16.942076369" lastFinishedPulling="2025-06-20 19:23:52.629472365 +0000 UTC m=+19.576860007" observedRunningTime="2025-06-20 19:23:53.382736027 +0000 UTC m=+20.330123677" watchObservedRunningTime="2025-06-20 19:23:53.383063841 +0000 UTC m=+20.330451486" Jun 20 19:23:54.013726 containerd[1641]: time="2025-06-20T19:23:54.013683124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:54.014403 containerd[1641]: time="2025-06-20T19:23:54.014384198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1: active requests=0, bytes read=4441627" Jun 20 19:23:54.014765 containerd[1641]: time="2025-06-20T19:23:54.014742940Z" level=info msg="ImageCreate event name:\"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:54.016308 containerd[1641]: time="2025-06-20T19:23:54.016288364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:54.016892 containerd[1641]: time="2025-06-20T19:23:54.016871060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" with image id \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b9246fe925ee5b8a5c7dfe1d1c3c29063cbfd512663088b135a015828c20401e\", size \"5934290\" in 1.386912331s" Jun 20 19:23:54.016929 containerd[1641]: time="2025-06-20T19:23:54.016898443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.1\" returns image reference \"sha256:2eb0d46821080fd806e1b7f8ca42889800fcb3f0af912b6fbb09a13b21454d48\"" Jun 20 19:23:54.019941 containerd[1641]: time="2025-06-20T19:23:54.019906284Z" level=info msg="CreateContainer within sandbox \"f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 20 19:23:54.026302 containerd[1641]: time="2025-06-20T19:23:54.024894748Z" level=info msg="Container f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:54.032022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075819500.mount: Deactivated successfully. 
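The repeating E/W/E triplets above come from the kubelet's FlexVolume prober: on every probe it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and tries to unmarshal the JSON the driver prints on stdout. Because the uds binary has not been installed yet, the exec fails ("executable file not found in $PATH"), stdout stays empty, and decoding "" yields "unexpected end of JSON input"; the pod2daemon-flexvol image pulled just above is what normally supplies that binary. For reference, a minimal sketch (in Go, purely illustrative, not the Calico driver) of the init handshake the kubelet expects looks like this:

// flexvol-init-stub.go: illustrative sketch of a FlexVolume driver's "init"
// response. The kubelet execs the driver binary with "init" and parses the
// JSON printed on stdout; an empty stdout is what produces the
// "unexpected end of JSON input" errors in the entries above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type capabilities struct {
	Attach bool `json:"attach"`
}

type driverStatus struct {
	Status       string        `json:"status"`
	Message      string        `json:"message,omitempty"`
	Capabilities *capabilities `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and declare that no controller-side attach is needed.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: &capabilities{Attach: false},
		})
		fmt.Println(string(out))
		return
	}
	// Calls this stub does not handle are reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}

Once a driver that answers init with a Success status is present in that directory, the probe errors stop.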
Jun 20 19:23:54.042753 containerd[1641]: time="2025-06-20T19:23:54.034537043Z" level=info msg="CreateContainer within sandbox \"f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be\"" Jun 20 19:23:54.043605 containerd[1641]: time="2025-06-20T19:23:54.043571976Z" level=info msg="StartContainer for \"f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be\"" Jun 20 19:23:54.044858 containerd[1641]: time="2025-06-20T19:23:54.044832779Z" level=info msg="connecting to shim f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be" address="unix:///run/containerd/s/1e85f8968203a1d39a3139c7fececa93fefa7fea45414fcc5fd3e15a56038b5d" protocol=ttrpc version=3 Jun 20 19:23:54.067823 systemd[1]: Started cri-containerd-f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be.scope - libcontainer container f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be. Jun 20 19:23:54.103301 systemd[1]: cri-containerd-f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be.scope: Deactivated successfully. Jun 20 19:23:54.107190 containerd[1641]: time="2025-06-20T19:23:54.106078748Z" level=info msg="StartContainer for \"f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be\" returns successfully" Jun 20 19:23:54.116454 containerd[1641]: time="2025-06-20T19:23:54.116427272Z" level=info msg="received exit event container_id:\"f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be\" id:\"f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be\" pid:3598 exited_at:{seconds:1750447434 nanos:104825153}" Jun 20 19:23:54.116761 containerd[1641]: time="2025-06-20T19:23:54.116747115Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be\" id:\"f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be\" pid:3598 exited_at:{seconds:1750447434 nanos:104825153}" Jun 20 19:23:54.131944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5178d286d814c192858db8972daae7f1644281303bcc8dcd232d23c5766f2be-rootfs.mount: Deactivated successfully. 
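The flexvol-driver container above is short-lived by design: Calico's pod2daemon-flexvol image typically just copies the uds driver into the kubelet's FlexVolume plugin directory and exits, which is why systemd deactivates its scope almost immediately after StartContainer returns. The exited_at field in the TaskExit event is a Unix seconds/nanos pair; a tiny helper (illustrative only) converts it back to the journal's wall-clock format:

// exited-at.go: converts the seconds/nanos pair from the TaskExit event above
// into a human-readable UTC time so it can be correlated with the journal lines.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the TaskExit event for the flexvol-driver container:
	// exited_at:{seconds:1750447434 nanos:104825153}
	exitedAt := time.Unix(1750447434, 104825153).UTC()
	fmt.Println(exitedAt.Format("Jan 02 15:04:05.000000"))
	// Prints "Jun 20 19:23:54.104825", matching the surrounding entries.
}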
Jun 20 19:23:54.224631 kubelet[2940]: E0620 19:23:54.224589 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:23:54.365549 kubelet[2940]: I0620 19:23:54.365250 2940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:23:55.328247 containerd[1641]: time="2025-06-20T19:23:55.328219940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\"" Jun 20 19:23:56.224726 kubelet[2940]: E0620 19:23:56.224488 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:23:57.500584 kubelet[2940]: I0620 19:23:57.500443 2940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:23:58.224432 kubelet[2940]: E0620 19:23:58.224393 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:23:59.766920 containerd[1641]: time="2025-06-20T19:23:59.766878659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:59.776750 containerd[1641]: time="2025-06-20T19:23:59.776704054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.1: active requests=0, bytes read=70405879" Jun 20 19:23:59.788665 containerd[1641]: time="2025-06-20T19:23:59.788613189Z" level=info msg="ImageCreate event name:\"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:59.803653 containerd[1641]: time="2025-06-20T19:23:59.803608878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:23:59.804862 containerd[1641]: time="2025-06-20T19:23:59.804827757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.1\" with image id \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:930b33311eec7523e36d95977281681d74d33efff937302b26516b2bc03a5fe9\", size \"71898582\" in 4.476545975s" Jun 20 19:23:59.805013 containerd[1641]: time="2025-06-20T19:23:59.804943200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.1\" returns image reference \"sha256:0d2cd976ff6ee711927e02b1c2ba0b532275ff85d5dc05fc413cc660d5bec68e\"" Jun 20 19:23:59.808439 containerd[1641]: time="2025-06-20T19:23:59.808414340Z" level=info msg="CreateContainer within sandbox \"f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 20 19:23:59.822614 containerd[1641]: time="2025-06-20T19:23:59.821461044Z" level=info 
msg="Container 4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:23:59.824033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296632246.mount: Deactivated successfully. Jun 20 19:23:59.832020 containerd[1641]: time="2025-06-20T19:23:59.831987697Z" level=info msg="CreateContainer within sandbox \"f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba\"" Jun 20 19:23:59.832496 containerd[1641]: time="2025-06-20T19:23:59.832334704Z" level=info msg="StartContainer for \"4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba\"" Jun 20 19:23:59.833914 containerd[1641]: time="2025-06-20T19:23:59.833873619Z" level=info msg="connecting to shim 4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba" address="unix:///run/containerd/s/1e85f8968203a1d39a3139c7fececa93fefa7fea45414fcc5fd3e15a56038b5d" protocol=ttrpc version=3 Jun 20 19:23:59.859832 systemd[1]: Started cri-containerd-4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba.scope - libcontainer container 4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba. Jun 20 19:23:59.894639 containerd[1641]: time="2025-06-20T19:23:59.894560760Z" level=info msg="StartContainer for \"4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba\" returns successfully" Jun 20 19:24:00.224047 kubelet[2940]: E0620 19:24:00.223971 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:24:02.224067 kubelet[2940]: E0620 19:24:02.223999 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:24:02.701826 systemd[1]: cri-containerd-4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba.scope: Deactivated successfully. Jun 20 19:24:02.702024 systemd[1]: cri-containerd-4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba.scope: Consumed 313ms CPU time, 164.3M memory peak, 12K read from disk, 171.2M written to disk. 
Jun 20 19:24:02.777757 containerd[1641]: time="2025-06-20T19:24:02.777730947Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba\" id:\"4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba\" pid:3658 exited_at:{seconds:1750447442 nanos:772160158}" Jun 20 19:24:02.778502 containerd[1641]: time="2025-06-20T19:24:02.778477293Z" level=info msg="received exit event container_id:\"4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba\" id:\"4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba\" pid:3658 exited_at:{seconds:1750447442 nanos:772160158}" Jun 20 19:24:02.824705 kubelet[2940]: I0620 19:24:02.823856 2940 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:24:02.872297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d2ccdc2f19fa6ab5bf6072a3413133e8113338ad37584daf9336fb81e8077ba-rootfs.mount: Deactivated successfully. Jun 20 19:24:02.970676 systemd[1]: Created slice kubepods-burstable-pod6532dc34_4519_4956_bfd1_87c2dd630634.slice - libcontainer container kubepods-burstable-pod6532dc34_4519_4956_bfd1_87c2dd630634.slice. Jun 20 19:24:02.979210 systemd[1]: Created slice kubepods-besteffort-pod8fe0a6ae_35c9_4521_95a4_fc8967e01b20.slice - libcontainer container kubepods-besteffort-pod8fe0a6ae_35c9_4521_95a4_fc8967e01b20.slice. Jun 20 19:24:02.985085 systemd[1]: Created slice kubepods-burstable-pod4412edbc_0778_486f_a5f0_73708cc96255.slice - libcontainer container kubepods-burstable-pod4412edbc_0778_486f_a5f0_73708cc96255.slice. Jun 20 19:24:02.993276 systemd[1]: Created slice kubepods-besteffort-pod98b29a09_d386_4da1_9313_3058130bfd33.slice - libcontainer container kubepods-besteffort-pod98b29a09_d386_4da1_9313_3058130bfd33.slice. Jun 20 19:24:02.998358 systemd[1]: Created slice kubepods-besteffort-podb9f1f0e9_d3be_4b86_a742_94818c1000f9.slice - libcontainer container kubepods-besteffort-podb9f1f0e9_d3be_4b86_a742_94818c1000f9.slice. Jun 20 19:24:03.004238 systemd[1]: Created slice kubepods-besteffort-pod89c2012c_00da_48d2_9d38_40ca4aaf4dc5.slice - libcontainer container kubepods-besteffort-pod89c2012c_00da_48d2_9d38_40ca4aaf4dc5.slice. Jun 20 19:24:03.008942 systemd[1]: Created slice kubepods-besteffort-podda54477e_0dce_43c1_91c2_8f7a222c7684.slice - libcontainer container kubepods-besteffort-podda54477e_0dce_43c1_91c2_8f7a222c7684.slice. 
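[Annotation] The "Created slice kubepods-..." entries above show the kubelet's systemd cgroup naming: the pod's QoS class plus its UID with dashes turned into underscores. The sketch below reproduces the leaf unit names seen in the log; pod_slice is an illustrative helper name, and the sketch only covers the burstable/besteffort classes that appear here (guaranteed pods are named without a QoS segment).

# Reconstruct the leaf slice unit names seen above from pod QoS class + UID
# (sketch of the kubelet's systemd cgroup-driver naming; dashes in the UID
#  become underscores in the unit name).
def pod_slice(qos: str, uid: str) -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("burstable", "6532dc34-4519-4956-bfd1-87c2dd630634"))
# -> kubepods-burstable-pod6532dc34_4519_4956_bfd1_87c2dd630634.slice
print(pod_slice("besteffort", "da54477e-0dce-43c1-91c2-8f7a222c7684"))
# -> kubepods-besteffort-podda54477e_0dce_43c1_91c2_8f7a222c7684.slice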
Jun 20 19:24:03.101856 kubelet[2940]: I0620 19:24:03.101820 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98b29a09-d386-4da1-9313-3058130bfd33-config\") pod \"goldmane-5bd85449d4-8pm9l\" (UID: \"98b29a09-d386-4da1-9313-3058130bfd33\") " pod="calico-system/goldmane-5bd85449d4-8pm9l" Jun 20 19:24:03.101856 kubelet[2940]: I0620 19:24:03.101858 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/98b29a09-d386-4da1-9313-3058130bfd33-goldmane-key-pair\") pod \"goldmane-5bd85449d4-8pm9l\" (UID: \"98b29a09-d386-4da1-9313-3058130bfd33\") " pod="calico-system/goldmane-5bd85449d4-8pm9l" Jun 20 19:24:03.108130 kubelet[2940]: I0620 19:24:03.101873 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnpkj\" (UniqueName: \"kubernetes.io/projected/4412edbc-0778-486f-a5f0-73708cc96255-kube-api-access-vnpkj\") pod \"coredns-668d6bf9bc-bl9m6\" (UID: \"4412edbc-0778-486f-a5f0-73708cc96255\") " pod="kube-system/coredns-668d6bf9bc-bl9m6" Jun 20 19:24:03.108130 kubelet[2940]: I0620 19:24:03.101883 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwr46\" (UniqueName: \"kubernetes.io/projected/89c2012c-00da-48d2-9d38-40ca4aaf4dc5-kube-api-access-gwr46\") pod \"calico-apiserver-6647f6b4b7-2w6xq\" (UID: \"89c2012c-00da-48d2-9d38-40ca4aaf4dc5\") " pod="calico-apiserver/calico-apiserver-6647f6b4b7-2w6xq" Jun 20 19:24:03.108130 kubelet[2940]: I0620 19:24:03.101896 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6hv\" (UniqueName: \"kubernetes.io/projected/da54477e-0dce-43c1-91c2-8f7a222c7684-kube-api-access-br6hv\") pod \"calico-kube-controllers-98fb94684-dcwlf\" (UID: \"da54477e-0dce-43c1-91c2-8f7a222c7684\") " pod="calico-system/calico-kube-controllers-98fb94684-dcwlf" Jun 20 19:24:03.108130 kubelet[2940]: I0620 19:24:03.101906 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-backend-key-pair\") pod \"whisker-fd8d89cbd-v6v7p\" (UID: \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\") " pod="calico-system/whisker-fd8d89cbd-v6v7p" Jun 20 19:24:03.108130 kubelet[2940]: I0620 19:24:03.101918 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-ca-bundle\") pod \"whisker-fd8d89cbd-v6v7p\" (UID: \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\") " pod="calico-system/whisker-fd8d89cbd-v6v7p" Jun 20 19:24:03.108258 kubelet[2940]: I0620 19:24:03.101930 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5t6z\" (UniqueName: \"kubernetes.io/projected/8fe0a6ae-35c9-4521-95a4-fc8967e01b20-kube-api-access-w5t6z\") pod \"calico-apiserver-6647f6b4b7-8zcwx\" (UID: \"8fe0a6ae-35c9-4521-95a4-fc8967e01b20\") " pod="calico-apiserver/calico-apiserver-6647f6b4b7-8zcwx" Jun 20 19:24:03.108258 kubelet[2940]: I0620 19:24:03.101941 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6532dc34-4519-4956-bfd1-87c2dd630634-config-volume\") pod \"coredns-668d6bf9bc-fhxxd\" (UID: \"6532dc34-4519-4956-bfd1-87c2dd630634\") " pod="kube-system/coredns-668d6bf9bc-fhxxd" Jun 20 19:24:03.108258 kubelet[2940]: I0620 19:24:03.101955 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zccsb\" (UniqueName: \"kubernetes.io/projected/98b29a09-d386-4da1-9313-3058130bfd33-kube-api-access-zccsb\") pod \"goldmane-5bd85449d4-8pm9l\" (UID: \"98b29a09-d386-4da1-9313-3058130bfd33\") " pod="calico-system/goldmane-5bd85449d4-8pm9l" Jun 20 19:24:03.108258 kubelet[2940]: I0620 19:24:03.101967 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8fe0a6ae-35c9-4521-95a4-fc8967e01b20-calico-apiserver-certs\") pod \"calico-apiserver-6647f6b4b7-8zcwx\" (UID: \"8fe0a6ae-35c9-4521-95a4-fc8967e01b20\") " pod="calico-apiserver/calico-apiserver-6647f6b4b7-8zcwx" Jun 20 19:24:03.108258 kubelet[2940]: I0620 19:24:03.101976 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72ff4\" (UniqueName: \"kubernetes.io/projected/b9f1f0e9-d3be-4b86-a742-94818c1000f9-kube-api-access-72ff4\") pod \"whisker-fd8d89cbd-v6v7p\" (UID: \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\") " pod="calico-system/whisker-fd8d89cbd-v6v7p" Jun 20 19:24:03.113809 kubelet[2940]: I0620 19:24:03.101988 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da54477e-0dce-43c1-91c2-8f7a222c7684-tigera-ca-bundle\") pod \"calico-kube-controllers-98fb94684-dcwlf\" (UID: \"da54477e-0dce-43c1-91c2-8f7a222c7684\") " pod="calico-system/calico-kube-controllers-98fb94684-dcwlf" Jun 20 19:24:03.113809 kubelet[2940]: I0620 19:24:03.102001 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4412edbc-0778-486f-a5f0-73708cc96255-config-volume\") pod \"coredns-668d6bf9bc-bl9m6\" (UID: \"4412edbc-0778-486f-a5f0-73708cc96255\") " pod="kube-system/coredns-668d6bf9bc-bl9m6" Jun 20 19:24:03.113809 kubelet[2940]: I0620 19:24:03.102012 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89c2012c-00da-48d2-9d38-40ca4aaf4dc5-calico-apiserver-certs\") pod \"calico-apiserver-6647f6b4b7-2w6xq\" (UID: \"89c2012c-00da-48d2-9d38-40ca4aaf4dc5\") " pod="calico-apiserver/calico-apiserver-6647f6b4b7-2w6xq" Jun 20 19:24:03.113809 kubelet[2940]: I0620 19:24:03.102020 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbtm\" (UniqueName: \"kubernetes.io/projected/6532dc34-4519-4956-bfd1-87c2dd630634-kube-api-access-gnbtm\") pod \"coredns-668d6bf9bc-fhxxd\" (UID: \"6532dc34-4519-4956-bfd1-87c2dd630634\") " pod="kube-system/coredns-668d6bf9bc-fhxxd" Jun 20 19:24:03.113809 kubelet[2940]: I0620 19:24:03.102035 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98b29a09-d386-4da1-9313-3058130bfd33-goldmane-ca-bundle\") pod \"goldmane-5bd85449d4-8pm9l\" (UID: \"98b29a09-d386-4da1-9313-3058130bfd33\") " 
pod="calico-system/goldmane-5bd85449d4-8pm9l" Jun 20 19:24:03.316231 containerd[1641]: time="2025-06-20T19:24:03.316098820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-8pm9l,Uid:98b29a09-d386-4da1-9313-3058130bfd33,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:03.321655 containerd[1641]: time="2025-06-20T19:24:03.321566187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd8d89cbd-v6v7p,Uid:b9f1f0e9-d3be-4b86-a742-94818c1000f9,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:03.321655 containerd[1641]: time="2025-06-20T19:24:03.321653038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-2w6xq,Uid:89c2012c-00da-48d2-9d38-40ca4aaf4dc5,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:03.321787 containerd[1641]: time="2025-06-20T19:24:03.321774812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98fb94684-dcwlf,Uid:da54477e-0dce-43c1-91c2-8f7a222c7684,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:03.537532 containerd[1641]: time="2025-06-20T19:24:03.537190743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\"" Jun 20 19:24:03.575941 containerd[1641]: time="2025-06-20T19:24:03.575604258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhxxd,Uid:6532dc34-4519-4956-bfd1-87c2dd630634,Namespace:kube-system,Attempt:0,}" Jun 20 19:24:03.581717 containerd[1641]: time="2025-06-20T19:24:03.581592843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-8zcwx,Uid:8fe0a6ae-35c9-4521-95a4-fc8967e01b20,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:03.590319 containerd[1641]: time="2025-06-20T19:24:03.590299416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bl9m6,Uid:4412edbc-0778-486f-a5f0-73708cc96255,Namespace:kube-system,Attempt:0,}" Jun 20 19:24:04.237383 systemd[1]: Created slice kubepods-besteffort-podde04d4f2_cb39_4862_aa3d_bec53c847188.slice - libcontainer container kubepods-besteffort-podde04d4f2_cb39_4862_aa3d_bec53c847188.slice. 
Jun 20 19:24:04.255202 containerd[1641]: time="2025-06-20T19:24:04.255174258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w9w9,Uid:de04d4f2-cb39-4862-aa3d-bec53c847188,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:04.417817 containerd[1641]: time="2025-06-20T19:24:04.417786483Z" level=error msg="Failed to destroy network for sandbox \"90a0259c030af20a128b47b83db28f159e456fd2b138acbfd2d997e803ad3571\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.425770 containerd[1641]: time="2025-06-20T19:24:04.418770276Z" level=error msg="Failed to destroy network for sandbox \"8489ed58d60af9be68e26c5b8abd1f78991c9c2b39c280e74dc5bd33db618159\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.425770 containerd[1641]: time="2025-06-20T19:24:04.423485858Z" level=error msg="Failed to destroy network for sandbox \"98e568949777c25a7cec60370f5fd103ab67eda60eed1df4995f3dc96702b813\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.419294 systemd[1]: run-netns-cni\x2d31746489\x2df077\x2d22e7\x2d3857\x2d5c91a60d86dc.mount: Deactivated successfully. Jun 20 19:24:04.424776 systemd[1]: run-netns-cni\x2da2392f51\x2d3178\x2df5f6\x2d1671\x2d03984571a7f4.mount: Deactivated successfully. Jun 20 19:24:04.426332 containerd[1641]: time="2025-06-20T19:24:04.426203002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-2w6xq,Uid:89c2012c-00da-48d2-9d38-40ca4aaf4dc5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a0259c030af20a128b47b83db28f159e456fd2b138acbfd2d997e803ad3571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.427996 systemd[1]: run-netns-cni\x2d24ed605c\x2d1ea7\x2d043b\x2d97f3\x2dc4501a694d68.mount: Deactivated successfully. 
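[Annotation] Every sandbox failure above and below reports the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, which calico-node only writes once it is running with /var/lib/calico mounted, and that file does not exist yet. The snippet below is a minimal, hypothetical diagnostic mirroring that check; calico_node_ready is an illustrative name and is not part of Calico itself.

# Minimal reproduction of the check behind the CNI errors in this log:
# stat /var/lib/calico/nodename and report the same hint the plugin gives.
import os

NODENAME = "/var/lib/calico/nodename"

def calico_node_ready() -> bool:
    try:
        with open(NODENAME) as f:
            print(f"calico/node has registered this host as {f.read().strip()!r}")
        return True
    except FileNotFoundError:
        print(f"{NODENAME} missing: check that the calico/node container "
              "is running and has mounted /var/lib/calico/")
        return False

if __name__ == "__main__":
    calico_node_ready()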
Jun 20 19:24:04.430556 containerd[1641]: time="2025-06-20T19:24:04.430528310Z" level=error msg="Failed to destroy network for sandbox \"0151057c4e704e36927b5b5f676bd5c1fd79723617d58550cef8e70fe5a283a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.435009 containerd[1641]: time="2025-06-20T19:24:04.430635868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-8pm9l,Uid:98b29a09-d386-4da1-9313-3058130bfd33,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e568949777c25a7cec60370f5fd103ab67eda60eed1df4995f3dc96702b813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.435009 containerd[1641]: time="2025-06-20T19:24:04.433056747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w9w9,Uid:de04d4f2-cb39-4862-aa3d-bec53c847188,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8489ed58d60af9be68e26c5b8abd1f78991c9c2b39c280e74dc5bd33db618159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.435009 containerd[1641]: time="2025-06-20T19:24:04.433110265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bl9m6,Uid:4412edbc-0778-486f-a5f0-73708cc96255,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0151057c4e704e36927b5b5f676bd5c1fd79723617d58550cef8e70fe5a283a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.435137 kubelet[2940]: E0620 19:24:04.430783 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a0259c030af20a128b47b83db28f159e456fd2b138acbfd2d997e803ad3571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.435137 kubelet[2940]: E0620 19:24:04.430850 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a0259c030af20a128b47b83db28f159e456fd2b138acbfd2d997e803ad3571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6647f6b4b7-2w6xq" Jun 20 19:24:04.435137 kubelet[2940]: E0620 19:24:04.430867 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a0259c030af20a128b47b83db28f159e456fd2b138acbfd2d997e803ad3571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6647f6b4b7-2w6xq" Jun 20 19:24:04.435351 kubelet[2940]: E0620 19:24:04.430937 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6647f6b4b7-2w6xq_calico-apiserver(89c2012c-00da-48d2-9d38-40ca4aaf4dc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6647f6b4b7-2w6xq_calico-apiserver(89c2012c-00da-48d2-9d38-40ca4aaf4dc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90a0259c030af20a128b47b83db28f159e456fd2b138acbfd2d997e803ad3571\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6647f6b4b7-2w6xq" podUID="89c2012c-00da-48d2-9d38-40ca4aaf4dc5" Jun 20 19:24:04.435351 kubelet[2940]: E0620 19:24:04.431093 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e568949777c25a7cec60370f5fd103ab67eda60eed1df4995f3dc96702b813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.435351 kubelet[2940]: E0620 19:24:04.431118 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e568949777c25a7cec60370f5fd103ab67eda60eed1df4995f3dc96702b813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-8pm9l" Jun 20 19:24:04.435422 kubelet[2940]: E0620 19:24:04.431129 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e568949777c25a7cec60370f5fd103ab67eda60eed1df4995f3dc96702b813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5bd85449d4-8pm9l" Jun 20 19:24:04.435422 kubelet[2940]: E0620 19:24:04.431149 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5bd85449d4-8pm9l_calico-system(98b29a09-d386-4da1-9313-3058130bfd33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5bd85449d4-8pm9l_calico-system(98b29a09-d386-4da1-9313-3058130bfd33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98e568949777c25a7cec60370f5fd103ab67eda60eed1df4995f3dc96702b813\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5bd85449d4-8pm9l" podUID="98b29a09-d386-4da1-9313-3058130bfd33" Jun 20 19:24:04.435422 kubelet[2940]: E0620 19:24:04.433210 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0151057c4e704e36927b5b5f676bd5c1fd79723617d58550cef8e70fe5a283a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jun 20 19:24:04.435486 kubelet[2940]: E0620 19:24:04.433232 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0151057c4e704e36927b5b5f676bd5c1fd79723617d58550cef8e70fe5a283a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bl9m6" Jun 20 19:24:04.435486 kubelet[2940]: E0620 19:24:04.433242 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0151057c4e704e36927b5b5f676bd5c1fd79723617d58550cef8e70fe5a283a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-bl9m6" Jun 20 19:24:04.435486 kubelet[2940]: E0620 19:24:04.433260 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-bl9m6_kube-system(4412edbc-0778-486f-a5f0-73708cc96255)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-bl9m6_kube-system(4412edbc-0778-486f-a5f0-73708cc96255)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0151057c4e704e36927b5b5f676bd5c1fd79723617d58550cef8e70fe5a283a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-bl9m6" podUID="4412edbc-0778-486f-a5f0-73708cc96255" Jun 20 19:24:04.435550 kubelet[2940]: E0620 19:24:04.433280 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8489ed58d60af9be68e26c5b8abd1f78991c9c2b39c280e74dc5bd33db618159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.435550 kubelet[2940]: E0620 19:24:04.433291 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8489ed58d60af9be68e26c5b8abd1f78991c9c2b39c280e74dc5bd33db618159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4w9w9" Jun 20 19:24:04.435550 kubelet[2940]: E0620 19:24:04.433298 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8489ed58d60af9be68e26c5b8abd1f78991c9c2b39c280e74dc5bd33db618159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4w9w9" Jun 20 19:24:04.436002 kubelet[2940]: E0620 19:24:04.433311 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4w9w9_calico-system(de04d4f2-cb39-4862-aa3d-bec53c847188)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-4w9w9_calico-system(de04d4f2-cb39-4862-aa3d-bec53c847188)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8489ed58d60af9be68e26c5b8abd1f78991c9c2b39c280e74dc5bd33db618159\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4w9w9" podUID="de04d4f2-cb39-4862-aa3d-bec53c847188" Jun 20 19:24:04.436045 containerd[1641]: time="2025-06-20T19:24:04.435561994Z" level=error msg="Failed to destroy network for sandbox \"f4f9fda2a80e38ea865cf64071f3d15944517dd63b47b4f81745d6dc383118ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.436045 containerd[1641]: time="2025-06-20T19:24:04.435820220Z" level=error msg="Failed to destroy network for sandbox \"36de4c85d2b1d5bc099f7e420352e9b4d98fad3a578233ab171c5f0f9b64d2b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.438297 containerd[1641]: time="2025-06-20T19:24:04.438269365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-8zcwx,Uid:8fe0a6ae-35c9-4521-95a4-fc8967e01b20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4f9fda2a80e38ea865cf64071f3d15944517dd63b47b4f81745d6dc383118ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.438860 kubelet[2940]: E0620 19:24:04.438839 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4f9fda2a80e38ea865cf64071f3d15944517dd63b47b4f81745d6dc383118ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.438911 kubelet[2940]: E0620 19:24:04.438871 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4f9fda2a80e38ea865cf64071f3d15944517dd63b47b4f81745d6dc383118ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6647f6b4b7-8zcwx" Jun 20 19:24:04.438911 kubelet[2940]: E0620 19:24:04.438885 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4f9fda2a80e38ea865cf64071f3d15944517dd63b47b4f81745d6dc383118ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6647f6b4b7-8zcwx" Jun 20 19:24:04.438958 kubelet[2940]: E0620 19:24:04.438909 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6647f6b4b7-8zcwx_calico-apiserver(8fe0a6ae-35c9-4521-95a4-fc8967e01b20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6647f6b4b7-8zcwx_calico-apiserver(8fe0a6ae-35c9-4521-95a4-fc8967e01b20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4f9fda2a80e38ea865cf64071f3d15944517dd63b47b4f81745d6dc383118ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6647f6b4b7-8zcwx" podUID="8fe0a6ae-35c9-4521-95a4-fc8967e01b20" Jun 20 19:24:04.440817 containerd[1641]: time="2025-06-20T19:24:04.440792950Z" level=error msg="Failed to destroy network for sandbox \"e3a6ca0eb12614c12de852f0665d2a65859eb723515eddb6f566a2ee306d7094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.441277 containerd[1641]: time="2025-06-20T19:24:04.441257210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhxxd,Uid:6532dc34-4519-4956-bfd1-87c2dd630634,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36de4c85d2b1d5bc099f7e420352e9b4d98fad3a578233ab171c5f0f9b64d2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.441556 kubelet[2940]: E0620 19:24:04.441537 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36de4c85d2b1d5bc099f7e420352e9b4d98fad3a578233ab171c5f0f9b64d2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.441594 kubelet[2940]: E0620 19:24:04.441566 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36de4c85d2b1d5bc099f7e420352e9b4d98fad3a578233ab171c5f0f9b64d2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhxxd" Jun 20 19:24:04.441594 kubelet[2940]: E0620 19:24:04.441583 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36de4c85d2b1d5bc099f7e420352e9b4d98fad3a578233ab171c5f0f9b64d2b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhxxd" Jun 20 19:24:04.441640 kubelet[2940]: E0620 19:24:04.441604 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fhxxd_kube-system(6532dc34-4519-4956-bfd1-87c2dd630634)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fhxxd_kube-system(6532dc34-4519-4956-bfd1-87c2dd630634)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"36de4c85d2b1d5bc099f7e420352e9b4d98fad3a578233ab171c5f0f9b64d2b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fhxxd" podUID="6532dc34-4519-4956-bfd1-87c2dd630634" Jun 20 19:24:04.442122 containerd[1641]: time="2025-06-20T19:24:04.441745079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98fb94684-dcwlf,Uid:da54477e-0dce-43c1-91c2-8f7a222c7684,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a6ca0eb12614c12de852f0665d2a65859eb723515eddb6f566a2ee306d7094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.442167 kubelet[2940]: E0620 19:24:04.441813 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a6ca0eb12614c12de852f0665d2a65859eb723515eddb6f566a2ee306d7094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.442167 kubelet[2940]: E0620 19:24:04.441828 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a6ca0eb12614c12de852f0665d2a65859eb723515eddb6f566a2ee306d7094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98fb94684-dcwlf" Jun 20 19:24:04.442167 kubelet[2940]: E0620 19:24:04.441837 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3a6ca0eb12614c12de852f0665d2a65859eb723515eddb6f566a2ee306d7094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-98fb94684-dcwlf" Jun 20 19:24:04.442228 kubelet[2940]: E0620 19:24:04.441858 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-98fb94684-dcwlf_calico-system(da54477e-0dce-43c1-91c2-8f7a222c7684)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-98fb94684-dcwlf_calico-system(da54477e-0dce-43c1-91c2-8f7a222c7684)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3a6ca0eb12614c12de852f0665d2a65859eb723515eddb6f566a2ee306d7094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-98fb94684-dcwlf" podUID="da54477e-0dce-43c1-91c2-8f7a222c7684" Jun 20 19:24:04.444299 containerd[1641]: time="2025-06-20T19:24:04.444239456Z" level=error msg="Failed to destroy network for sandbox \"5949651d79ca85083d2b9efc45553438bb435465c3fe3d5ce107df7c9cdf31cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.444782 containerd[1641]: time="2025-06-20T19:24:04.444746148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fd8d89cbd-v6v7p,Uid:b9f1f0e9-d3be-4b86-a742-94818c1000f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5949651d79ca85083d2b9efc45553438bb435465c3fe3d5ce107df7c9cdf31cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.445170 kubelet[2940]: E0620 19:24:04.444968 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5949651d79ca85083d2b9efc45553438bb435465c3fe3d5ce107df7c9cdf31cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 20 19:24:04.445308 kubelet[2940]: E0620 19:24:04.445242 2940 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5949651d79ca85083d2b9efc45553438bb435465c3fe3d5ce107df7c9cdf31cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fd8d89cbd-v6v7p" Jun 20 19:24:04.445308 kubelet[2940]: E0620 19:24:04.445258 2940 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5949651d79ca85083d2b9efc45553438bb435465c3fe3d5ce107df7c9cdf31cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fd8d89cbd-v6v7p" Jun 20 19:24:04.445308 kubelet[2940]: E0620 19:24:04.445280 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fd8d89cbd-v6v7p_calico-system(b9f1f0e9-d3be-4b86-a742-94818c1000f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fd8d89cbd-v6v7p_calico-system(b9f1f0e9-d3be-4b86-a742-94818c1000f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5949651d79ca85083d2b9efc45553438bb435465c3fe3d5ce107df7c9cdf31cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-fd8d89cbd-v6v7p" podUID="b9f1f0e9-d3be-4b86-a742-94818c1000f9" Jun 20 19:24:04.872207 systemd[1]: run-netns-cni\x2d9ded8b06\x2d44f6\x2df2d3\x2dbe85\x2da080924e7813.mount: Deactivated successfully. Jun 20 19:24:04.872265 systemd[1]: run-netns-cni\x2de8bc4abf\x2d7144\x2d26c1\x2dac32\x2dd035de29aa42.mount: Deactivated successfully. Jun 20 19:24:04.872300 systemd[1]: run-netns-cni\x2d9f436d15\x2d2567\x2d5f62\x2d3ac2\x2d5950d4ead7bc.mount: Deactivated successfully. Jun 20 19:24:04.872332 systemd[1]: run-netns-cni\x2d61a72424\x2d359f\x2d842f\x2da7cc\x2d9ee9c13272d6.mount: Deactivated successfully. 
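[Annotation] The run-netns-*.mount unit names in the cleanup entries above are systemd's escaped form of the CNI network-namespace paths: components are joined with '-' and literal '-' characters inside a component become '\x2d'. The sketch below is a simplified illustration (unit_from_path is a hypothetical helper); the authoritative rules are those of systemd-escape --path and cover more characters than shown here.

# Sketch of how the run-netns-*.mount unit names above derive from paths:
# '/' separates components, and '-' inside a component is escaped as '\x2d'.
def unit_from_path(path: str, suffix: str = ".mount") -> str:
    parts = [p.replace("-", "\\x2d") for p in path.strip("/").split("/")]
    return "-".join(parts) + suffix

print(unit_from_path("/run/netns/cni-9ded8b06-44f6-f2d3-be85-a080924e7813"))
# -> run-netns-cni\x2d9ded8b06\x2d44f6\x2df2d3\x2dbe85\x2da080924e7813.mount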
Jun 20 19:24:04.872367 systemd[1]: run-netns-cni\x2df0fb58e1\x2d093f\x2d21f6\x2da27a\x2d32b023dc239e.mount: Deactivated successfully. Jun 20 19:24:08.422436 systemd[1]: Started sshd@7-139.178.70.105:22-65.49.1.177:46179.service - OpenSSH per-connection server daemon (65.49.1.177:46179). Jun 20 19:24:08.855393 sshd[3923]: Invalid user from 65.49.1.177 port 46179 Jun 20 19:24:08.953820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447876686.mount: Deactivated successfully. Jun 20 19:24:09.124801 containerd[1641]: time="2025-06-20T19:24:09.124600524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:09.130148 containerd[1641]: time="2025-06-20T19:24:09.130018444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.1: active requests=0, bytes read=156518913" Jun 20 19:24:09.134702 containerd[1641]: time="2025-06-20T19:24:09.134471531Z" level=info msg="ImageCreate event name:\"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:09.141004 containerd[1641]: time="2025-06-20T19:24:09.140968979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:09.142375 containerd[1641]: time="2025-06-20T19:24:09.142219792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.1\" with image id \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:8da6d025e5cf2ff5080c801ac8611bedb513e5922500fcc8161d8164e4679597\", size \"156518775\" in 5.604378751s" Jun 20 19:24:09.142375 containerd[1641]: time="2025-06-20T19:24:09.142240177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.1\" returns image reference \"sha256:9ac26af2ca9c35e475f921a9bcf40c7c0ce106819208883b006e64c489251722\"" Jun 20 19:24:09.183392 containerd[1641]: time="2025-06-20T19:24:09.183365184Z" level=info msg="CreateContainer within sandbox \"f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 20 19:24:09.209437 containerd[1641]: time="2025-06-20T19:24:09.207774738Z" level=info msg="Container 46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:09.208655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696982592.mount: Deactivated successfully. 
Jun 20 19:24:09.247790 containerd[1641]: time="2025-06-20T19:24:09.247762931Z" level=info msg="CreateContainer within sandbox \"f648ae1f3517a4dbef3d76fe2f952976576d0904e1d8acfa17e050b3d208ca1f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\"" Jun 20 19:24:09.248163 containerd[1641]: time="2025-06-20T19:24:09.248066136Z" level=info msg="StartContainer for \"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\"" Jun 20 19:24:09.253841 containerd[1641]: time="2025-06-20T19:24:09.253817053Z" level=info msg="connecting to shim 46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4" address="unix:///run/containerd/s/1e85f8968203a1d39a3139c7fececa93fefa7fea45414fcc5fd3e15a56038b5d" protocol=ttrpc version=3 Jun 20 19:24:09.331786 systemd[1]: Started cri-containerd-46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4.scope - libcontainer container 46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4. Jun 20 19:24:09.359668 containerd[1641]: time="2025-06-20T19:24:09.359643790Z" level=info msg="StartContainer for \"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\" returns successfully" Jun 20 19:24:09.874723 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 20 19:24:09.891396 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 20 19:24:09.899265 containerd[1641]: time="2025-06-20T19:24:09.898647964Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\" id:\"f0d251fdc0c544e33c1f8564fb57cea19761e562b6ba80e6caad78bd612632fa\" pid:3974 exit_status:1 exited_at:{seconds:1750447449 nanos:894548564}" Jun 20 19:24:10.216878 kubelet[2940]: I0620 19:24:10.216784 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vjh79" podStartSLOduration=2.447669582 podStartE2EDuration="21.215879464s" podCreationTimestamp="2025-06-20 19:23:49 +0000 UTC" firstStartedPulling="2025-06-20 19:23:50.37455997 +0000 UTC m=+17.321947613" lastFinishedPulling="2025-06-20 19:24:09.142769851 +0000 UTC m=+36.090157495" observedRunningTime="2025-06-20 19:24:09.754990052 +0000 UTC m=+36.702377702" watchObservedRunningTime="2025-06-20 19:24:10.215879464 +0000 UTC m=+37.163267102" Jun 20 19:24:10.358190 kubelet[2940]: I0620 19:24:10.358156 2940 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-backend-key-pair\") pod \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\" (UID: \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\") " Jun 20 19:24:10.358190 kubelet[2940]: I0620 19:24:10.358189 2940 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72ff4\" (UniqueName: \"kubernetes.io/projected/b9f1f0e9-d3be-4b86-a742-94818c1000f9-kube-api-access-72ff4\") pod \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\" (UID: \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\") " Jun 20 19:24:10.358365 kubelet[2940]: I0620 19:24:10.358203 2940 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-ca-bundle\") pod \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\" (UID: \"b9f1f0e9-d3be-4b86-a742-94818c1000f9\") " Jun 20 19:24:10.363906 
kubelet[2940]: I0620 19:24:10.363871 2940 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b9f1f0e9-d3be-4b86-a742-94818c1000f9" (UID: "b9f1f0e9-d3be-4b86-a742-94818c1000f9"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:24:10.370497 systemd[1]: var-lib-kubelet-pods-b9f1f0e9\x2dd3be\x2d4b86\x2da742\x2d94818c1000f9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jun 20 19:24:10.372210 kubelet[2940]: I0620 19:24:10.372120 2940 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b9f1f0e9-d3be-4b86-a742-94818c1000f9" (UID: "b9f1f0e9-d3be-4b86-a742-94818c1000f9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:24:10.372256 kubelet[2940]: I0620 19:24:10.372176 2940 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f1f0e9-d3be-4b86-a742-94818c1000f9-kube-api-access-72ff4" (OuterVolumeSpecName: "kube-api-access-72ff4") pod "b9f1f0e9-d3be-4b86-a742-94818c1000f9" (UID: "b9f1f0e9-d3be-4b86-a742-94818c1000f9"). InnerVolumeSpecName "kube-api-access-72ff4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:24:10.373240 systemd[1]: var-lib-kubelet-pods-b9f1f0e9\x2dd3be\x2d4b86\x2da742\x2d94818c1000f9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d72ff4.mount: Deactivated successfully. Jun 20 19:24:10.458566 kubelet[2940]: I0620 19:24:10.458527 2940 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jun 20 19:24:10.458566 kubelet[2940]: I0620 19:24:10.458563 2940 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9f1f0e9-d3be-4b86-a742-94818c1000f9-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jun 20 19:24:10.458566 kubelet[2940]: I0620 19:24:10.458569 2940 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-72ff4\" (UniqueName: \"kubernetes.io/projected/b9f1f0e9-d3be-4b86-a742-94818c1000f9-kube-api-access-72ff4\") on node \"localhost\" DevicePath \"\"" Jun 20 19:24:10.561375 systemd[1]: Removed slice kubepods-besteffort-podb9f1f0e9_d3be_4b86_a742_94818c1000f9.slice - libcontainer container kubepods-besteffort-podb9f1f0e9_d3be_4b86_a742_94818c1000f9.slice. Jun 20 19:24:10.656571 systemd[1]: Created slice kubepods-besteffort-pod11aaebe0_41d6_4160_a867_9f3af782c5fd.slice - libcontainer container kubepods-besteffort-pod11aaebe0_41d6_4160_a867_9f3af782c5fd.slice. 
Jun 20 19:24:10.686976 containerd[1641]: time="2025-06-20T19:24:10.686949497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\" id:\"e51b71d39edead91c162f0a9d268d492537bc09914a786680d54bec00edf8464\" pid:4029 exit_status:1 exited_at:{seconds:1750447450 nanos:686648641}" Jun 20 19:24:10.759709 kubelet[2940]: I0620 19:24:10.759649 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11aaebe0-41d6-4160-a867-9f3af782c5fd-whisker-backend-key-pair\") pod \"whisker-76b5557d97-kknr4\" (UID: \"11aaebe0-41d6-4160-a867-9f3af782c5fd\") " pod="calico-system/whisker-76b5557d97-kknr4" Jun 20 19:24:10.759899 kubelet[2940]: I0620 19:24:10.759689 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11aaebe0-41d6-4160-a867-9f3af782c5fd-whisker-ca-bundle\") pod \"whisker-76b5557d97-kknr4\" (UID: \"11aaebe0-41d6-4160-a867-9f3af782c5fd\") " pod="calico-system/whisker-76b5557d97-kknr4" Jun 20 19:24:10.759899 kubelet[2940]: I0620 19:24:10.759866 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8mpp\" (UniqueName: \"kubernetes.io/projected/11aaebe0-41d6-4160-a867-9f3af782c5fd-kube-api-access-j8mpp\") pod \"whisker-76b5557d97-kknr4\" (UID: \"11aaebe0-41d6-4160-a867-9f3af782c5fd\") " pod="calico-system/whisker-76b5557d97-kknr4" Jun 20 19:24:10.961290 containerd[1641]: time="2025-06-20T19:24:10.961092038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b5557d97-kknr4,Uid:11aaebe0-41d6-4160-a867-9f3af782c5fd,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:11.227222 kubelet[2940]: I0620 19:24:11.227135 2940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f1f0e9-d3be-4b86-a742-94818c1000f9" path="/var/lib/kubelet/pods/b9f1f0e9-d3be-4b86-a742-94818c1000f9/volumes" Jun 20 19:24:11.449335 systemd-networkd[1546]: cali778b0962653: Link UP Jun 20 19:24:11.449475 systemd-networkd[1546]: cali778b0962653: Gained carrier Jun 20 19:24:11.464196 containerd[1641]: 2025-06-20 19:24:10.998 [INFO][4047] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jun 20 19:24:11.464196 containerd[1641]: 2025-06-20 19:24:11.041 [INFO][4047] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--76b5557d97--kknr4-eth0 whisker-76b5557d97- calico-system 11aaebe0-41d6-4160-a867-9f3af782c5fd 863 0 2025-06-20 19:24:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76b5557d97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-76b5557d97-kknr4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali778b0962653 [] [] }} ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" Pod="whisker-76b5557d97-kknr4" WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-" Jun 20 19:24:11.464196 containerd[1641]: 2025-06-20 19:24:11.041 [INFO][4047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" Pod="whisker-76b5557d97-kknr4" 
WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-eth0" Jun 20 19:24:11.464196 containerd[1641]: 2025-06-20 19:24:11.382 [INFO][4055] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" HandleID="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Workload="localhost-k8s-whisker--76b5557d97--kknr4-eth0" Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.383 [INFO][4055] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" HandleID="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Workload="localhost-k8s-whisker--76b5557d97--kknr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-76b5557d97-kknr4", "timestamp":"2025-06-20 19:24:11.382185157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.383 [INFO][4055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.384 [INFO][4055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.384 [INFO][4055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.401 [INFO][4055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" host="localhost" Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.417 [INFO][4055] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.420 [INFO][4055] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.423 [INFO][4055] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.425 [INFO][4055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:11.464364 containerd[1641]: 2025-06-20 19:24:11.425 [INFO][4055] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" host="localhost" Jun 20 19:24:11.464982 containerd[1641]: 2025-06-20 19:24:11.426 [INFO][4055] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7 Jun 20 19:24:11.464982 containerd[1641]: 2025-06-20 19:24:11.428 [INFO][4055] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" host="localhost" Jun 20 19:24:11.464982 containerd[1641]: 2025-06-20 19:24:11.435 [INFO][4055] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" host="localhost" Jun 20 19:24:11.464982 containerd[1641]: 2025-06-20 19:24:11.435 [INFO][4055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" host="localhost" Jun 20 19:24:11.464982 containerd[1641]: 2025-06-20 19:24:11.436 [INFO][4055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:11.464982 containerd[1641]: 2025-06-20 19:24:11.436 [INFO][4055] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" HandleID="k8s-pod-network.b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Workload="localhost-k8s-whisker--76b5557d97--kknr4-eth0" Jun 20 19:24:11.465792 containerd[1641]: 2025-06-20 19:24:11.438 [INFO][4047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" Pod="whisker-76b5557d97-kknr4" WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76b5557d97--kknr4-eth0", GenerateName:"whisker-76b5557d97-", Namespace:"calico-system", SelfLink:"", UID:"11aaebe0-41d6-4160-a867-9f3af782c5fd", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 24, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76b5557d97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-76b5557d97-kknr4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali778b0962653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:11.465792 containerd[1641]: 2025-06-20 19:24:11.438 [INFO][4047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" Pod="whisker-76b5557d97-kknr4" WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-eth0" Jun 20 19:24:11.465861 containerd[1641]: 2025-06-20 19:24:11.438 [INFO][4047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali778b0962653 ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" Pod="whisker-76b5557d97-kknr4" WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-eth0" Jun 20 19:24:11.465861 containerd[1641]: 2025-06-20 19:24:11.450 [INFO][4047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" 
Pod="whisker-76b5557d97-kknr4" WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-eth0" Jun 20 19:24:11.465903 containerd[1641]: 2025-06-20 19:24:11.450 [INFO][4047] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" Pod="whisker-76b5557d97-kknr4" WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--76b5557d97--kknr4-eth0", GenerateName:"whisker-76b5557d97-", Namespace:"calico-system", SelfLink:"", UID:"11aaebe0-41d6-4160-a867-9f3af782c5fd", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 24, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76b5557d97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7", Pod:"whisker-76b5557d97-kknr4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali778b0962653", MAC:"ae:13:92:38:a2:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:11.465951 containerd[1641]: 2025-06-20 19:24:11.460 [INFO][4047] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" Namespace="calico-system" Pod="whisker-76b5557d97-kknr4" WorkloadEndpoint="localhost-k8s-whisker--76b5557d97--kknr4-eth0" Jun 20 19:24:11.551997 containerd[1641]: time="2025-06-20T19:24:11.551879380Z" level=info msg="connecting to shim b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7" address="unix:///run/containerd/s/3fb8fbc262c103164f36e651921123e2015fa439f2fedf269da2d52204d50b48" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:11.574798 systemd[1]: Started cri-containerd-b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7.scope - libcontainer container b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7. 
Jun 20 19:24:11.589321 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:11.622939 containerd[1641]: time="2025-06-20T19:24:11.622872384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76b5557d97-kknr4,Uid:11aaebe0-41d6-4160-a867-9f3af782c5fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7\"" Jun 20 19:24:11.628880 containerd[1641]: time="2025-06-20T19:24:11.628855901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\"" Jun 20 19:24:11.755196 systemd-networkd[1546]: vxlan.calico: Link UP Jun 20 19:24:11.755201 systemd-networkd[1546]: vxlan.calico: Gained carrier Jun 20 19:24:12.357511 sshd[3923]: Connection closed by invalid user 65.49.1.177 port 46179 [preauth] Jun 20 19:24:12.359200 systemd[1]: sshd@7-139.178.70.105:22-65.49.1.177:46179.service: Deactivated successfully. Jun 20 19:24:12.881802 systemd-networkd[1546]: cali778b0962653: Gained IPv6LL Jun 20 19:24:13.073833 systemd-networkd[1546]: vxlan.calico: Gained IPv6LL Jun 20 19:24:13.247313 containerd[1641]: time="2025-06-20T19:24:13.247255239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:13.248166 containerd[1641]: time="2025-06-20T19:24:13.248153808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.1: active requests=0, bytes read=4661202" Jun 20 19:24:13.248523 containerd[1641]: time="2025-06-20T19:24:13.248506204Z" level=info msg="ImageCreate event name:\"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:13.249681 containerd[1641]: time="2025-06-20T19:24:13.249667646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:13.250130 containerd[1641]: time="2025-06-20T19:24:13.249941769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.1\" with image id \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:7f323954f2f741238d256690a674536bf562d4b4bd7cd6bab3c21a0a1327e1fc\", size \"6153897\" in 1.621060655s" Jun 20 19:24:13.250293 containerd[1641]: time="2025-06-20T19:24:13.250283971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.1\" returns image reference \"sha256:f9c2addb6553484a4cf8cf5e38959c95aff70d213991bb2626aab9eb9b0ce51c\"" Jun 20 19:24:13.252043 containerd[1641]: time="2025-06-20T19:24:13.252020217Z" level=info msg="CreateContainer within sandbox \"b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jun 20 19:24:13.257385 containerd[1641]: time="2025-06-20T19:24:13.256948478Z" level=info msg="Container 525c0f77c47b55b963966613d797d259eb08dc0f45ae6e5928d07cce42644e3c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:13.258532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211860689.mount: Deactivated successfully. 
Jun 20 19:24:13.270521 containerd[1641]: time="2025-06-20T19:24:13.270475676Z" level=info msg="CreateContainer within sandbox \"b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"525c0f77c47b55b963966613d797d259eb08dc0f45ae6e5928d07cce42644e3c\"" Jun 20 19:24:13.270893 containerd[1641]: time="2025-06-20T19:24:13.270771982Z" level=info msg="StartContainer for \"525c0f77c47b55b963966613d797d259eb08dc0f45ae6e5928d07cce42644e3c\"" Jun 20 19:24:13.272245 containerd[1641]: time="2025-06-20T19:24:13.271523342Z" level=info msg="connecting to shim 525c0f77c47b55b963966613d797d259eb08dc0f45ae6e5928d07cce42644e3c" address="unix:///run/containerd/s/3fb8fbc262c103164f36e651921123e2015fa439f2fedf269da2d52204d50b48" protocol=ttrpc version=3 Jun 20 19:24:13.290037 systemd[1]: Started cri-containerd-525c0f77c47b55b963966613d797d259eb08dc0f45ae6e5928d07cce42644e3c.scope - libcontainer container 525c0f77c47b55b963966613d797d259eb08dc0f45ae6e5928d07cce42644e3c. Jun 20 19:24:13.326471 containerd[1641]: time="2025-06-20T19:24:13.326433494Z" level=info msg="StartContainer for \"525c0f77c47b55b963966613d797d259eb08dc0f45ae6e5928d07cce42644e3c\" returns successfully" Jun 20 19:24:13.327619 containerd[1641]: time="2025-06-20T19:24:13.327564048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\"" Jun 20 19:24:15.294948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013374132.mount: Deactivated successfully. Jun 20 19:24:15.311634 containerd[1641]: time="2025-06-20T19:24:15.311124884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:15.312617 containerd[1641]: time="2025-06-20T19:24:15.312593912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.1: active requests=0, bytes read=33086345" Jun 20 19:24:15.313123 containerd[1641]: time="2025-06-20T19:24:15.313109070Z" level=info msg="ImageCreate event name:\"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:15.315057 containerd[1641]: time="2025-06-20T19:24:15.315042904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:15.315743 containerd[1641]: time="2025-06-20T19:24:15.315723724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" with image id \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:4b8bcb8b4fc05026ba811bf0b25b736086c1b8b26a83a9039a84dd3a06b06bd4\", size \"33086175\" in 1.988041508s" Jun 20 19:24:15.315743 containerd[1641]: time="2025-06-20T19:24:15.315742814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.1\" returns image reference \"sha256:a8d73c8fd22b3a7a28e9baab63169fb459bc504d71d871f96225c4f2d5e660a5\"" Jun 20 19:24:15.317606 containerd[1641]: time="2025-06-20T19:24:15.317586803Z" level=info msg="CreateContainer within sandbox \"b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jun 20 19:24:15.323502 containerd[1641]: 
time="2025-06-20T19:24:15.323438301Z" level=info msg="Container ab8089f4be4da5da8dbbeff1beec6f53337a1645596f6b99a788fe9d13c000cc: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:15.331724 containerd[1641]: time="2025-06-20T19:24:15.331702403Z" level=info msg="CreateContainer within sandbox \"b32a58c4565a166d760220893e8c3324ef0963b31c7c3f76a2eb9ac4f34a4df7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ab8089f4be4da5da8dbbeff1beec6f53337a1645596f6b99a788fe9d13c000cc\"" Jun 20 19:24:15.332224 containerd[1641]: time="2025-06-20T19:24:15.332208574Z" level=info msg="StartContainer for \"ab8089f4be4da5da8dbbeff1beec6f53337a1645596f6b99a788fe9d13c000cc\"" Jun 20 19:24:15.333193 containerd[1641]: time="2025-06-20T19:24:15.333160450Z" level=info msg="connecting to shim ab8089f4be4da5da8dbbeff1beec6f53337a1645596f6b99a788fe9d13c000cc" address="unix:///run/containerd/s/3fb8fbc262c103164f36e651921123e2015fa439f2fedf269da2d52204d50b48" protocol=ttrpc version=3 Jun 20 19:24:15.363951 systemd[1]: Started cri-containerd-ab8089f4be4da5da8dbbeff1beec6f53337a1645596f6b99a788fe9d13c000cc.scope - libcontainer container ab8089f4be4da5da8dbbeff1beec6f53337a1645596f6b99a788fe9d13c000cc. Jun 20 19:24:15.410281 containerd[1641]: time="2025-06-20T19:24:15.410242813Z" level=info msg="StartContainer for \"ab8089f4be4da5da8dbbeff1beec6f53337a1645596f6b99a788fe9d13c000cc\" returns successfully" Jun 20 19:24:15.632915 kubelet[2940]: I0620 19:24:15.623054 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-76b5557d97-kknr4" podStartSLOduration=1.930487937 podStartE2EDuration="5.623034648s" podCreationTimestamp="2025-06-20 19:24:10 +0000 UTC" firstStartedPulling="2025-06-20 19:24:11.623733075 +0000 UTC m=+38.571120719" lastFinishedPulling="2025-06-20 19:24:15.31627979 +0000 UTC m=+42.263667430" observedRunningTime="2025-06-20 19:24:15.615095107 +0000 UTC m=+42.562482757" watchObservedRunningTime="2025-06-20 19:24:15.623034648 +0000 UTC m=+42.570422293" Jun 20 19:24:16.226427 containerd[1641]: time="2025-06-20T19:24:16.226072504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98fb94684-dcwlf,Uid:da54477e-0dce-43c1-91c2-8f7a222c7684,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:16.226570 containerd[1641]: time="2025-06-20T19:24:16.226544282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhxxd,Uid:6532dc34-4519-4956-bfd1-87c2dd630634,Namespace:kube-system,Attempt:0,}" Jun 20 19:24:16.226727 containerd[1641]: time="2025-06-20T19:24:16.226660570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-8pm9l,Uid:98b29a09-d386-4da1-9313-3058130bfd33,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:17.250920 containerd[1641]: time="2025-06-20T19:24:17.250799778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-2w6xq,Uid:89c2012c-00da-48d2-9d38-40ca4aaf4dc5,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:17.356962 systemd-networkd[1546]: calidf20324f12b: Link UP Jun 20 19:24:17.357519 systemd-networkd[1546]: calidf20324f12b: Gained carrier Jun 20 19:24:17.377342 containerd[1641]: 2025-06-20 19:24:16.623 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0 goldmane-5bd85449d4- calico-system 98b29a09-d386-4da1-9313-3058130bfd33 797 0 2025-06-20 19:23:49 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5bd85449d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5bd85449d4-8pm9l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidf20324f12b [] [] }} ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-" Jun 20 19:24:17.377342 containerd[1641]: 2025-06-20 19:24:16.624 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" Jun 20 19:24:17.377342 containerd[1641]: 2025-06-20 19:24:17.287 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" HandleID="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Workload="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.290 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" HandleID="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Workload="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5bd85449d4-8pm9l", "timestamp":"2025-06-20 19:24:17.287247803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.290 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.290 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.290 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.305 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" host="localhost" Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.331 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.335 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.336 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.338 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.380121 containerd[1641]: 2025-06-20 19:24:17.338 [INFO][4430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" host="localhost" Jun 20 19:24:17.380349 containerd[1641]: 2025-06-20 19:24:17.339 [INFO][4430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf Jun 20 19:24:17.380349 containerd[1641]: 2025-06-20 19:24:17.342 [INFO][4430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" host="localhost" Jun 20 19:24:17.380349 containerd[1641]: 2025-06-20 19:24:17.348 [INFO][4430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" host="localhost" Jun 20 19:24:17.380349 containerd[1641]: 2025-06-20 19:24:17.348 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" host="localhost" Jun 20 19:24:17.380349 containerd[1641]: 2025-06-20 19:24:17.348 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:17.380349 containerd[1641]: 2025-06-20 19:24:17.348 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" HandleID="k8s-pod-network.4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Workload="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" Jun 20 19:24:17.382977 containerd[1641]: 2025-06-20 19:24:17.351 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"98b29a09-d386-4da1-9313-3058130bfd33", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5bd85449d4-8pm9l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf20324f12b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.382977 containerd[1641]: 2025-06-20 19:24:17.351 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" Jun 20 19:24:17.383061 containerd[1641]: 2025-06-20 19:24:17.352 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf20324f12b ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" Jun 20 19:24:17.383061 containerd[1641]: 2025-06-20 19:24:17.358 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" Jun 20 19:24:17.383095 containerd[1641]: 2025-06-20 19:24:17.358 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0", GenerateName:"goldmane-5bd85449d4-", Namespace:"calico-system", SelfLink:"", UID:"98b29a09-d386-4da1-9313-3058130bfd33", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5bd85449d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf", Pod:"goldmane-5bd85449d4-8pm9l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf20324f12b", MAC:"36:ef:6f:68:eb:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.383147 containerd[1641]: 2025-06-20 19:24:17.372 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" Namespace="calico-system" Pod="goldmane-5bd85449d4-8pm9l" WorkloadEndpoint="localhost-k8s-goldmane--5bd85449d4--8pm9l-eth0" Jun 20 19:24:17.456706 containerd[1641]: time="2025-06-20T19:24:17.456095598Z" level=info msg="connecting to shim 4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf" address="unix:///run/containerd/s/15e97a1563a3d73f8def6caed11f99c79b345ce0b4d2316a2c1d0b70687c4551" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:17.466431 systemd-networkd[1546]: cali1dc499b0fd9: Link UP Jun 20 19:24:17.467070 systemd-networkd[1546]: cali1dc499b0fd9: Gained carrier Jun 20 19:24:17.487356 containerd[1641]: 2025-06-20 19:24:16.623 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0 calico-kube-controllers-98fb94684- calico-system da54477e-0dce-43c1-91c2-8f7a222c7684 800 0 2025-06-20 19:23:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:98fb94684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-98fb94684-dcwlf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1dc499b0fd9 [] [] }} ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-" Jun 20 19:24:17.487356 containerd[1641]: 2025-06-20 19:24:16.623 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" 
Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" Jun 20 19:24:17.487356 containerd[1641]: 2025-06-20 19:24:17.287 [INFO][4429] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" HandleID="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Workload="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.289 [INFO][4429] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" HandleID="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Workload="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-98fb94684-dcwlf", "timestamp":"2025-06-20 19:24:17.287312702 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.289 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.349 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.350 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.405 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" host="localhost" Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.431 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.437 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.441 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.443 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.488581 containerd[1641]: 2025-06-20 19:24:17.443 [INFO][4429] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" host="localhost" Jun 20 19:24:17.489114 containerd[1641]: 2025-06-20 19:24:17.445 [INFO][4429] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729 Jun 20 19:24:17.489114 containerd[1641]: 2025-06-20 19:24:17.450 [INFO][4429] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" host="localhost" Jun 20 19:24:17.489114 containerd[1641]: 2025-06-20 19:24:17.459 [INFO][4429] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" host="localhost" Jun 20 19:24:17.489114 containerd[1641]: 2025-06-20 19:24:17.459 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" host="localhost" Jun 20 19:24:17.489114 containerd[1641]: 2025-06-20 19:24:17.459 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jun 20 19:24:17.489114 containerd[1641]: 2025-06-20 19:24:17.459 [INFO][4429] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" HandleID="k8s-pod-network.44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Workload="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" Jun 20 19:24:17.490680 containerd[1641]: 2025-06-20 19:24:17.462 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0", GenerateName:"calico-kube-controllers-98fb94684-", Namespace:"calico-system", SelfLink:"", UID:"da54477e-0dce-43c1-91c2-8f7a222c7684", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98fb94684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-98fb94684-dcwlf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1dc499b0fd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.490805 containerd[1641]: 2025-06-20 19:24:17.462 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" Jun 20 19:24:17.490805 containerd[1641]: 2025-06-20 19:24:17.462 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1dc499b0fd9 ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" Jun 20 19:24:17.490805 containerd[1641]: 2025-06-20 19:24:17.467 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" Jun 20 19:24:17.491034 containerd[1641]: 2025-06-20 19:24:17.467 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0", GenerateName:"calico-kube-controllers-98fb94684-", Namespace:"calico-system", SelfLink:"", UID:"da54477e-0dce-43c1-91c2-8f7a222c7684", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"98fb94684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729", Pod:"calico-kube-controllers-98fb94684-dcwlf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1dc499b0fd9", MAC:"02:44:83:59:78:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.491078 containerd[1641]: 2025-06-20 19:24:17.483 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" Namespace="calico-system" Pod="calico-kube-controllers-98fb94684-dcwlf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--98fb94684--dcwlf-eth0" Jun 20 19:24:17.499878 systemd[1]: Started cri-containerd-4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf.scope - libcontainer container 4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf. 
Jun 20 19:24:17.521276 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:17.524176 containerd[1641]: time="2025-06-20T19:24:17.523892818Z" level=info msg="connecting to shim 44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729" address="unix:///run/containerd/s/07ffa9c9fcfb23fc80560805f4fc0456387e19a8bdf6fa69f41af65542cc661d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:17.557845 systemd[1]: Started cri-containerd-44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729.scope - libcontainer container 44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729. Jun 20 19:24:17.570520 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:17.583988 containerd[1641]: time="2025-06-20T19:24:17.583480755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5bd85449d4-8pm9l,Uid:98b29a09-d386-4da1-9313-3058130bfd33,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf\"" Jun 20 19:24:17.591906 systemd-networkd[1546]: cali597f9778823: Link UP Jun 20 19:24:17.592377 systemd-networkd[1546]: cali597f9778823: Gained carrier Jun 20 19:24:17.615947 containerd[1641]: time="2025-06-20T19:24:17.615857209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\"" Jun 20 19:24:17.617561 containerd[1641]: 2025-06-20 19:24:16.623 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0 coredns-668d6bf9bc- kube-system 6532dc34-4519-4956-bfd1-87c2dd630634 794 0 2025-06-20 19:23:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-fhxxd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali597f9778823 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-" Jun 20 19:24:17.617561 containerd[1641]: 2025-06-20 19:24:16.623 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" Jun 20 19:24:17.617561 containerd[1641]: 2025-06-20 19:24:17.287 [INFO][4432] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" HandleID="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Workload="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.290 [INFO][4432] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" HandleID="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Workload="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f420), Attrs:map[string]string{"namespace":"kube-system", 
"node":"localhost", "pod":"coredns-668d6bf9bc-fhxxd", "timestamp":"2025-06-20 19:24:17.28716653 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.290 [INFO][4432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.459 [INFO][4432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.459 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.506 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" host="localhost" Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.536 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.541 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.546 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.549 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.617753 containerd[1641]: 2025-06-20 19:24:17.549 [INFO][4432] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" host="localhost" Jun 20 19:24:17.626323 containerd[1641]: 2025-06-20 19:24:17.550 [INFO][4432] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e Jun 20 19:24:17.626323 containerd[1641]: 2025-06-20 19:24:17.556 [INFO][4432] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" host="localhost" Jun 20 19:24:17.626323 containerd[1641]: 2025-06-20 19:24:17.585 [INFO][4432] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" host="localhost" Jun 20 19:24:17.626323 containerd[1641]: 2025-06-20 19:24:17.585 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" host="localhost" Jun 20 19:24:17.626323 containerd[1641]: 2025-06-20 19:24:17.585 [INFO][4432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:17.626323 containerd[1641]: 2025-06-20 19:24:17.585 [INFO][4432] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" HandleID="k8s-pod-network.7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Workload="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" Jun 20 19:24:17.626616 containerd[1641]: 2025-06-20 19:24:17.588 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6532dc34-4519-4956-bfd1-87c2dd630634", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-fhxxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali597f9778823", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.627257 containerd[1641]: 2025-06-20 19:24:17.588 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" Jun 20 19:24:17.627257 containerd[1641]: 2025-06-20 19:24:17.588 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali597f9778823 ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" Jun 20 19:24:17.627257 containerd[1641]: 2025-06-20 19:24:17.595 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" Jun 20 19:24:17.627340 
containerd[1641]: 2025-06-20 19:24:17.596 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6532dc34-4519-4956-bfd1-87c2dd630634", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e", Pod:"coredns-668d6bf9bc-fhxxd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali597f9778823", MAC:"26:f8:ec:e7:cd:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.627340 containerd[1641]: 2025-06-20 19:24:17.614 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhxxd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhxxd-eth0" Jun 20 19:24:17.627340 containerd[1641]: time="2025-06-20T19:24:17.619923291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-98fb94684-dcwlf,Uid:da54477e-0dce-43c1-91c2-8f7a222c7684,Namespace:calico-system,Attempt:0,} returns sandbox id \"44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729\"" Jun 20 19:24:17.689292 systemd-networkd[1546]: cali6a225a154a4: Link UP Jun 20 19:24:17.689912 systemd-networkd[1546]: cali6a225a154a4: Gained carrier Jun 20 19:24:17.706926 containerd[1641]: time="2025-06-20T19:24:17.706882536Z" level=info msg="connecting to shim 7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e" address="unix:///run/containerd/s/867064f54c4bfde6d4ac2c00f68d77bd18d1a3059df1714471b902a7b356d183" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.396 [INFO][4455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0 calico-apiserver-6647f6b4b7- calico-apiserver 89c2012c-00da-48d2-9d38-40ca4aaf4dc5 799 0 2025-06-20 19:23:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6647f6b4b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6647f6b4b7-2w6xq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a225a154a4 [] [] }} ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.397 [INFO][4455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.420 [INFO][4480] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" HandleID="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Workload="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.420 [INFO][4480] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" HandleID="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Workload="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6647f6b4b7-2w6xq", "timestamp":"2025-06-20 19:24:17.420209624 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.420 [INFO][4480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.585 [INFO][4480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.585 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.605 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.631 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.641 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.643 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.650 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.650 [INFO][4480] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.651 [INFO][4480] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.672 [INFO][4480] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.683 [INFO][4480] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.683 [INFO][4480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" host="localhost" Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.683 [INFO][4480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:17.708931 containerd[1641]: 2025-06-20 19:24:17.683 [INFO][4480] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" HandleID="k8s-pod-network.08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Workload="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" Jun 20 19:24:17.728481 containerd[1641]: 2025-06-20 19:24:17.685 [INFO][4455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0", GenerateName:"calico-apiserver-6647f6b4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"89c2012c-00da-48d2-9d38-40ca4aaf4dc5", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6647f6b4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6647f6b4b7-2w6xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a225a154a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.728481 containerd[1641]: 2025-06-20 19:24:17.685 [INFO][4455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" Jun 20 19:24:17.728481 containerd[1641]: 2025-06-20 19:24:17.685 [INFO][4455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a225a154a4 ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" Jun 20 19:24:17.728481 containerd[1641]: 2025-06-20 19:24:17.690 [INFO][4455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" Jun 20 19:24:17.728481 containerd[1641]: 2025-06-20 19:24:17.690 [INFO][4455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0", GenerateName:"calico-apiserver-6647f6b4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"89c2012c-00da-48d2-9d38-40ca4aaf4dc5", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6647f6b4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c", Pod:"calico-apiserver-6647f6b4b7-2w6xq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a225a154a4", MAC:"ee:c6:47:38:68:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:17.728481 containerd[1641]: 2025-06-20 19:24:17.706 [INFO][4455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-2w6xq" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--2w6xq-eth0" Jun 20 19:24:17.727857 systemd[1]: Started cri-containerd-7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e.scope - libcontainer container 7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e. 
Jun 20 19:24:17.737477 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:17.785746 containerd[1641]: time="2025-06-20T19:24:17.784822896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhxxd,Uid:6532dc34-4519-4956-bfd1-87c2dd630634,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e\"" Jun 20 19:24:17.796459 containerd[1641]: time="2025-06-20T19:24:17.796431616Z" level=info msg="CreateContainer within sandbox \"7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:24:17.806109 containerd[1641]: time="2025-06-20T19:24:17.806067723Z" level=info msg="connecting to shim 08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c" address="unix:///run/containerd/s/5cd0a5b84c65fc47ee5599b0df131ee6fad4098b89b74f297c179716b6e1d899" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:17.823880 systemd[1]: Started cri-containerd-08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c.scope - libcontainer container 08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c. Jun 20 19:24:17.838765 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:17.854453 containerd[1641]: time="2025-06-20T19:24:17.854410381Z" level=info msg="Container 5b7e090607bbc348959ebbdeae6bdb97f8180d4dc69d2f2976572c912825d24a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:17.864711 containerd[1641]: time="2025-06-20T19:24:17.864673834Z" level=info msg="CreateContainer within sandbox \"7c4d748408175f63fc38e69e2e685d4f7f09dec6ae942eebbadefd4814fe157e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b7e090607bbc348959ebbdeae6bdb97f8180d4dc69d2f2976572c912825d24a\"" Jun 20 19:24:17.865783 containerd[1641]: time="2025-06-20T19:24:17.865629349Z" level=info msg="StartContainer for \"5b7e090607bbc348959ebbdeae6bdb97f8180d4dc69d2f2976572c912825d24a\"" Jun 20 19:24:17.867165 containerd[1641]: time="2025-06-20T19:24:17.867034743Z" level=info msg="connecting to shim 5b7e090607bbc348959ebbdeae6bdb97f8180d4dc69d2f2976572c912825d24a" address="unix:///run/containerd/s/867064f54c4bfde6d4ac2c00f68d77bd18d1a3059df1714471b902a7b356d183" protocol=ttrpc version=3 Jun 20 19:24:17.893227 systemd[1]: Started cri-containerd-5b7e090607bbc348959ebbdeae6bdb97f8180d4dc69d2f2976572c912825d24a.scope - libcontainer container 5b7e090607bbc348959ebbdeae6bdb97f8180d4dc69d2f2976572c912825d24a. 
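The "connecting to shim" and "Started cri-containerd-….scope" messages show containerd creating sandbox and container tasks in the k8s.io namespace over ttrpc. As one way to inspect that state from Go, the sketch below uses the containerd client library; import paths and helper names vary between containerd releases, so treat this as an assumption-laden example rather than a drop-in tool.

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same socket the kubelet's CRI integration talks to on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed sandboxes and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		status := "no running task"
		if task, err := c.Task(ctx, nil); err == nil {
			if st, err := task.Status(ctx); err == nil {
				status = string(st.Status)
			}
		}
		fmt.Printf("%s\t%s\n", c.ID(), status)
	}
}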
Jun 20 19:24:17.919773 containerd[1641]: time="2025-06-20T19:24:17.919576754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-2w6xq,Uid:89c2012c-00da-48d2-9d38-40ca4aaf4dc5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c\"" Jun 20 19:24:17.956384 containerd[1641]: time="2025-06-20T19:24:17.956354696Z" level=info msg="StartContainer for \"5b7e090607bbc348959ebbdeae6bdb97f8180d4dc69d2f2976572c912825d24a\" returns successfully" Jun 20 19:24:18.224429 containerd[1641]: time="2025-06-20T19:24:18.224320818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w9w9,Uid:de04d4f2-cb39-4862-aa3d-bec53c847188,Namespace:calico-system,Attempt:0,}" Jun 20 19:24:18.395555 systemd-networkd[1546]: cali650b9cfc77d: Link UP Jun 20 19:24:18.396341 systemd-networkd[1546]: cali650b9cfc77d: Gained carrier Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.285 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4w9w9-eth0 csi-node-driver- calico-system de04d4f2-cb39-4862-aa3d-bec53c847188 680 0 2025-06-20 19:23:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85b8c9d4df k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4w9w9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali650b9cfc77d [] [] }} ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.285 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-eth0" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.325 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" HandleID="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Workload="localhost-k8s-csi--node--driver--4w9w9-eth0" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.325 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" HandleID="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Workload="localhost-k8s-csi--node--driver--4w9w9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4w9w9", "timestamp":"2025-06-20 19:24:18.325640797 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.325 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.325 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.325 [INFO][4740] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.331 [INFO][4740] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.335 [INFO][4740] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.342 [INFO][4740] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.343 [INFO][4740] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.348 [INFO][4740] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.348 [INFO][4740] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.350 [INFO][4740] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3 Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.367 [INFO][4740] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.391 [INFO][4740] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.391 [INFO][4740] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" host="localhost" Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.392 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:18.418031 containerd[1641]: 2025-06-20 19:24:18.392 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" HandleID="k8s-pod-network.81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Workload="localhost-k8s-csi--node--driver--4w9w9-eth0" Jun 20 19:24:18.418609 containerd[1641]: 2025-06-20 19:24:18.393 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4w9w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de04d4f2-cb39-4862-aa3d-bec53c847188", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4w9w9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali650b9cfc77d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:18.418609 containerd[1641]: 2025-06-20 19:24:18.393 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-eth0" Jun 20 19:24:18.418609 containerd[1641]: 2025-06-20 19:24:18.393 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali650b9cfc77d ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-eth0" Jun 20 19:24:18.418609 containerd[1641]: 2025-06-20 19:24:18.396 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-eth0" Jun 20 19:24:18.418609 containerd[1641]: 2025-06-20 19:24:18.396 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4w9w9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de04d4f2-cb39-4862-aa3d-bec53c847188", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85b8c9d4df", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3", Pod:"csi-node-driver-4w9w9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali650b9cfc77d", MAC:"c2:a7:3d:6f:2b:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:18.418609 containerd[1641]: 2025-06-20 19:24:18.416 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" Namespace="calico-system" Pod="csi-node-driver-4w9w9" WorkloadEndpoint="localhost-k8s-csi--node--driver--4w9w9-eth0" Jun 20 19:24:18.514127 systemd-networkd[1546]: calidf20324f12b: Gained IPv6LL Jun 20 19:24:18.641804 systemd-networkd[1546]: cali1dc499b0fd9: Gained IPv6LL Jun 20 19:24:18.833826 systemd-networkd[1546]: cali597f9778823: Gained IPv6LL Jun 20 19:24:18.901972 containerd[1641]: time="2025-06-20T19:24:18.901944246Z" level=info msg="connecting to shim 81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3" address="unix:///run/containerd/s/2e31cca298b0bee2ab83a83f6b4d816a426f7c3dbf87a1d581ae8fc567aac8f1" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:18.920800 systemd[1]: Started cri-containerd-81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3.scope - libcontainer container 81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3. 
Jun 20 19:24:18.928898 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:18.942438 containerd[1641]: time="2025-06-20T19:24:18.942413658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4w9w9,Uid:de04d4f2-cb39-4862-aa3d-bec53c847188,Namespace:calico-system,Attempt:0,} returns sandbox id \"81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3\"" Jun 20 19:24:19.225291 containerd[1641]: time="2025-06-20T19:24:19.224365835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-8zcwx,Uid:8fe0a6ae-35c9-4521-95a4-fc8967e01b20,Namespace:calico-apiserver,Attempt:0,}" Jun 20 19:24:19.420795 systemd-networkd[1546]: cali90c343583db: Link UP Jun 20 19:24:19.421289 systemd-networkd[1546]: cali90c343583db: Gained carrier Jun 20 19:24:19.442368 kubelet[2940]: I0620 19:24:19.442162 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fhxxd" podStartSLOduration=41.442140576 podStartE2EDuration="41.442140576s" podCreationTimestamp="2025-06-20 19:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:24:18.663968754 +0000 UTC m=+45.611356397" watchObservedRunningTime="2025-06-20 19:24:19.442140576 +0000 UTC m=+46.389528221" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.348 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0 calico-apiserver-6647f6b4b7- calico-apiserver 8fe0a6ae-35c9-4521-95a4-fc8967e01b20 796 0 2025-06-20 19:23:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6647f6b4b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6647f6b4b7-8zcwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90c343583db [] [] }} ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.348 [INFO][4814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.365 [INFO][4822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" HandleID="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Workload="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.365 [INFO][4822] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" HandleID="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Workload="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6647f6b4b7-8zcwx", "timestamp":"2025-06-20 19:24:19.365756949 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.365 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.365 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.366 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.371 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.376 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.379 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.381 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.383 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.383 [INFO][4822] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.384 [INFO][4822] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747 Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.394 [INFO][4822] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.409 [INFO][4822] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.409 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" host="localhost" Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.409 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:19.445185 containerd[1641]: 2025-06-20 19:24:19.409 [INFO][4822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" HandleID="k8s-pod-network.efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Workload="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" Jun 20 19:24:19.473616 containerd[1641]: 2025-06-20 19:24:19.412 [INFO][4814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0", GenerateName:"calico-apiserver-6647f6b4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fe0a6ae-35c9-4521-95a4-fc8967e01b20", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6647f6b4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6647f6b4b7-8zcwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90c343583db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:19.473616 containerd[1641]: 2025-06-20 19:24:19.413 [INFO][4814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" Jun 20 19:24:19.473616 containerd[1641]: 2025-06-20 19:24:19.413 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90c343583db ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" Jun 20 19:24:19.473616 containerd[1641]: 2025-06-20 19:24:19.421 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" Jun 20 19:24:19.473616 containerd[1641]: 2025-06-20 19:24:19.421 [INFO][4814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0", GenerateName:"calico-apiserver-6647f6b4b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"8fe0a6ae-35c9-4521-95a4-fc8967e01b20", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6647f6b4b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747", Pod:"calico-apiserver-6647f6b4b7-8zcwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90c343583db", MAC:"da:34:92:cc:94:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:19.473616 containerd[1641]: 2025-06-20 19:24:19.441 [INFO][4814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" Namespace="calico-apiserver" Pod="calico-apiserver-6647f6b4b7-8zcwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6647f6b4b7--8zcwx-eth0" Jun 20 19:24:19.536930 containerd[1641]: time="2025-06-20T19:24:19.536830336Z" level=info msg="connecting to shim efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747" address="unix:///run/containerd/s/b1cf38c1da3f1b3fdcdb52eef8e901fa55111c17c83e5b3236d12a05c674c1f7" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:19.557888 systemd[1]: Started cri-containerd-efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747.scope - libcontainer container efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747. 
Jun 20 19:24:19.580176 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:19.603123 systemd-networkd[1546]: cali6a225a154a4: Gained IPv6LL Jun 20 19:24:19.613847 containerd[1641]: time="2025-06-20T19:24:19.613816632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6647f6b4b7-8zcwx,Uid:8fe0a6ae-35c9-4521-95a4-fc8967e01b20,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747\"" Jun 20 19:24:20.225260 containerd[1641]: time="2025-06-20T19:24:20.225172117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bl9m6,Uid:4412edbc-0778-486f-a5f0-73708cc96255,Namespace:kube-system,Attempt:0,}" Jun 20 19:24:20.304745 systemd-networkd[1546]: calia050b94eb14: Link UP Jun 20 19:24:20.304867 systemd-networkd[1546]: calia050b94eb14: Gained carrier Jun 20 19:24:20.306126 systemd-networkd[1546]: cali650b9cfc77d: Gained IPv6LL Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.255 [INFO][4885] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0 coredns-668d6bf9bc- kube-system 4412edbc-0778-486f-a5f0-73708cc96255 795 0 2025-06-20 19:23:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-bl9m6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia050b94eb14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.260 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.279 [INFO][4897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" HandleID="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Workload="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.279 [INFO][4897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" HandleID="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Workload="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-bl9m6", "timestamp":"2025-06-20 19:24:20.279492818 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.279 [INFO][4897] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.279 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.279 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.283 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.286 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.289 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.290 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.292 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.292 [INFO][4897] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.293 [INFO][4897] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.296 [INFO][4897] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.300 [INFO][4897] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.300 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" host="localhost" Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.300 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jun 20 19:24:20.321442 containerd[1641]: 2025-06-20 19:24:20.300 [INFO][4897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" HandleID="k8s-pod-network.5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Workload="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" Jun 20 19:24:20.328206 containerd[1641]: 2025-06-20 19:24:20.302 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4412edbc-0778-486f-a5f0-73708cc96255", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-bl9m6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia050b94eb14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:20.328206 containerd[1641]: 2025-06-20 19:24:20.302 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" Jun 20 19:24:20.328206 containerd[1641]: 2025-06-20 19:24:20.302 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia050b94eb14 ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" Jun 20 19:24:20.328206 containerd[1641]: 2025-06-20 19:24:20.304 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" Jun 20 19:24:20.328206 
containerd[1641]: 2025-06-20 19:24:20.307 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4412edbc-0778-486f-a5f0-73708cc96255", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.June, 20, 19, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c", Pod:"coredns-668d6bf9bc-bl9m6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia050b94eb14", MAC:"fa:f4:46:6c:af:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jun 20 19:24:20.328206 containerd[1641]: 2025-06-20 19:24:20.317 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" Namespace="kube-system" Pod="coredns-668d6bf9bc-bl9m6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--bl9m6-eth0" Jun 20 19:24:20.369740 containerd[1641]: time="2025-06-20T19:24:20.369640062Z" level=info msg="connecting to shim 5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c" address="unix:///run/containerd/s/e17d90ababd27e142c5373fcadebf24ea3631e7a5aa2e08aa0693b463ef94a90" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:24:20.389895 systemd[1]: Started cri-containerd-5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c.scope - libcontainer container 5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c. 
Jun 20 19:24:20.406650 systemd-resolved[1501]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:24:20.441370 containerd[1641]: time="2025-06-20T19:24:20.441337359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bl9m6,Uid:4412edbc-0778-486f-a5f0-73708cc96255,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c\"" Jun 20 19:24:20.445546 containerd[1641]: time="2025-06-20T19:24:20.445510156Z" level=info msg="CreateContainer within sandbox \"5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:24:20.451353 containerd[1641]: time="2025-06-20T19:24:20.451312796Z" level=info msg="Container ea2d7dc0c4257d6be015eda6f6e10fe27d69a9a656fc67c7892dd37558fee097: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:20.454916 containerd[1641]: time="2025-06-20T19:24:20.454863132Z" level=info msg="CreateContainer within sandbox \"5b86f418914440147ceae6d3b3c97596d2ad1e728dd1a30b4af6d97f9d63b22c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea2d7dc0c4257d6be015eda6f6e10fe27d69a9a656fc67c7892dd37558fee097\"" Jun 20 19:24:20.456894 containerd[1641]: time="2025-06-20T19:24:20.455725238Z" level=info msg="StartContainer for \"ea2d7dc0c4257d6be015eda6f6e10fe27d69a9a656fc67c7892dd37558fee097\"" Jun 20 19:24:20.456894 containerd[1641]: time="2025-06-20T19:24:20.456433118Z" level=info msg="connecting to shim ea2d7dc0c4257d6be015eda6f6e10fe27d69a9a656fc67c7892dd37558fee097" address="unix:///run/containerd/s/e17d90ababd27e142c5373fcadebf24ea3631e7a5aa2e08aa0693b463ef94a90" protocol=ttrpc version=3 Jun 20 19:24:20.474876 systemd[1]: Started cri-containerd-ea2d7dc0c4257d6be015eda6f6e10fe27d69a9a656fc67c7892dd37558fee097.scope - libcontainer container ea2d7dc0c4257d6be015eda6f6e10fe27d69a9a656fc67c7892dd37558fee097. Jun 20 19:24:20.499907 containerd[1641]: time="2025-06-20T19:24:20.499622659Z" level=info msg="StartContainer for \"ea2d7dc0c4257d6be015eda6f6e10fe27d69a9a656fc67c7892dd37558fee097\" returns successfully" Jun 20 19:24:20.638314 kubelet[2940]: I0620 19:24:20.637987 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bl9m6" podStartSLOduration=42.637976869 podStartE2EDuration="42.637976869s" podCreationTimestamp="2025-06-20 19:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:24:20.63763166 +0000 UTC m=+47.585019310" watchObservedRunningTime="2025-06-20 19:24:20.637976869 +0000 UTC m=+47.585364513" Jun 20 19:24:21.201989 systemd-networkd[1546]: cali90c343583db: Gained IPv6LL Jun 20 19:24:21.777808 systemd-networkd[1546]: calia050b94eb14: Gained IPv6LL Jun 20 19:24:23.440227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082512606.mount: Deactivated successfully. 
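The kubelet's pod_startup_latency_tracker entries above report podStartSLOduration as the gap between the pod's creation timestamp and the observed running time: 42.637976869s for coredns-668d6bf9bc-bl9m6, created at 19:23:38 and observed running at 19:24:20.637976869. A minimal Go check of that arithmetic, assuming the timestamps are exactly as printed in the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet's printed timestamps.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-06-20 19:23:38 +0000 UTC")
	running, _ := time.Parse(layout, "2025-06-20 19:24:20.637976869 +0000 UTC")
	fmt.Println(running.Sub(created)) // 42.637976869s
}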
Jun 20 19:24:24.741195 containerd[1641]: time="2025-06-20T19:24:24.741054354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.1: active requests=0, bytes read=66352249" Jun 20 19:24:24.770709 containerd[1641]: time="2025-06-20T19:24:24.770656761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:24.774132 containerd[1641]: time="2025-06-20T19:24:24.771154293Z" level=info msg="ImageCreate event name:\"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:24.774386 containerd[1641]: time="2025-06-20T19:24:24.774361130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:24.775865 containerd[1641]: time="2025-06-20T19:24:24.775841995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" with image id \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:173a10ef7a65a843f99fc366c7c860fa4068a8f52fda1b30ee589bc4ca43f45a\", size \"66352095\" in 7.158924106s" Jun 20 19:24:24.775950 containerd[1641]: time="2025-06-20T19:24:24.775940476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.1\" returns image reference \"sha256:7ded2fef2b18e2077114599de13fa300df0e1437753deab5c59843a86d2dad82\"" Jun 20 19:24:24.813230 containerd[1641]: time="2025-06-20T19:24:24.813203949Z" level=info msg="CreateContainer within sandbox \"4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jun 20 19:24:24.820916 containerd[1641]: time="2025-06-20T19:24:24.820871863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\"" Jun 20 19:24:24.828892 containerd[1641]: time="2025-06-20T19:24:24.828846365Z" level=info msg="Container 06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:24.843026 containerd[1641]: time="2025-06-20T19:24:24.842986979Z" level=info msg="CreateContainer within sandbox \"4dda986162c86118a5330fc7d25924568895b5a5c6b778c135c0c2d3f1f34ccf\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\"" Jun 20 19:24:24.845937 containerd[1641]: time="2025-06-20T19:24:24.845202089Z" level=info msg="StartContainer for \"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\"" Jun 20 19:24:24.846861 containerd[1641]: time="2025-06-20T19:24:24.846821266Z" level=info msg="connecting to shim 06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2" address="unix:///run/containerd/s/15e97a1563a3d73f8def6caed11f99c79b345ce0b4d2316a2c1d0b70687c4551" protocol=ttrpc version=3 Jun 20 19:24:24.900876 systemd[1]: Started cri-containerd-06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2.scope - libcontainer container 06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2. 
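The goldmane pull above reports an image of 66,352,095 bytes fetched in 7.158924106s. A quick back-of-the-envelope throughput estimate follows; it treats the reported image size as the amount actually transferred, which ignores layer caching and registry-side compression, so it is only a rough bound.

package main

import "fmt"

func main() {
	const bytesPulled = 66352095 // size reported for the goldmane image
	const seconds = 7.158924106  // pull duration from the log
	mbPerSec := bytesPulled / seconds / 1e6
	fmt.Printf("%.1f MB/s\n", mbPerSec) // ~9.3 MB/s
}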
Jun 20 19:24:24.974484 containerd[1641]: time="2025-06-20T19:24:24.974452230Z" level=info msg="StartContainer for \"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" returns successfully" Jun 20 19:24:25.774081 kubelet[2940]: I0620 19:24:25.773820 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5bd85449d4-8pm9l" podStartSLOduration=29.55092379 podStartE2EDuration="36.773801441s" podCreationTimestamp="2025-06-20 19:23:49 +0000 UTC" firstStartedPulling="2025-06-20 19:24:17.585425241 +0000 UTC m=+44.532812882" lastFinishedPulling="2025-06-20 19:24:24.808302876 +0000 UTC m=+51.755690533" observedRunningTime="2025-06-20 19:24:25.772375737 +0000 UTC m=+52.719763399" watchObservedRunningTime="2025-06-20 19:24:25.773801441 +0000 UTC m=+52.721189083" Jun 20 19:24:25.886052 containerd[1641]: time="2025-06-20T19:24:25.886018298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"cc82205fc430f487e7415e67de15724e6cf7d86874b73ba0ceb24a8bb4c669e2\" pid:5071 exit_status:1 exited_at:{seconds:1750447465 nanos:885514838}" Jun 20 19:24:26.941572 containerd[1641]: time="2025-06-20T19:24:26.941530332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"ee23908535b207fb1fef0dd6b5978c4a00b90ec92879bf82541b5301d0795567\" pid:5094 exit_status:1 exited_at:{seconds:1750447466 nanos:941362383}" Jun 20 19:24:27.230003 containerd[1641]: time="2025-06-20T19:24:27.229741105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"b753effbf37723ffd95bfc90ba99f409b3c56fdb49ad88a7fe8b995652b96fb0\" pid:5118 exited_at:{seconds:1750447467 nanos:229480753}" Jun 20 19:24:27.719009 containerd[1641]: time="2025-06-20T19:24:27.718972492Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"3e836ca164f6fd3d31c5d0bf311835e414b5a2347b3cb9b73ddb384d78fbb20f\" pid:5139 exit_status:1 exited_at:{seconds:1750447467 nanos:718782579}" Jun 20 19:24:32.957237 containerd[1641]: time="2025-06-20T19:24:32.957183310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:33.023196 containerd[1641]: time="2025-06-20T19:24:33.023161485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.1: active requests=0, bytes read=51246233" Jun 20 19:24:33.077760 containerd[1641]: time="2025-06-20T19:24:33.077687831Z" level=info msg="ImageCreate event name:\"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:33.134964 containerd[1641]: time="2025-06-20T19:24:33.134884428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:33.135680 containerd[1641]: time="2025-06-20T19:24:33.135386887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" with image id \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:5a988b0c09389a083a7f37e3f14e361659f0bcf538c01d50e9f785671a7d9b20\", size \"52738904\" in 8.314465162s" Jun 20 19:24:33.135680 containerd[1641]: time="2025-06-20T19:24:33.135413065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.1\" returns image reference \"sha256:6df5d7da55b19142ea456ddaa7f49909709419c92a39991e84b0f6708f953d73\"" Jun 20 19:24:33.168608 containerd[1641]: time="2025-06-20T19:24:33.168579757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:24:34.007683 containerd[1641]: time="2025-06-20T19:24:34.007476470Z" level=info msg="CreateContainer within sandbox \"44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 20 19:24:34.060701 containerd[1641]: time="2025-06-20T19:24:34.060658200Z" level=info msg="Container c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:34.064003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2841015890.mount: Deactivated successfully. Jun 20 19:24:34.113276 containerd[1641]: time="2025-06-20T19:24:34.113236795Z" level=info msg="CreateContainer within sandbox \"44abcc94551bc0e8690d5204f448e0620fe9cef2772512fe9468cb1892249729\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2\"" Jun 20 19:24:34.224322 containerd[1641]: time="2025-06-20T19:24:34.224276058Z" level=info msg="StartContainer for \"c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2\"" Jun 20 19:24:34.226492 containerd[1641]: time="2025-06-20T19:24:34.226446157Z" level=info msg="connecting to shim c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2" address="unix:///run/containerd/s/07ffa9c9fcfb23fc80560805f4fc0456387e19a8bdf6fa69f41af65542cc661d" protocol=ttrpc version=3 Jun 20 19:24:34.309927 systemd[1]: Started cri-containerd-c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2.scope - libcontainer container c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2. 
Jun 20 19:24:34.384346 containerd[1641]: time="2025-06-20T19:24:34.384324786Z" level=info msg="StartContainer for \"c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2\" returns successfully" Jun 20 19:24:35.366094 kubelet[2940]: I0620 19:24:35.353367 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-98fb94684-dcwlf" podStartSLOduration=29.836113935 podStartE2EDuration="45.351719684s" podCreationTimestamp="2025-06-20 19:23:50 +0000 UTC" firstStartedPulling="2025-06-20 19:24:17.620511366 +0000 UTC m=+44.567899008" lastFinishedPulling="2025-06-20 19:24:33.136117112 +0000 UTC m=+60.083504757" observedRunningTime="2025-06-20 19:24:35.344436631 +0000 UTC m=+62.291824272" watchObservedRunningTime="2025-06-20 19:24:35.351719684 +0000 UTC m=+62.299107328" Jun 20 19:24:35.393840 containerd[1641]: time="2025-06-20T19:24:35.393816582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2\" id:\"2cd2aa274675fa512eb16ab25fde45a3bd709ba1842b60907e694067a4530501\" pid:5227 exited_at:{seconds:1750447475 nanos:384201922}" Jun 20 19:24:36.145246 containerd[1641]: time="2025-06-20T19:24:36.145207820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:36.172991 containerd[1641]: time="2025-06-20T19:24:36.172942033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=47305653" Jun 20 19:24:36.185455 containerd[1641]: time="2025-06-20T19:24:36.185406767Z" level=info msg="ImageCreate event name:\"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:36.204311 containerd[1641]: time="2025-06-20T19:24:36.204253984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:36.204823 containerd[1641]: time="2025-06-20T19:24:36.204749533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 3.036143389s" Jun 20 19:24:36.204823 containerd[1641]: time="2025-06-20T19:24:36.204768520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:24:36.205501 containerd[1641]: time="2025-06-20T19:24:36.205488522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\"" Jun 20 19:24:36.217129 containerd[1641]: time="2025-06-20T19:24:36.216452033Z" level=info msg="CreateContainer within sandbox \"08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:24:36.222337 containerd[1641]: time="2025-06-20T19:24:36.222308975Z" level=info msg="Container 5350a63dc1d2f92d88d640b67ec8c631afbdabcaa789b8fd598c1d70233caecb: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:36.241433 containerd[1641]: time="2025-06-20T19:24:36.241406022Z" 
level=info msg="CreateContainer within sandbox \"08e74fc80092fa814d8f95ff8f4fdd1df1a0a5d3ba0e19eb6e77321af4a8082c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5350a63dc1d2f92d88d640b67ec8c631afbdabcaa789b8fd598c1d70233caecb\"" Jun 20 19:24:36.260888 containerd[1641]: time="2025-06-20T19:24:36.260865269Z" level=info msg="StartContainer for \"5350a63dc1d2f92d88d640b67ec8c631afbdabcaa789b8fd598c1d70233caecb\"" Jun 20 19:24:36.261679 containerd[1641]: time="2025-06-20T19:24:36.261664334Z" level=info msg="connecting to shim 5350a63dc1d2f92d88d640b67ec8c631afbdabcaa789b8fd598c1d70233caecb" address="unix:///run/containerd/s/5cd0a5b84c65fc47ee5599b0df131ee6fad4098b89b74f297c179716b6e1d899" protocol=ttrpc version=3 Jun 20 19:24:36.289881 systemd[1]: Started cri-containerd-5350a63dc1d2f92d88d640b67ec8c631afbdabcaa789b8fd598c1d70233caecb.scope - libcontainer container 5350a63dc1d2f92d88d640b67ec8c631afbdabcaa789b8fd598c1d70233caecb. Jun 20 19:24:36.359984 containerd[1641]: time="2025-06-20T19:24:36.359924479Z" level=info msg="StartContainer for \"5350a63dc1d2f92d88d640b67ec8c631afbdabcaa789b8fd598c1d70233caecb\" returns successfully" Jun 20 19:24:37.345612 kubelet[2940]: I0620 19:24:37.344876 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6647f6b4b7-2w6xq" podStartSLOduration=32.060817368 podStartE2EDuration="50.344818894s" podCreationTimestamp="2025-06-20 19:23:47 +0000 UTC" firstStartedPulling="2025-06-20 19:24:17.92136227 +0000 UTC m=+44.868749915" lastFinishedPulling="2025-06-20 19:24:36.205363801 +0000 UTC m=+63.152751441" observedRunningTime="2025-06-20 19:24:37.313683725 +0000 UTC m=+64.261071368" watchObservedRunningTime="2025-06-20 19:24:37.344818894 +0000 UTC m=+64.292206538" Jun 20 19:24:37.725394 containerd[1641]: time="2025-06-20T19:24:37.724968580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:37.726798 containerd[1641]: time="2025-06-20T19:24:37.726785688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.1: active requests=0, bytes read=8758389" Jun 20 19:24:37.727973 containerd[1641]: time="2025-06-20T19:24:37.727961300Z" level=info msg="ImageCreate event name:\"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:37.729195 containerd[1641]: time="2025-06-20T19:24:37.728782847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:37.729195 containerd[1641]: time="2025-06-20T19:24:37.729132938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.1\" with image id \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:b2a5699992dd6c84cfab94ef60536b9aaf19ad8de648e8e0b92d3733f5f52d23\", size \"10251092\" in 1.523629488s" Jun 20 19:24:37.729195 containerd[1641]: time="2025-06-20T19:24:37.729147128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.1\" returns image reference \"sha256:8a733c30ec1a8c9f3f51e2da387b425052ed4a9ca631da57c6b185183243e8e9\"" Jun 20 19:24:37.761914 containerd[1641]: time="2025-06-20T19:24:37.761886378Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.1\"" Jun 20 19:24:37.830312 containerd[1641]: time="2025-06-20T19:24:37.830283838Z" level=info msg="CreateContainer within sandbox \"81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 20 19:24:37.848347 containerd[1641]: time="2025-06-20T19:24:37.848313425Z" level=info msg="Container d0567be41c0bfae98d562db785a777f745b7c8a782198f5fe9141b939c656139: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:37.859860 containerd[1641]: time="2025-06-20T19:24:37.859822477Z" level=info msg="CreateContainer within sandbox \"81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d0567be41c0bfae98d562db785a777f745b7c8a782198f5fe9141b939c656139\"" Jun 20 19:24:37.868818 containerd[1641]: time="2025-06-20T19:24:37.868795679Z" level=info msg="StartContainer for \"d0567be41c0bfae98d562db785a777f745b7c8a782198f5fe9141b939c656139\"" Jun 20 19:24:37.869864 containerd[1641]: time="2025-06-20T19:24:37.869833661Z" level=info msg="connecting to shim d0567be41c0bfae98d562db785a777f745b7c8a782198f5fe9141b939c656139" address="unix:///run/containerd/s/2e31cca298b0bee2ab83a83f6b4d816a426f7c3dbf87a1d581ae8fc567aac8f1" protocol=ttrpc version=3 Jun 20 19:24:37.889900 systemd[1]: Started cri-containerd-d0567be41c0bfae98d562db785a777f745b7c8a782198f5fe9141b939c656139.scope - libcontainer container d0567be41c0bfae98d562db785a777f745b7c8a782198f5fe9141b939c656139. Jun 20 19:24:37.940914 containerd[1641]: time="2025-06-20T19:24:37.940890961Z" level=info msg="StartContainer for \"d0567be41c0bfae98d562db785a777f745b7c8a782198f5fe9141b939c656139\" returns successfully" Jun 20 19:24:38.194363 containerd[1641]: time="2025-06-20T19:24:38.193949338Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:38.233317 containerd[1641]: time="2025-06-20T19:24:38.233281397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.1: active requests=0, bytes read=77" Jun 20 19:24:38.234584 containerd[1641]: time="2025-06-20T19:24:38.234557506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" with image id \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:f6439af8b6022a48d2c6c75d92ec31fe177e7b6a90c58c78ca3964db2b94e21b\", size \"48798372\" in 472.643284ms" Jun 20 19:24:38.234584 containerd[1641]: time="2025-06-20T19:24:38.234581900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.1\" returns image reference \"sha256:5d29e6e796e41d7383da7c5b73fc136f7e486d40c52f79a04098396b7f85106c\"" Jun 20 19:24:38.235312 containerd[1641]: time="2025-06-20T19:24:38.235243406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\"" Jun 20 19:24:38.238357 containerd[1641]: time="2025-06-20T19:24:38.238308796Z" level=info msg="CreateContainer within sandbox \"efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 20 19:24:38.258501 containerd[1641]: time="2025-06-20T19:24:38.258469697Z" level=info msg="Container 75ad95083c18299544bae887aa3638b44258c80615c1d5ee726072ca024501e2: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:38.263265 
containerd[1641]: time="2025-06-20T19:24:38.263237010Z" level=info msg="CreateContainer within sandbox \"efde91cc5685c2ee619740357be3669b608cb4186e1ace34c5516a7be57bb747\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"75ad95083c18299544bae887aa3638b44258c80615c1d5ee726072ca024501e2\"" Jun 20 19:24:38.264345 containerd[1641]: time="2025-06-20T19:24:38.264319839Z" level=info msg="StartContainer for \"75ad95083c18299544bae887aa3638b44258c80615c1d5ee726072ca024501e2\"" Jun 20 19:24:38.266792 containerd[1641]: time="2025-06-20T19:24:38.266732795Z" level=info msg="connecting to shim 75ad95083c18299544bae887aa3638b44258c80615c1d5ee726072ca024501e2" address="unix:///run/containerd/s/b1cf38c1da3f1b3fdcdb52eef8e901fa55111c17c83e5b3236d12a05c674c1f7" protocol=ttrpc version=3 Jun 20 19:24:38.284851 systemd[1]: Started cri-containerd-75ad95083c18299544bae887aa3638b44258c80615c1d5ee726072ca024501e2.scope - libcontainer container 75ad95083c18299544bae887aa3638b44258c80615c1d5ee726072ca024501e2. Jun 20 19:24:38.368899 containerd[1641]: time="2025-06-20T19:24:38.368822978Z" level=info msg="StartContainer for \"75ad95083c18299544bae887aa3638b44258c80615c1d5ee726072ca024501e2\" returns successfully" Jun 20 19:24:38.436403 kubelet[2940]: I0620 19:24:38.434298 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6647f6b4b7-8zcwx" podStartSLOduration=32.791066095 podStartE2EDuration="51.411331095s" podCreationTimestamp="2025-06-20 19:23:47 +0000 UTC" firstStartedPulling="2025-06-20 19:24:19.614908904 +0000 UTC m=+46.562296545" lastFinishedPulling="2025-06-20 19:24:38.235173902 +0000 UTC m=+65.182561545" observedRunningTime="2025-06-20 19:24:38.410442055 +0000 UTC m=+65.357829705" watchObservedRunningTime="2025-06-20 19:24:38.411331095 +0000 UTC m=+65.358718739" Jun 20 19:24:40.282934 containerd[1641]: time="2025-06-20T19:24:40.282902617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:40.283868 containerd[1641]: time="2025-06-20T19:24:40.283807804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1: active requests=0, bytes read=14705633" Jun 20 19:24:40.285302 containerd[1641]: time="2025-06-20T19:24:40.285282687Z" level=info msg="ImageCreate event name:\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:40.287291 containerd[1641]: time="2025-06-20T19:24:40.286444207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:24:40.287894 containerd[1641]: time="2025-06-20T19:24:40.287715283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" with image id \"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:1a882b6866dd22d783a39f1e041b87a154666ea4dd8b669fe98d0b0fac58d225\", size \"16198288\" in 2.052447713s" Jun 20 19:24:40.287894 containerd[1641]: time="2025-06-20T19:24:40.287735629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.1\" returns image reference 
\"sha256:dfc00385e8755bddd1053a2a482a3559ad6c93bd8b882371b9ed8b5c3dfe22b5\"" Jun 20 19:24:40.314969 containerd[1641]: time="2025-06-20T19:24:40.314776247Z" level=info msg="CreateContainer within sandbox \"81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 20 19:24:40.327374 containerd[1641]: time="2025-06-20T19:24:40.326814596Z" level=info msg="Container 5416b7f225a641c8a63cae62da68cf979978351fa213dc34d2ce6f48c559fa7c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:24:40.335634 containerd[1641]: time="2025-06-20T19:24:40.335604726Z" level=info msg="CreateContainer within sandbox \"81f5738bb3c3ce2d0fb6c21c60bf661e14062f72fba610c5ca6c7f743abf09c3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5416b7f225a641c8a63cae62da68cf979978351fa213dc34d2ce6f48c559fa7c\"" Jun 20 19:24:40.338906 containerd[1641]: time="2025-06-20T19:24:40.337790230Z" level=info msg="StartContainer for \"5416b7f225a641c8a63cae62da68cf979978351fa213dc34d2ce6f48c559fa7c\"" Jun 20 19:24:40.338906 containerd[1641]: time="2025-06-20T19:24:40.338639206Z" level=info msg="connecting to shim 5416b7f225a641c8a63cae62da68cf979978351fa213dc34d2ce6f48c559fa7c" address="unix:///run/containerd/s/2e31cca298b0bee2ab83a83f6b4d816a426f7c3dbf87a1d581ae8fc567aac8f1" protocol=ttrpc version=3 Jun 20 19:24:40.371868 systemd[1]: Started cri-containerd-5416b7f225a641c8a63cae62da68cf979978351fa213dc34d2ce6f48c559fa7c.scope - libcontainer container 5416b7f225a641c8a63cae62da68cf979978351fa213dc34d2ce6f48c559fa7c. Jun 20 19:24:40.477577 containerd[1641]: time="2025-06-20T19:24:40.477542641Z" level=info msg="StartContainer for \"5416b7f225a641c8a63cae62da68cf979978351fa213dc34d2ce6f48c559fa7c\" returns successfully" Jun 20 19:24:40.747627 kubelet[2940]: I0620 19:24:40.747512 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4w9w9" podStartSLOduration=29.391517149 podStartE2EDuration="50.747486441s" podCreationTimestamp="2025-06-20 19:23:50 +0000 UTC" firstStartedPulling="2025-06-20 19:24:18.947943622 +0000 UTC m=+45.895331263" lastFinishedPulling="2025-06-20 19:24:40.303912911 +0000 UTC m=+67.251300555" observedRunningTime="2025-06-20 19:24:40.741485211 +0000 UTC m=+67.688872862" watchObservedRunningTime="2025-06-20 19:24:40.747486441 +0000 UTC m=+67.694874092" Jun 20 19:24:41.442952 containerd[1641]: time="2025-06-20T19:24:41.442909009Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\" id:\"d1d192b4e7aa2ab27bf80e5aae39f16ebc77f0889e1b53eef8abf83451476491\" pid:5398 exited_at:{seconds:1750447481 nanos:349553758}" Jun 20 19:24:41.812350 kubelet[2940]: I0620 19:24:41.810455 2940 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 20 19:24:41.816128 kubelet[2940]: I0620 19:24:41.816107 2940 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 20 19:24:58.831824 containerd[1641]: time="2025-06-20T19:24:58.831789408Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"556ca0e3a770594391239eb7b111268f1909eb8af490afb05b289425389fa0d0\" pid:5435 
exited_at:{seconds:1750447498 nanos:831354209}" Jun 20 19:25:02.221319 systemd[1]: Started sshd@8-139.178.70.105:22-147.75.109.163:34816.service - OpenSSH per-connection server daemon (147.75.109.163:34816). Jun 20 19:25:02.388453 sshd[5480]: Accepted publickey for core from 147.75.109.163 port 34816 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:02.394349 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:02.406204 systemd-logind[1620]: New session 10 of user core. Jun 20 19:25:02.409781 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:25:03.132720 sshd[5482]: Connection closed by 147.75.109.163 port 34816 Jun 20 19:25:03.133066 sshd-session[5480]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:03.138138 systemd-logind[1620]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:25:03.138526 systemd[1]: sshd@8-139.178.70.105:22-147.75.109.163:34816.service: Deactivated successfully. Jun 20 19:25:03.140652 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:25:03.146577 systemd-logind[1620]: Removed session 10. Jun 20 19:25:04.636089 containerd[1641]: time="2025-06-20T19:25:04.635954722Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2\" id:\"a24d1d277994889599ab7237fd7d2747ca10d9b6c8e25b44bee68fe2eb6de4f1\" pid:5516 exited_at:{seconds:1750447504 nanos:635775894}" Jun 20 19:25:05.302180 containerd[1641]: time="2025-06-20T19:25:05.302149993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2\" id:\"4a85dcfb47745ef091a80be3b9182f3c538a38ca282c26c4cb6c0f0497198071\" pid:5537 exited_at:{seconds:1750447505 nanos:301949156}" Jun 20 19:25:06.895138 systemd[1]: Started sshd@9-139.178.70.105:22-20.168.120.250:37476.service - OpenSSH per-connection server daemon (20.168.120.250:37476). Jun 20 19:25:06.982178 sshd[5547]: banner exchange: Connection from 20.168.120.250 port 37476: invalid format Jun 20 19:25:06.983158 systemd[1]: sshd@9-139.178.70.105:22-20.168.120.250:37476.service: Deactivated successfully. Jun 20 19:25:08.143309 systemd[1]: Started sshd@10-139.178.70.105:22-147.75.109.163:45344.service - OpenSSH per-connection server daemon (147.75.109.163:45344). Jun 20 19:25:08.206732 sshd[5551]: Accepted publickey for core from 147.75.109.163 port 45344 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:08.208622 sshd-session[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:08.212206 systemd-logind[1620]: New session 11 of user core. Jun 20 19:25:08.225781 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:25:08.539865 sshd[5553]: Connection closed by 147.75.109.163 port 45344 Jun 20 19:25:08.539763 sshd-session[5551]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:08.542904 systemd-logind[1620]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:25:08.543275 systemd[1]: sshd@10-139.178.70.105:22-147.75.109.163:45344.service: Deactivated successfully. Jun 20 19:25:08.544682 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:25:08.546263 systemd-logind[1620]: Removed session 11. Jun 20 19:25:13.551557 systemd[1]: Started sshd@11-139.178.70.105:22-147.75.109.163:45358.service - OpenSSH per-connection server daemon (147.75.109.163:45358). 
Jun 20 19:25:14.542987 containerd[1641]: time="2025-06-20T19:25:14.542956600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\" id:\"34892ccab9a50176b4d85dca4ff4d48a9d5e4f5b4db15378067174568dc9dab5\" pid:5581 exited_at:{seconds:1750447514 nanos:540276090}" Jun 20 19:25:14.662521 sshd[5594]: Accepted publickey for core from 147.75.109.163 port 45358 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:14.667316 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:14.670729 systemd-logind[1620]: New session 12 of user core. Jun 20 19:25:14.675848 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:25:15.203710 sshd[5596]: Connection closed by 147.75.109.163 port 45358 Jun 20 19:25:15.204263 sshd-session[5594]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:15.213291 systemd[1]: sshd@11-139.178.70.105:22-147.75.109.163:45358.service: Deactivated successfully. Jun 20 19:25:15.214642 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:25:15.220294 systemd-logind[1620]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:25:15.223522 systemd[1]: Started sshd@12-139.178.70.105:22-147.75.109.163:45362.service - OpenSSH per-connection server daemon (147.75.109.163:45362). Jun 20 19:25:15.226993 systemd-logind[1620]: Removed session 12. Jun 20 19:25:15.291730 sshd[5610]: Accepted publickey for core from 147.75.109.163 port 45362 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:15.295159 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:15.307136 systemd-logind[1620]: New session 13 of user core. Jun 20 19:25:15.314973 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:25:15.519873 sshd[5612]: Connection closed by 147.75.109.163 port 45362 Jun 20 19:25:15.520763 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:15.527367 systemd[1]: sshd@12-139.178.70.105:22-147.75.109.163:45362.service: Deactivated successfully. Jun 20 19:25:15.528678 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:25:15.529794 systemd-logind[1620]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:25:15.532638 systemd[1]: Started sshd@13-139.178.70.105:22-147.75.109.163:45368.service - OpenSSH per-connection server daemon (147.75.109.163:45368). Jun 20 19:25:15.541753 systemd-logind[1620]: Removed session 13. Jun 20 19:25:15.603618 sshd[5622]: Accepted publickey for core from 147.75.109.163 port 45368 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:15.604352 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:15.610071 systemd-logind[1620]: New session 14 of user core. Jun 20 19:25:15.616836 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:25:15.754870 sshd[5624]: Connection closed by 147.75.109.163 port 45368 Jun 20 19:25:15.755046 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:15.757430 systemd-logind[1620]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:25:15.757576 systemd[1]: sshd@13-139.178.70.105:22-147.75.109.163:45368.service: Deactivated successfully. Jun 20 19:25:15.760399 systemd[1]: session-14.scope: Deactivated successfully. 
Jun 20 19:25:15.763402 systemd-logind[1620]: Removed session 14. Jun 20 19:25:20.769525 systemd[1]: Started sshd@14-139.178.70.105:22-147.75.109.163:43356.service - OpenSSH per-connection server daemon (147.75.109.163:43356). Jun 20 19:25:21.247385 sshd[5636]: Accepted publickey for core from 147.75.109.163 port 43356 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:21.253912 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:21.256804 systemd-logind[1620]: New session 15 of user core. Jun 20 19:25:21.259793 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:25:22.425090 sshd[5640]: Connection closed by 147.75.109.163 port 43356 Jun 20 19:25:22.430490 sshd-session[5636]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:22.441847 systemd[1]: sshd@14-139.178.70.105:22-147.75.109.163:43356.service: Deactivated successfully. Jun 20 19:25:22.443293 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:25:22.446812 systemd-logind[1620]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:25:22.448090 systemd-logind[1620]: Removed session 15. Jun 20 19:25:27.436460 systemd[1]: Started sshd@15-139.178.70.105:22-147.75.109.163:60402.service - OpenSSH per-connection server daemon (147.75.109.163:60402). Jun 20 19:25:27.683518 sshd[5674]: Accepted publickey for core from 147.75.109.163 port 60402 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:27.685523 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:27.689851 systemd-logind[1620]: New session 16 of user core. Jun 20 19:25:27.695983 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:25:28.067939 containerd[1641]: time="2025-06-20T19:25:28.061816073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"ac939d698ff30119d18cf531ca9e89b3c6a62c0dd9110e206c5bbd4d1e91c317\" pid:5691 exited_at:{seconds:1750447528 nanos:10241501}" Jun 20 19:25:28.067939 containerd[1641]: time="2025-06-20T19:25:28.067846609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"9e974c714870900d89b35e70a7f6b787d70595f841ffb8bf23aa48822c7152d3\" pid:5667 exited_at:{seconds:1750447528 nanos:11262399}" Jun 20 19:25:28.377033 sshd[5700]: Connection closed by 147.75.109.163 port 60402 Jun 20 19:25:28.377492 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:28.380560 systemd[1]: sshd@15-139.178.70.105:22-147.75.109.163:60402.service: Deactivated successfully. Jun 20 19:25:28.382325 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:25:28.383131 systemd-logind[1620]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:25:28.383995 systemd-logind[1620]: Removed session 16. Jun 20 19:25:33.387896 systemd[1]: Started sshd@16-139.178.70.105:22-147.75.109.163:60406.service - OpenSSH per-connection server daemon (147.75.109.163:60406). Jun 20 19:25:33.524911 sshd[5723]: Accepted publickey for core from 147.75.109.163 port 60406 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:33.526138 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:33.529178 systemd-logind[1620]: New session 17 of user core. 
Jun 20 19:25:33.535834 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:25:35.663113 containerd[1641]: time="2025-06-20T19:25:35.663071825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c947d8f994561db6147edb2887fe9e1ab98335d27566bd6996075de44378bbf2\" id:\"220d698b84e90bab411adbd7e3a56e6dc4f8ae908c93956feef39dd886b12b99\" pid:5746 exited_at:{seconds:1750447535 nanos:639847806}" Jun 20 19:25:37.953033 sshd[5725]: Connection closed by 147.75.109.163 port 60406 Jun 20 19:25:37.958405 sshd-session[5723]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:38.012199 systemd[1]: sshd@16-139.178.70.105:22-147.75.109.163:60406.service: Deactivated successfully. Jun 20 19:25:38.013601 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:25:38.014373 systemd-logind[1620]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:25:38.018441 systemd[1]: Started sshd@17-139.178.70.105:22-147.75.109.163:50342.service - OpenSSH per-connection server daemon (147.75.109.163:50342). Jun 20 19:25:38.019256 systemd-logind[1620]: Removed session 17. Jun 20 19:25:38.112357 sshd[5761]: Accepted publickey for core from 147.75.109.163 port 50342 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:38.122264 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:38.145286 systemd-logind[1620]: New session 18 of user core. Jun 20 19:25:38.149806 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:25:38.884348 sshd[5763]: Connection closed by 147.75.109.163 port 50342 Jun 20 19:25:38.885389 sshd-session[5761]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:38.891438 systemd[1]: sshd@17-139.178.70.105:22-147.75.109.163:50342.service: Deactivated successfully. Jun 20 19:25:38.892655 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:25:38.893323 systemd-logind[1620]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:25:38.895626 systemd[1]: Started sshd@18-139.178.70.105:22-147.75.109.163:50348.service - OpenSSH per-connection server daemon (147.75.109.163:50348). Jun 20 19:25:38.896174 systemd-logind[1620]: Removed session 18. Jun 20 19:25:38.999748 sshd[5773]: Accepted publickey for core from 147.75.109.163 port 50348 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:39.000739 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:39.004611 systemd-logind[1620]: New session 19 of user core. Jun 20 19:25:39.007804 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:25:40.224383 sshd[5775]: Connection closed by 147.75.109.163 port 50348 Jun 20 19:25:40.231905 sshd-session[5773]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:40.241681 systemd[1]: sshd@18-139.178.70.105:22-147.75.109.163:50348.service: Deactivated successfully. Jun 20 19:25:40.246097 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:25:40.246873 systemd-logind[1620]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:25:40.249288 systemd[1]: Started sshd@19-139.178.70.105:22-147.75.109.163:50360.service - OpenSSH per-connection server daemon (147.75.109.163:50360). Jun 20 19:25:40.250971 systemd-logind[1620]: Removed session 19. 
Jun 20 19:25:40.582342 sshd[5794]: Accepted publickey for core from 147.75.109.163 port 50360 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:40.591594 sshd-session[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:40.601294 systemd-logind[1620]: New session 20 of user core. Jun 20 19:25:40.605814 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:25:43.339418 kubelet[2940]: E0620 19:25:43.326638 2940 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.153s" Jun 20 19:25:43.900510 containerd[1641]: time="2025-06-20T19:25:43.874477501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46f0523a7c7aa33623b27473163e2a380f961a3593747ef5725e109bd32952b4\" id:\"fd473addd1d3d957b261eb6199b3e14353ce69e3b8d5ef25e2ef43e31127a771\" pid:5817 exited_at:{seconds:1750447543 nanos:687842528}" Jun 20 19:25:47.335991 sshd[5803]: Connection closed by 147.75.109.163 port 50360 Jun 20 19:25:47.379813 sshd-session[5794]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:47.445386 systemd[1]: Started sshd@20-139.178.70.105:22-147.75.109.163:35644.service - OpenSSH per-connection server daemon (147.75.109.163:35644). Jun 20 19:25:47.445743 systemd[1]: sshd@19-139.178.70.105:22-147.75.109.163:50360.service: Deactivated successfully. Jun 20 19:25:47.447931 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:25:47.450601 systemd-logind[1620]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:25:47.452409 systemd-logind[1620]: Removed session 20. Jun 20 19:25:47.607202 sshd[5876]: Accepted publickey for core from 147.75.109.163 port 35644 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:47.606652 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:47.614287 systemd-logind[1620]: New session 21 of user core. Jun 20 19:25:47.619201 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:25:48.219764 sshd[5881]: Connection closed by 147.75.109.163 port 35644 Jun 20 19:25:48.223632 systemd[1]: sshd@20-139.178.70.105:22-147.75.109.163:35644.service: Deactivated successfully. Jun 20 19:25:48.220222 sshd-session[5876]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:48.225250 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:25:48.226453 systemd-logind[1620]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:25:48.228108 systemd-logind[1620]: Removed session 21. Jun 20 19:25:53.229373 systemd[1]: Started sshd@21-139.178.70.105:22-147.75.109.163:35656.service - OpenSSH per-connection server daemon (147.75.109.163:35656). Jun 20 19:25:53.359318 sshd[5893]: Accepted publickey for core from 147.75.109.163 port 35656 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:53.361546 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:53.366269 systemd-logind[1620]: New session 22 of user core. Jun 20 19:25:53.373874 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:25:53.680134 sshd[5895]: Connection closed by 147.75.109.163 port 35656 Jun 20 19:25:53.680590 sshd-session[5893]: pam_unix(sshd:session): session closed for user core Jun 20 19:25:53.682963 systemd[1]: sshd@21-139.178.70.105:22-147.75.109.163:35656.service: Deactivated successfully. 
Jun 20 19:25:53.684334 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:25:53.685682 systemd-logind[1620]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:25:53.686764 systemd-logind[1620]: Removed session 22. Jun 20 19:25:58.629448 containerd[1641]: time="2025-06-20T19:25:58.627253875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06d4a95d773d6d180991fb9df97bcc747d940715230521757614d5a6533b57c2\" id:\"d76c669de8884cb57f7f35666a69a670092f2aa95b85311a973127c558a00aac\" pid:5918 exited_at:{seconds:1750447558 nanos:527536057}" Jun 20 19:25:58.740050 systemd[1]: Started sshd@22-139.178.70.105:22-147.75.109.163:51428.service - OpenSSH per-connection server daemon (147.75.109.163:51428). Jun 20 19:25:58.885410 sshd[5963]: Accepted publickey for core from 147.75.109.163 port 51428 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:25:58.887458 sshd-session[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:25:58.896167 systemd-logind[1620]: New session 23 of user core. Jun 20 19:25:58.899893 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:26:00.105030 sshd[5965]: Connection closed by 147.75.109.163 port 51428 Jun 20 19:26:00.105227 sshd-session[5963]: pam_unix(sshd:session): session closed for user core Jun 20 19:26:00.108399 systemd[1]: sshd@22-139.178.70.105:22-147.75.109.163:51428.service: Deactivated successfully. Jun 20 19:26:00.109547 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:26:00.110143 systemd-logind[1620]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:26:00.116666 systemd-logind[1620]: Removed session 23.