Jul 7 06:08:23.710971 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:08:23.710987 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:08:23.710993 kernel: Disabled fast string operations
Jul 7 06:08:23.710997 kernel: BIOS-provided physical RAM map:
Jul 7 06:08:23.711001 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 7 06:08:23.711005 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 7 06:08:23.711012 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 7 06:08:23.711016 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 7 06:08:23.711020 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 7 06:08:23.711025 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 7 06:08:23.711029 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 7 06:08:23.711034 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 7 06:08:23.711038 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 7 06:08:23.711042 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 7 06:08:23.711048 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 7 06:08:23.711053 kernel: NX (Execute Disable) protection: active
Jul 7 06:08:23.711058 kernel: APIC: Static calls initialized
Jul 7 06:08:23.711063 kernel: SMBIOS 2.7 present.
Jul 7 06:08:23.711068 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 7 06:08:23.711073 kernel: DMI: Memory slots populated: 1/128
Jul 7 06:08:23.711079 kernel: vmware: hypercall mode: 0x00
Jul 7 06:08:23.711084 kernel: Hypervisor detected: VMware
Jul 7 06:08:23.711089 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 7 06:08:23.711093 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 7 06:08:23.711098 kernel: vmware: using clock offset of 6505486711 ns
Jul 7 06:08:23.711103 kernel: tsc: Detected 3408.000 MHz processor
Jul 7 06:08:23.711108 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:08:23.711114 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:08:23.711119 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 7 06:08:23.711124 kernel: total RAM covered: 3072M
Jul 7 06:08:23.711130 kernel: Found optimal setting for mtrr clean up
Jul 7 06:08:23.711135 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 7 06:08:23.711140 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jul 7 06:08:23.711145 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:08:23.711150 kernel: Using GB pages for direct mapping
Jul 7 06:08:23.711155 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:08:23.711160 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 7 06:08:23.711165 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 7 06:08:23.711170 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 7 06:08:23.711176 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 7 06:08:23.711183 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 7 06:08:23.711188 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 7 06:08:23.711193 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 7 06:08:23.711207 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 7 06:08:23.711214 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 7 06:08:23.711221 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 7 06:08:23.711226 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 7 06:08:23.711231 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 7 06:08:23.711236 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 7 06:08:23.711242 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 7 06:08:23.711247 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 7 06:08:23.711252 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 7 06:08:23.711257 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 7 06:08:23.711262 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 7 06:08:23.711268 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 7 06:08:23.711273 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 7 06:08:23.711279 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 7 06:08:23.711284 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 7 06:08:23.711289 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 7 06:08:23.711294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 7 06:08:23.711299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 7 06:08:23.711304 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Jul 7 06:08:23.711310 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Jul 7 06:08:23.711316 kernel: Zone ranges:
Jul 7 06:08:23.711321 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:08:23.711326 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 7 06:08:23.711331 kernel: Normal empty
Jul 7 06:08:23.711336 kernel: Device empty
Jul 7 06:08:23.711342 kernel: Movable zone start for each node
Jul 7 06:08:23.711352 kernel: Early memory node ranges
Jul 7 06:08:23.711358 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 7 06:08:23.711363 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 7 06:08:23.711368 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 7 06:08:23.711380 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 7 06:08:23.711386 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:08:23.711391 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 7 06:08:23.711397 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 7 06:08:23.711402 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 7 06:08:23.711407 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 7 06:08:23.711413 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 7 06:08:23.711418 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 7 06:08:23.711423 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 7 06:08:23.711430 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 7 06:08:23.711435 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 7 06:08:23.711440 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 7 06:08:23.711445 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 7 06:08:23.711450 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 7 06:08:23.711455 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 7 06:08:23.711460 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 7 06:08:23.712490 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 7 06:08:23.712498 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 7 06:08:23.712503 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 7 06:08:23.712511 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 7 06:08:23.712516 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 7 06:08:23.712522 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 7 06:08:23.712527 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 7 06:08:23.712532 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 7 06:08:23.712537 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 7 06:08:23.712542 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 7 06:08:23.712547 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 7 06:08:23.712552 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 7 06:08:23.712559 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 7 06:08:23.712564 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 7 06:08:23.712569 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 7 06:08:23.712574 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 7 06:08:23.712579 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 7 06:08:23.712584 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 7 06:08:23.712589 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 7 06:08:23.712594 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 7 06:08:23.712600 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 7 06:08:23.712605 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 7 06:08:23.712611 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 7 06:08:23.712616 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 7 06:08:23.712621 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 7 06:08:23.712626 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 7 06:08:23.712631 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 7 06:08:23.712636 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 7 06:08:23.712642 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 7 06:08:23.712651 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 7 06:08:23.712656 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 7 06:08:23.712662 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 7 06:08:23.712667 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 7 06:08:23.712674 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 7 06:08:23.712679 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 7 06:08:23.712685 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 7 06:08:23.712690 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 7 06:08:23.712696 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 7 06:08:23.712701 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 7 06:08:23.712706 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 7 06:08:23.712713 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 7 06:08:23.712718 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 7 06:08:23.712724 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 7 06:08:23.712729 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 7 06:08:23.712735 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 7 06:08:23.712740 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 7 06:08:23.712746 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 7 06:08:23.712751 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 7 06:08:23.712756 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 7 06:08:23.712762 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 7 06:08:23.712769 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 7 06:08:23.712774 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 7 06:08:23.712780 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 7 06:08:23.712786 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 7 06:08:23.712791 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 7 06:08:23.712796 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 7 06:08:23.712802 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 7 06:08:23.712807 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 7 06:08:23.712813 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 7 06:08:23.712818 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 7 06:08:23.712825 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 7 06:08:23.712830 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 7 06:08:23.712835 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 7 06:08:23.712841 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 7 06:08:23.712846 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 7 06:08:23.712852 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 7 06:08:23.712857 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 7 06:08:23.712862 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 7 06:08:23.712868 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 7 06:08:23.712874 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 7 06:08:23.712880 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 7 06:08:23.712885 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 7 06:08:23.712890 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 7 06:08:23.712896 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 7 06:08:23.712901 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 7 06:08:23.712907 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 7 06:08:23.712912 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 7 06:08:23.712918 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 7 06:08:23.712923 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 7 06:08:23.712930 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 7 06:08:23.712935 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 7 06:08:23.712940 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 7 06:08:23.712946 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 7 06:08:23.712951 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 7 06:08:23.712957 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 7 06:08:23.712962 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 7 06:08:23.712967 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 7 06:08:23.712973 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 7 06:08:23.712978 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 7 06:08:23.712984 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 7 06:08:23.712990 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 7 06:08:23.712995 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 7 06:08:23.713001 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 7 06:08:23.713006 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 7 06:08:23.713011 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 7 06:08:23.713017 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 7 06:08:23.713023 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 7 06:08:23.713028 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 7 06:08:23.713034 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 7 06:08:23.713040 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 7 06:08:23.713046 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 7 06:08:23.713051 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 7 06:08:23.713057 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 7 06:08:23.713062 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 7 06:08:23.713067 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 7 06:08:23.713073 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 7 06:08:23.713079 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 7 06:08:23.713084 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 7 06:08:23.713090 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 7 06:08:23.713096 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 7 06:08:23.713102 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 7 06:08:23.713107 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 7 06:08:23.713112 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 7 06:08:23.713118 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 7 06:08:23.713123 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 7 06:08:23.713129 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 7 06:08:23.713134 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 7 06:08:23.713140 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:08:23.713150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 7 06:08:23.713156 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:08:23.713161 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 7 06:08:23.713167 kernel: TSC deadline timer available
Jul 7 06:08:23.713173 kernel: CPU topo: Max. logical packages: 128
Jul 7 06:08:23.713178 kernel: CPU topo: Max. logical dies: 128
Jul 7 06:08:23.713183 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:08:23.713189 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:08:23.713194 kernel: CPU topo: Num. cores per package: 1
Jul 7 06:08:23.713200 kernel: CPU topo: Num. threads per package: 1
Jul 7 06:08:23.713206 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Jul 7 06:08:23.713212 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 7 06:08:23.713218 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 7 06:08:23.713224 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:08:23.713229 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jul 7 06:08:23.713235 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jul 7 06:08:23.713240 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jul 7 06:08:23.713246 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 7 06:08:23.713251 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 7 06:08:23.713258 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 7 06:08:23.713264 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 7 06:08:23.713269 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 7 06:08:23.713274 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 7 06:08:23.713280 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 7 06:08:23.713285 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 7 06:08:23.713290 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 7 06:08:23.713296 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 7 06:08:23.713301 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 7 06:08:23.713308 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 7 06:08:23.713314 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 7 06:08:23.713319 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 7 06:08:23.713324 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 7 06:08:23.713330 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 7 06:08:23.713336 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:08:23.713342 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:08:23.713348 kernel: random: crng init done
Jul 7 06:08:23.713354 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 7 06:08:23.713360 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 7 06:08:23.713365 kernel: printk: log_buf_len min size: 262144 bytes
Jul 7 06:08:23.713371 kernel: printk: log_buf_len: 1048576 bytes
Jul 7 06:08:23.713376 kernel: printk: early log buf free: 245576(93%)
Jul 7 06:08:23.713382 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:08:23.713387 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 06:08:23.713393 kernel: Fallback order for Node 0: 0
Jul 7 06:08:23.713399 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Jul 7 06:08:23.713405 kernel: Policy zone: DMA32
Jul 7 06:08:23.713411 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:08:23.713417 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 7 06:08:23.713422 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:08:23.713428 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:08:23.713433 kernel: Dynamic Preempt: voluntary
Jul 7 06:08:23.713439 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:08:23.714030 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:08:23.714040 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 7 06:08:23.714048 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:08:23.714054 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:08:23.714059 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:08:23.714065 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:08:23.714071 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 7 06:08:23.714076 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 7 06:08:23.714082 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 7 06:08:23.714088 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jul 7 06:08:23.714093 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 7 06:08:23.714100 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jul 7 06:08:23.714106 kernel: Console: colour VGA+ 80x25
Jul 7 06:08:23.714111 kernel: printk: legacy console [tty0] enabled
Jul 7 06:08:23.714117 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:08:23.714122 kernel: ACPI: Core revision 20240827
Jul 7 06:08:23.714128 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 7 06:08:23.714134 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:08:23.714139 kernel: x2apic enabled
Jul 7 06:08:23.714145 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:08:23.714151 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:08:23.714157 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 7 06:08:23.714163 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 7 06:08:23.714168 kernel: Disabled fast string operations
Jul 7 06:08:23.714174 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 7 06:08:23.714179 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 7 06:08:23.714185 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:08:23.714191 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Jul 7 06:08:23.714196 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 7 06:08:23.714203 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 7 06:08:23.714208 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 7 06:08:23.714214 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:08:23.714219 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:08:23.714225 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 7 06:08:23.714230 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 7 06:08:23.714236 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 7 06:08:23.714241 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 06:08:23.714247 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:08:23.714253 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:08:23.714259 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:08:23.714265 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:08:23.714271 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 06:08:23.714276 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:08:23.714282 kernel: pid_max: default: 131072 minimum: 1024
Jul 7 06:08:23.714287 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:08:23.714293 kernel: landlock: Up and running.
Jul 7 06:08:23.714298 kernel: SELinux: Initializing.
Jul 7 06:08:23.714305 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 06:08:23.714311 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 06:08:23.714316 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jul 7 06:08:23.714322 kernel: Performance Events: Skylake events, core PMU driver.
Jul 7 06:08:23.714327 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jul 7 06:08:23.714333 kernel: core: CPUID marked event: 'instructions' unavailable
Jul 7 06:08:23.714339 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jul 7 06:08:23.714344 kernel: core: CPUID marked event: 'cache references' unavailable
Jul 7 06:08:23.714349 kernel: core: CPUID marked event: 'cache misses' unavailable
Jul 7 06:08:23.714356 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jul 7 06:08:23.714362 kernel: core: CPUID marked event: 'branch misses' unavailable
Jul 7 06:08:23.714367 kernel: ... version: 1
Jul 7 06:08:23.714373 kernel: ... bit width: 48
Jul 7 06:08:23.714378 kernel: ... generic registers: 4
Jul 7 06:08:23.714384 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:08:23.714389 kernel: ... max period: 000000007fffffff
Jul 7 06:08:23.714394 kernel: ... fixed-purpose events: 0
Jul 7 06:08:23.714400 kernel: ... event mask: 000000000000000f
Jul 7 06:08:23.714411 kernel: signal: max sigframe size: 1776
Jul 7 06:08:23.714417 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:08:23.714422 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:08:23.714428 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Jul 7 06:08:23.714433 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 06:08:23.714442 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:08:23.714448 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:08:23.714454 kernel: .... node #0, CPUs: #1
Jul 7 06:08:23.714459 kernel: Disabled fast string operations
Jul 7 06:08:23.714475 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 06:08:23.714482 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jul 7 06:08:23.714488 kernel: Memory: 1924240K/2096628K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 161004K reserved, 0K cma-reserved)
Jul 7 06:08:23.714493 kernel: devtmpfs: initialized
Jul 7 06:08:23.714499 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:08:23.714505 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jul 7 06:08:23.714513 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:08:23.714519 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jul 7 06:08:23.714525 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:08:23.714532 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:08:23.714538 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:08:23.714549 kernel: audit: type=2000 audit(1751868501.289:1): state=initialized audit_enabled=0 res=1
Jul 7 06:08:23.714555 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:08:23.714561 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:08:23.714566 kernel: cpuidle: using governor menu
Jul 7 06:08:23.714572 kernel: Simple Boot Flag at 0x36 set to 0x80
Jul 7 06:08:23.714577 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:08:23.714583 kernel: dca service started, version 1.12.1
Jul 7 06:08:23.714591 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Jul 7 06:08:23.714603 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:08:23.714613 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:08:23.714619 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:08:23.714625 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:08:23.714631 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:08:23.714637 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:08:23.714643 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:08:23.714648 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:08:23.714656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:08:23.714661 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:08:23.714667 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jul 7 06:08:23.714673 kernel: ACPI: Interpreter enabled
Jul 7 06:08:23.714679 kernel: ACPI: PM: (supports S0 S1 S5)
Jul 7 06:08:23.714685 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:08:23.714690 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:08:23.714696 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:08:23.714702 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jul 7 06:08:23.714709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jul 7 06:08:23.714801 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:08:23.714860 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jul 7 06:08:23.714911 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jul 7 06:08:23.714919 kernel: PCI host bridge to bus 0000:00
Jul 7 06:08:23.714973 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:08:23.715022 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jul 7 06:08:23.715067 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 06:08:23.715111 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:08:23.715155 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Jul 7 06:08:23.715199 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Jul 7 06:08:23.715258 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:08:23.715317 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Jul 7 06:08:23.715373 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jul 7 06:08:23.715429 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:08:23.715557 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Jul 7 06:08:23.715616 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Jul 7 06:08:23.715667 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 7 06:08:23.715718 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 7 06:08:23.715771 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 7 06:08:23.715821 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 7 06:08:23.715875 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 7 06:08:23.715926 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Jul 7 06:08:23.715978 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Jul 7 06:08:23.716034 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Jul 7 06:08:23.716085 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Jul 7 06:08:23.718549 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Jul 7 06:08:23.718611 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:08:23.718665 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Jul 7 06:08:23.718721 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Jul 7 06:08:23.718772 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Jul 7 06:08:23.718823 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Jul 7 06:08:23.718873 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:08:23.718928 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Jul 7 06:08:23.718980 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jul 7 06:08:23.719030 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jul 7 06:08:23.719082 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jul 7 06:08:23.719132 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jul 7 06:08:23.719194 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 7 06:08:23.719247 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jul 7 06:08:23.719299 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jul 7 06:08:23.719351 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jul 7 06:08:23.719402 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jul 7 06:08:23.719457 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 7 06:08:23.719530 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jul 7 06:08:23.719581 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jul 7 06:08:23.719632 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jul 7 06:08:23.719683 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jul 7 06:08:23.719734 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jul 7 06:08:23.719788 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 7 06:08:23.719839 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jul 7 06:08:23.719893 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jul 7 06:08:23.719943 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jul 7 06:08:23.719994 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jul 7 06:08:23.720045 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jul 7 06:08:23.720099 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 7 06:08:23.720158 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jul 7 06:08:23.720211 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jul 7 06:08:23.720262 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jul 7 06:08:23.720313 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jul 7 06:08:23.720370 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Jul 7 06:08:23.720422 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jul 7 06:08:23.721496 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jul 7 06:08:23.721554 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jul 7 06:08:23.721609 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jul 7 06:08:23.721664 kernel:
pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.721715 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 7 06:08:23.721767 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 7 06:08:23.721817 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 7 06:08:23.721868 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.721923 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.721978 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 7 06:08:23.722030 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 7 06:08:23.722080 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 7 06:08:23.722131 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.722194 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.722245 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 7 06:08:23.722297 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 7 06:08:23.722350 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 7 06:08:23.722401 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.722457 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.723540 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 7 06:08:23.723597 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 7 06:08:23.723650 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 7 06:08:23.723703 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.723760 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.723818 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 7 06:08:23.723869 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 7 06:08:23.723920 
kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 7 06:08:23.723971 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 7 06:08:23.724022 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.724077 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.724132 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 7 06:08:23.724184 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 7 06:08:23.724234 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 7 06:08:23.724285 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 7 06:08:23.724336 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.724391 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.724442 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 7 06:08:23.724505 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 7 06:08:23.724556 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 7 06:08:23.724607 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.724663 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.724714 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 7 06:08:23.724765 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 7 06:08:23.724815 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 7 06:08:23.724866 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.724925 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.724977 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 7 06:08:23.725028 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 7 06:08:23.725079 kernel: pci 0000:00:16.5: bridge window [mem 
0xe6700000-0xe67fffff 64bit pref] Jul 7 06:08:23.725129 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.725185 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.725237 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 7 06:08:23.725290 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 7 06:08:23.725341 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 7 06:08:23.725392 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.725447 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.727903 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 7 06:08:23.727961 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 7 06:08:23.728013 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 7 06:08:23.728067 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.728124 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.728176 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 7 06:08:23.728228 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 7 06:08:23.728279 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 7 06:08:23.728330 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 7 06:08:23.728380 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.730088 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.730178 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 7 06:08:23.730245 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 7 06:08:23.730312 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 7 06:08:23.730386 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 7 06:08:23.730449 
kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.730533 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.730598 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 7 06:08:23.730656 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 7 06:08:23.730708 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 7 06:08:23.730759 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 7 06:08:23.730812 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.730869 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.730920 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 7 06:08:23.730971 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 7 06:08:23.731022 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 7 06:08:23.731073 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.731130 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.731185 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 7 06:08:23.731236 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 7 06:08:23.731287 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 7 06:08:23.731338 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.731395 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.731448 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 7 06:08:23.731797 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 7 06:08:23.731858 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 7 06:08:23.731913 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.731971 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 
0x060400 PCIe Root Port Jul 7 06:08:23.732028 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 7 06:08:23.732082 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 7 06:08:23.732133 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 7 06:08:23.732204 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.732266 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.732318 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 7 06:08:23.732368 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 7 06:08:23.732419 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 7 06:08:23.732480 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.732539 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.732592 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 7 06:08:23.732646 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 7 06:08:23.732696 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 7 06:08:23.732747 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 7 06:08:23.732798 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.732854 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.732906 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 7 06:08:23.732956 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 7 06:08:23.733009 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 7 06:08:23.733060 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 7 06:08:23.733110 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.733171 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.733223 kernel: pci 
0000:00:18.2: PCI bridge to [bus 1d] Jul 7 06:08:23.733274 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 7 06:08:23.733324 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 7 06:08:23.733378 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.733433 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.735365 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 7 06:08:23.735426 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 7 06:08:23.735492 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 7 06:08:23.735552 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.735800 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.735949 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 7 06:08:23.736002 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 7 06:08:23.736053 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 7 06:08:23.736128 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.736201 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.736254 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 7 06:08:23.736306 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 7 06:08:23.736358 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 7 06:08:23.736411 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.736482 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.736538 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 7 06:08:23.736597 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 7 06:08:23.736648 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit 
pref] Jul 7 06:08:23.736699 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.736754 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jul 7 06:08:23.736809 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 7 06:08:23.736861 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 7 06:08:23.736911 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 7 06:08:23.736962 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.737017 kernel: pci_bus 0000:01: extended config space not accessible Jul 7 06:08:23.737070 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 06:08:23.737123 kernel: pci_bus 0000:02: extended config space not accessible Jul 7 06:08:23.737134 kernel: acpiphp: Slot [32] registered Jul 7 06:08:23.737141 kernel: acpiphp: Slot [33] registered Jul 7 06:08:23.737147 kernel: acpiphp: Slot [34] registered Jul 7 06:08:23.737152 kernel: acpiphp: Slot [35] registered Jul 7 06:08:23.737158 kernel: acpiphp: Slot [36] registered Jul 7 06:08:23.737164 kernel: acpiphp: Slot [37] registered Jul 7 06:08:23.737170 kernel: acpiphp: Slot [38] registered Jul 7 06:08:23.737176 kernel: acpiphp: Slot [39] registered Jul 7 06:08:23.737182 kernel: acpiphp: Slot [40] registered Jul 7 06:08:23.737189 kernel: acpiphp: Slot [41] registered Jul 7 06:08:23.737195 kernel: acpiphp: Slot [42] registered Jul 7 06:08:23.737201 kernel: acpiphp: Slot [43] registered Jul 7 06:08:23.737207 kernel: acpiphp: Slot [44] registered Jul 7 06:08:23.737213 kernel: acpiphp: Slot [45] registered Jul 7 06:08:23.737219 kernel: acpiphp: Slot [46] registered Jul 7 06:08:23.737224 kernel: acpiphp: Slot [47] registered Jul 7 06:08:23.737230 kernel: acpiphp: Slot [48] registered Jul 7 06:08:23.737236 kernel: acpiphp: Slot [49] registered Jul 7 06:08:23.737242 kernel: acpiphp: Slot [50] registered Jul 7 06:08:23.737249 kernel: acpiphp: Slot [51] registered Jul 7 06:08:23.737255 
kernel: acpiphp: Slot [52] registered Jul 7 06:08:23.737261 kernel: acpiphp: Slot [53] registered Jul 7 06:08:23.737267 kernel: acpiphp: Slot [54] registered Jul 7 06:08:23.737273 kernel: acpiphp: Slot [55] registered Jul 7 06:08:23.737279 kernel: acpiphp: Slot [56] registered Jul 7 06:08:23.737285 kernel: acpiphp: Slot [57] registered Jul 7 06:08:23.737291 kernel: acpiphp: Slot [58] registered Jul 7 06:08:23.737298 kernel: acpiphp: Slot [59] registered Jul 7 06:08:23.737304 kernel: acpiphp: Slot [60] registered Jul 7 06:08:23.737311 kernel: acpiphp: Slot [61] registered Jul 7 06:08:23.737316 kernel: acpiphp: Slot [62] registered Jul 7 06:08:23.737322 kernel: acpiphp: Slot [63] registered Jul 7 06:08:23.737400 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 7 06:08:23.737457 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 7 06:08:23.739071 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 7 06:08:23.739131 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 7 06:08:23.739186 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 7 06:08:23.739241 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 7 06:08:23.739319 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Jul 7 06:08:23.739373 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Jul 7 06:08:23.739425 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 7 06:08:23.739492 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 7 06:08:23.739576 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 06:08:23.739632 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 7 06:08:23.739688 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 7 06:08:23.739742 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 7 06:08:23.739795 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 7 06:08:23.739848 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 7 06:08:23.739902 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 7 06:08:23.739955 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 7 06:08:23.740008 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 7 06:08:23.740063 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 7 06:08:23.740121 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Jul 7 06:08:23.740184 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Jul 7 06:08:23.740239 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Jul 7 06:08:23.740291 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Jul 7 06:08:23.740343 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Jul 7 06:08:23.740395 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jul 7 06:08:23.740451 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 7 06:08:23.740524 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 06:08:23.740578 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 7 06:08:23.740632 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 7 06:08:23.740687 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 7 06:08:23.740741 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 7 06:08:23.740795 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 7 06:08:23.740849 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 7 06:08:23.740905 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 7 06:08:23.740959 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 7 06:08:23.741012 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 7 06:08:23.741065 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 7 06:08:23.741118 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 7 06:08:23.741171 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 7 06:08:23.741225 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 7 06:08:23.741282 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 7 06:08:23.741334 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 7 06:08:23.741388 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 7 06:08:23.741441 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 7 06:08:23.741503 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 7 06:08:23.741556 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 7 06:08:23.741609 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 7 06:08:23.741660 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 7 06:08:23.741716 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 7 06:08:23.741769 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 7 06:08:23.741821 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 7 06:08:23.741874 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 7 06:08:23.741884 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 7 06:08:23.741890 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 7 06:08:23.741896 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 7 
06:08:23.741905 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 06:08:23.741911 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 7 06:08:23.741917 kernel: iommu: Default domain type: Translated Jul 7 06:08:23.741923 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 06:08:23.741929 kernel: PCI: Using ACPI for IRQ routing Jul 7 06:08:23.741935 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 06:08:23.741942 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 7 06:08:23.741948 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 7 06:08:23.741999 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 7 06:08:23.742053 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 7 06:08:23.742103 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 06:08:23.742112 kernel: vgaarb: loaded Jul 7 06:08:23.742119 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 7 06:08:23.742125 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 7 06:08:23.742131 kernel: clocksource: Switched to clocksource tsc-early Jul 7 06:08:23.742137 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 06:08:23.742143 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:08:23.742150 kernel: pnp: PnP ACPI init Jul 7 06:08:23.742213 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 7 06:08:23.742261 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 7 06:08:23.742307 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 7 06:08:23.742360 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 7 06:08:23.742410 kernel: pnp 00:06: [dma 2] Jul 7 06:08:23.742460 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 7 06:08:23.742527 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 7 06:08:23.742574 kernel: system 00:07: 
[mem 0xfe800000-0xfe9fffff] has been reserved Jul 7 06:08:23.742582 kernel: pnp: PnP ACPI: found 8 devices Jul 7 06:08:23.742588 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 06:08:23.742595 kernel: NET: Registered PF_INET protocol family Jul 7 06:08:23.742601 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 06:08:23.742607 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 06:08:23.742613 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:08:23.742619 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 06:08:23.742627 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 06:08:23.742633 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 06:08:23.742640 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 06:08:23.742646 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 06:08:23.742652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:08:23.742658 kernel: NET: Registered PF_XDP protocol family Jul 7 06:08:23.742711 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 7 06:08:23.742764 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 7 06:08:23.742820 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 7 06:08:23.742872 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 7 06:08:23.742925 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 7 06:08:23.742977 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 7 06:08:23.743031 kernel: pci 0000:00:16.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 7 06:08:23.743083 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 7 06:08:23.743135 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 7 06:08:23.743191 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 7 06:08:23.743246 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 7 06:08:23.743299 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 7 06:08:23.743351 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 7 06:08:23.743404 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 7 06:08:23.743457 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 7 06:08:23.743540 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 7 06:08:23.743594 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 7 06:08:23.743646 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 7 06:08:23.743702 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 7 06:08:23.743754 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 7 06:08:23.743806 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 7 06:08:23.743857 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 7 06:08:23.743909 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 7 06:08:23.743960 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Jul 7 06:08:23.744011 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit 
pref]: assigned Jul 7 06:08:23.744064 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744115 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744167 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744218 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744269 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744320 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744371 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744422 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744486 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744540 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744592 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744642 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744693 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744745 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744796 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744847 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.744902 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.744953 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745004 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745055 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: 
failed to assign Jul 7 06:08:23.745106 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745157 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745207 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745258 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745312 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745363 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745414 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745471 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745523 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745574 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745625 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745674 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745728 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745779 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745830 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745882 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.745934 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.745985 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746039 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746109 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: 
failed to assign Jul 7 06:08:23.746173 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746226 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746277 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746328 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746379 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746429 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746489 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746540 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746594 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746645 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746695 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746745 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746795 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746846 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746896 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.746946 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.746997 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747048 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747101 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747152 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: 
failed to assign Jul 7 06:08:23.747202 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747253 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747304 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747361 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747428 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747496 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747548 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747603 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747654 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747704 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747755 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747805 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747855 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.747905 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.747959 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.748009 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.748059 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.748111 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.748162 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.748212 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: 
failed to assign Jul 7 06:08:23.748262 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.748313 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.748363 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jul 7 06:08:23.748415 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jul 7 06:08:23.748477 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 06:08:23.748533 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 7 06:08:23.748584 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 7 06:08:23.748634 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 7 06:08:23.748686 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 7 06:08:23.748740 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Jul 7 06:08:23.748792 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 7 06:08:23.748846 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 7 06:08:23.748897 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 7 06:08:23.748948 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 7 06:08:23.749000 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 7 06:08:23.749050 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 7 06:08:23.749100 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 7 06:08:23.749157 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 7 06:08:23.749214 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 7 06:08:23.749265 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 7 06:08:23.749318 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 7 06:08:23.749368 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 7 06:08:23.749418 kernel: pci 
0000:00:15.3: PCI bridge to [bus 06] Jul 7 06:08:23.749478 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 7 06:08:23.749533 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 7 06:08:23.749584 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 7 06:08:23.749634 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 7 06:08:23.749684 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 7 06:08:23.749737 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 7 06:08:23.749789 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 7 06:08:23.749840 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 7 06:08:23.749891 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 7 06:08:23.749941 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 7 06:08:23.749991 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 7 06:08:23.750042 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 7 06:08:23.750092 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 7 06:08:23.750144 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 7 06:08:23.750206 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Jul 7 06:08:23.750257 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 7 06:08:23.750309 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 7 06:08:23.750360 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 7 06:08:23.750411 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 7 06:08:23.750469 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 7 06:08:23.750528 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 7 06:08:23.750583 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 7 06:08:23.750634 kernel: pci 
0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 7 06:08:23.750687 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 7 06:08:23.750738 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 7 06:08:23.750788 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 7 06:08:23.750839 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 7 06:08:23.750891 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 7 06:08:23.750943 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 7 06:08:23.750995 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 7 06:08:23.751050 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 7 06:08:23.751101 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 7 06:08:23.751151 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 7 06:08:23.751203 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 7 06:08:23.751253 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 7 06:08:23.751304 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 7 06:08:23.751356 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 7 06:08:23.751409 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 7 06:08:23.751460 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 7 06:08:23.751520 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 7 06:08:23.751570 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 7 06:08:23.751621 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 7 06:08:23.751672 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 7 06:08:23.751724 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 7 06:08:23.751775 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 7 06:08:23.751829 kernel: pci 
0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 7 06:08:23.751881 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 7 06:08:23.751932 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 7 06:08:23.751984 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 7 06:08:23.752035 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 7 06:08:23.752088 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 7 06:08:23.752139 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 7 06:08:23.752196 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 7 06:08:23.752247 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 7 06:08:23.752299 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 7 06:08:23.752351 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 7 06:08:23.752402 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 7 06:08:23.752453 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 7 06:08:23.752524 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 7 06:08:23.752575 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 7 06:08:23.752627 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 7 06:08:23.752677 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 7 06:08:23.752727 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 7 06:08:23.752781 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 7 06:08:23.752831 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 7 06:08:23.752881 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 7 06:08:23.752932 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 7 06:08:23.752982 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 7 06:08:23.753032 kernel: pci 
0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 7 06:08:23.753088 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 7 06:08:23.753138 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 7 06:08:23.753193 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 7 06:08:23.753243 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 7 06:08:23.753294 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 7 06:08:23.753344 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 7 06:08:23.753395 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 7 06:08:23.753445 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 7 06:08:23.753514 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 7 06:08:23.753566 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 7 06:08:23.753620 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 7 06:08:23.753674 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 7 06:08:23.753724 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 7 06:08:23.753774 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 7 06:08:23.753827 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 7 06:08:23.753878 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 7 06:08:23.753928 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 7 06:08:23.753982 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 7 06:08:23.754033 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 7 06:08:23.754084 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 7 06:08:23.754137 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 7 06:08:23.754189 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 7 06:08:23.754240 kernel: pci 
0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 7 06:08:23.754293 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 7 06:08:23.754346 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 7 06:08:23.754396 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 7 06:08:23.754447 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 7 06:08:23.754648 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 7 06:08:23.754696 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 7 06:08:23.754743 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 7 06:08:23.754788 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 7 06:08:23.754844 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 7 06:08:23.754896 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 7 06:08:23.754952 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 7 06:08:23.755251 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 7 06:08:23.755300 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 7 06:08:23.755351 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 7 06:08:23.755399 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 7 06:08:23.755447 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 7 06:08:23.755543 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 7 06:08:23.755593 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 7 06:08:23.755640 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 7 06:08:23.755695 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 7 06:08:23.755742 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 7 06:08:23.755789 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit 
pref] Jul 7 06:08:23.755839 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 7 06:08:23.755889 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 7 06:08:23.755936 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 7 06:08:23.755987 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 7 06:08:23.756034 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 7 06:08:23.756085 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 7 06:08:23.756133 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 7 06:08:23.756194 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 7 06:08:23.756242 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 7 06:08:23.756293 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 7 06:08:23.756340 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 7 06:08:23.756392 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 7 06:08:23.756439 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 7 06:08:23.756502 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 7 06:08:23.756549 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 7 06:08:23.756595 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 7 06:08:23.756739 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 7 06:08:23.756789 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 7 06:08:23.756835 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 7 06:08:23.756889 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 7 06:08:23.756935 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 7 06:08:23.756981 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 7 06:08:23.757032 kernel: pci_bus 
0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 7 06:08:23.757079 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 7 06:08:23.757130 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 7 06:08:23.757180 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 7 06:08:23.757231 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 7 06:08:23.757279 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 7 06:08:23.757331 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 7 06:08:23.757379 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 7 06:08:23.757430 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 7 06:08:23.757501 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 7 06:08:23.757557 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 7 06:08:23.757604 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 7 06:08:23.757650 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 7 06:08:23.757701 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 7 06:08:23.757748 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 7 06:08:23.757793 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 7 06:08:23.757847 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 7 06:08:23.757895 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 7 06:08:23.757941 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 7 06:08:23.757992 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 7 06:08:23.758038 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 7 06:08:23.758088 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 7 06:08:23.758135 kernel: pci_bus 0000:17: resource 2 [mem 
0xe6a00000-0xe6afffff 64bit pref] Jul 7 06:08:23.758189 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 7 06:08:23.758236 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 7 06:08:23.758288 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 7 06:08:23.758335 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 7 06:08:23.758386 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 7 06:08:23.758432 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 7 06:08:23.758511 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 7 06:08:23.758574 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 7 06:08:23.758620 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 7 06:08:23.758670 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 7 06:08:23.758717 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 7 06:08:23.758764 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 7 06:08:23.758814 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 7 06:08:23.758863 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 7 06:08:23.758917 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 7 06:08:23.758965 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 7 06:08:23.759016 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 7 06:08:23.759063 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 7 06:08:23.759113 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 7 06:08:23.759166 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 7 06:08:23.759217 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 7 06:08:23.759264 kernel: pci_bus 0000:21: resource 2 [mem 
0xe6100000-0xe61fffff 64bit pref] Jul 7 06:08:23.759316 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 7 06:08:23.759364 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 7 06:08:23.759422 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 7 06:08:23.759433 kernel: PCI: CLS 32 bytes, default 64 Jul 7 06:08:23.759440 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 06:08:23.759446 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 7 06:08:23.759452 kernel: clocksource: Switched to clocksource tsc Jul 7 06:08:23.759459 kernel: Initialise system trusted keyrings Jul 7 06:08:23.761380 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 06:08:23.761390 kernel: Key type asymmetric registered Jul 7 06:08:23.761397 kernel: Asymmetric key parser 'x509' registered Jul 7 06:08:23.761403 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 06:08:23.761411 kernel: io scheduler mq-deadline registered Jul 7 06:08:23.761418 kernel: io scheduler kyber registered Jul 7 06:08:23.761424 kernel: io scheduler bfq registered Jul 7 06:08:23.761500 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 7 06:08:23.761557 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.761612 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 7 06:08:23.761665 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.761720 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 7 06:08:23.761772 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 
06:08:23.761824 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 7 06:08:23.761876 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.761928 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 7 06:08:23.761979 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762032 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 7 06:08:23.762083 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762139 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 7 06:08:23.762197 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762254 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 7 06:08:23.762316 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762370 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 7 06:08:23.762421 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762488 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 7 06:08:23.762546 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762599 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 7 06:08:23.762651 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- 
LLActRep+ Jul 7 06:08:23.762703 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 7 06:08:23.762755 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762817 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 7 06:08:23.762871 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.762926 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 7 06:08:23.762978 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763029 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 7 06:08:23.763080 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763133 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 7 06:08:23.763190 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763242 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 7 06:08:23.763293 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763350 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 7 06:08:23.763401 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763453 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 7 06:08:23.763523 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ 
IbPresDis- LLActRep+ Jul 7 06:08:23.763576 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 7 06:08:23.763628 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763680 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 7 06:08:23.763734 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763787 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 7 06:08:23.763840 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763893 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 7 06:08:23.763945 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.763997 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 7 06:08:23.764049 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.764102 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 7 06:08:23.764156 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.764208 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 7 06:08:23.764259 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.764310 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 7 06:08:23.764364 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- 
NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.764418 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 7 06:08:23.764815 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.764881 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 7 06:08:23.764936 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.764990 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 7 06:08:23.765042 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.765096 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 7 06:08:23.765148 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.765201 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 7 06:08:23.765253 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 7 06:08:23.765264 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 06:08:23.765273 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 06:08:23.765279 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 06:08:23.765286 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 7 06:08:23.765293 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 06:08:23.765299 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 06:08:23.765354 kernel: rtc_cmos 00:01: registered as rtc0 Jul 7 06:08:23.765406 kernel: rtc_cmos 00:01: setting system clock to 2025-07-07T06:08:23 UTC (1751868503) Jul 7 06:08:23.765453 
kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 7 06:08:23.765462 kernel: intel_pstate: CPU model not supported Jul 7 06:08:23.765480 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 06:08:23.765486 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:08:23.765493 kernel: Segment Routing with IPv6 Jul 7 06:08:23.765499 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:08:23.765505 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:08:23.765514 kernel: Key type dns_resolver registered Jul 7 06:08:23.765520 kernel: IPI shorthand broadcast: enabled Jul 7 06:08:23.765526 kernel: sched_clock: Marking stable (2712130600, 175260598)->(2903298244, -15907046) Jul 7 06:08:23.765532 kernel: registered taskstats version 1 Jul 7 06:08:23.765539 kernel: Loading compiled-in X.509 certificates Jul 7 06:08:23.765545 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e' Jul 7 06:08:23.765551 kernel: Demotion targets for Node 0: null Jul 7 06:08:23.765558 kernel: Key type .fscrypt registered Jul 7 06:08:23.765564 kernel: Key type fscrypt-provisioning registered Jul 7 06:08:23.765571 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 06:08:23.765578 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:08:23.765584 kernel: ima: No architecture policies found Jul 7 06:08:23.765591 kernel: clk: Disabling unused clocks Jul 7 06:08:23.765597 kernel: Warning: unable to open an initial console. 
Jul 7 06:08:23.765603 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:08:23.765610 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:08:23.765616 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:08:23.765622 kernel: Run /init as init process
Jul 7 06:08:23.765629 kernel: with arguments:
Jul 7 06:08:23.765636 kernel: /init
Jul 7 06:08:23.765642 kernel: with environment:
Jul 7 06:08:23.765649 kernel: HOME=/
Jul 7 06:08:23.765654 kernel: TERM=linux
Jul 7 06:08:23.765661 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:08:23.765668 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:08:23.765677 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:08:23.765685 systemd[1]: Detected virtualization vmware.
Jul 7 06:08:23.765691 systemd[1]: Detected architecture x86-64.
Jul 7 06:08:23.765697 systemd[1]: Running in initrd.
Jul 7 06:08:23.765704 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:08:23.765710 systemd[1]: Hostname set to .
Jul 7 06:08:23.765716 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:08:23.765723 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:08:23.765729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:08:23.765737 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:08:23.765744 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:08:23.765751 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:08:23.765757 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:08:23.765764 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:08:23.765771 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:08:23.765778 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:08:23.765786 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:08:23.765793 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:08:23.765799 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:08:23.765806 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:08:23.765812 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:08:23.765819 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:08:23.765825 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:08:23.765832 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:08:23.765840 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:08:23.765846 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:08:23.765853 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:08:23.765860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:08:23.765866 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:08:23.765873 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:08:23.765879 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:08:23.765886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:08:23.765893 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:08:23.765900 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:08:23.765907 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:08:23.765913 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:08:23.765920 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:08:23.765927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:08:23.765933 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:08:23.765941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:08:23.765948 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:08:23.765955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:08:23.765974 systemd-journald[242]: Collecting audit messages is disabled.
Jul 7 06:08:23.765993 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:08:23.766000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:08:23.766007 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:08:23.766013 kernel: Bridge firewalling registered
Jul 7 06:08:23.766020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:08:23.766027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:23.766034 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:08:23.766041 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:08:23.766048 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:08:23.766055 systemd-journald[242]: Journal started
Jul 7 06:08:23.766070 systemd-journald[242]: Runtime Journal (/run/log/journal/81ed94b190cb475180d65314771090cf) is 4.8M, max 38.8M, 34M free.
Jul 7 06:08:23.713659 systemd-modules-load[244]: Inserted module 'overlay'
Jul 7 06:08:23.767299 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:08:23.739166 systemd-modules-load[244]: Inserted module 'br_netfilter'
Jul 7 06:08:23.769980 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:08:23.776818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:08:23.778315 systemd-tmpfiles[261]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:08:23.780706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:08:23.782535 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:08:23.782751 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:08:23.783286 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:08:23.802242 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:08:23.819294 systemd-resolved[282]: Positive Trust Anchors:
Jul 7 06:08:23.819303 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:08:23.819325 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:08:23.821430 systemd-resolved[282]: Defaulting to hostname 'linux'.
Jul 7 06:08:23.822364 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:08:23.822542 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:08:23.856487 kernel: SCSI subsystem initialized
Jul 7 06:08:23.873491 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:08:23.882487 kernel: iscsi: registered transport (tcp)
Jul 7 06:08:23.905482 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:08:23.905529 kernel: QLogic iSCSI HBA Driver
Jul 7 06:08:23.916337 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:08:23.927293 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:08:23.928331 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:08:23.951006 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:08:23.951834 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:08:23.988491 kernel: raid6: avx2x4 gen() 46680 MB/s
Jul 7 06:08:24.005483 kernel: raid6: avx2x2 gen() 52873 MB/s
Jul 7 06:08:24.022690 kernel: raid6: avx2x1 gen() 44676 MB/s
Jul 7 06:08:24.022721 kernel: raid6: using algorithm avx2x2 gen() 52873 MB/s
Jul 7 06:08:24.040702 kernel: raid6: .... xor() 32096 MB/s, rmw enabled
Jul 7 06:08:24.040738 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 06:08:24.054480 kernel: xor: automatically using best checksumming function avx
Jul 7 06:08:24.159483 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:08:24.163058 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:08:24.163973 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:08:24.182710 systemd-udevd[494]: Using default interface naming scheme 'v255'.
Jul 7 06:08:24.186082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:08:24.186696 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:08:24.202074 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Jul 7 06:08:24.215638 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:08:24.216402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:08:24.299124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:08:24.301003 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:08:24.388481 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Jul 7 06:08:24.392901 kernel: vmw_pvscsi: using 64bit dma
Jul 7 06:08:24.392927 kernel: vmw_pvscsi: max_id: 16
Jul 7 06:08:24.392935 kernel: vmw_pvscsi: setting ring_pages to 8
Jul 7 06:08:24.404682 kernel: vmw_pvscsi: enabling reqCallThreshold
Jul 7 06:08:24.404713 kernel: vmw_pvscsi: driver-based request coalescing enabled
Jul 7 06:08:24.404721 kernel: vmw_pvscsi: using MSI-X
Jul 7 06:08:24.408497 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI
Jul 7 06:08:24.417490 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Jul 7 06:08:24.420032 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Jul 7 06:08:24.420193 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Jul 7 06:08:24.421836 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:08:24.423472 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Jul 7 06:08:24.427030 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Jul 7 06:08:24.427151 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:08:24.430505 (udev-worker)[536]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Jul 7 06:08:24.436477 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Jul 7 06:08:24.437786 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:08:24.437931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:24.438794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:08:24.440530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:08:24.448019 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Jul 7 06:08:24.448135 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 7 06:08:24.448205 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Jul 7 06:08:24.448268 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Jul 7 06:08:24.448824 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Jul 7 06:08:24.453476 kernel: libata version 3.00 loaded.
Jul 7 06:08:24.454476 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:08:24.464545 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 06:08:24.465489 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 7 06:08:24.472849 kernel: ata_piix 0000:00:07.1: version 2.13
Jul 7 06:08:24.472962 kernel: scsi host1: ata_piix
Jul 7 06:08:24.473034 kernel: scsi host2: ata_piix
Jul 7 06:08:24.473093 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Jul 7 06:08:24.473103 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Jul 7 06:08:24.481515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:24.515230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Jul 7 06:08:24.520634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Jul 7 06:08:24.525054 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Jul 7 06:08:24.525175 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Jul 7 06:08:24.530670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Jul 7 06:08:24.531260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:08:24.569481 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 06:08:24.639501 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Jul 7 06:08:24.646996 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Jul 7 06:08:24.672624 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Jul 7 06:08:24.672746 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 06:08:24.683504 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 06:08:24.996819 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:08:24.997220 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:08:24.997349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:08:24.997592 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:08:24.998262 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:08:25.013432 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:08:25.585447 disk-uuid[641]: The operation has completed successfully.
Jul 7 06:08:25.585648 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 7 06:08:25.625169 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:08:25.625252 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:08:25.639189 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:08:25.648028 sh[673]: Success
Jul 7 06:08:25.664598 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:08:25.664630 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:08:25.665789 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:08:25.672479 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Jul 7 06:08:25.761628 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:08:25.762799 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:08:25.771526 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:08:25.783486 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:08:25.785506 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (685)
Jul 7 06:08:25.787749 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:08:25.787762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:08:25.787770 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:08:25.796688 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:08:25.796999 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:08:25.797584 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Jul 7 06:08:25.799534 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:08:25.828822 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (708)
Jul 7 06:08:25.828856 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:08:25.828864 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:08:25.830060 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 06:08:25.841507 kernel: BTRFS info (device sda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:08:25.841810 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:08:25.842456 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:08:25.886523 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Jul 7 06:08:25.887386 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:08:25.948194 ignition[727]: Ignition 2.21.0
Jul 7 06:08:25.948201 ignition[727]: Stage: fetch-offline
Jul 7 06:08:25.948224 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:25.948229 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 7 06:08:25.948371 ignition[727]: parsed url from cmdline: ""
Jul 7 06:08:25.948373 ignition[727]: no config URL provided
Jul 7 06:08:25.948376 ignition[727]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:08:25.948380 ignition[727]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:08:25.948913 ignition[727]: config successfully fetched
Jul 7 06:08:25.948931 ignition[727]: parsing config with SHA512: 27d24c53a317bfdfb301f1306ad23dd68032b3a918fa007c7ec1592a9412629040bf219d3ef6f0c233182ff706cc0a8ec4d8e7368b83738119caa709483b4b01
Jul 7 06:08:25.954006 unknown[727]: fetched base config from "system"
Jul 7 06:08:25.954012 unknown[727]: fetched user config from "vmware"
Jul 7 06:08:25.954612 ignition[727]: fetch-offline: fetch-offline passed
Jul 7 06:08:25.954751 ignition[727]: Ignition finished successfully
Jul 7 06:08:25.955682 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:08:25.968869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:08:25.969943 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:08:25.992939 systemd-networkd[865]: lo: Link UP
Jul 7 06:08:25.992945 systemd-networkd[865]: lo: Gained carrier
Jul 7 06:08:25.993988 systemd-networkd[865]: Enumeration completed
Jul 7 06:08:25.994120 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:08:25.994289 systemd-networkd[865]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Jul 7 06:08:25.994387 systemd[1]: Reached target network.target - Network.
Jul 7 06:08:25.994644 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:08:25.996554 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:08:25.996734 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Jul 7 06:08:25.997748 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Jul 7 06:08:25.997422 systemd-networkd[865]: ens192: Link UP
Jul 7 06:08:25.997426 systemd-networkd[865]: ens192: Gained carrier
Jul 7 06:08:26.015105 ignition[868]: Ignition 2.21.0
Jul 7 06:08:26.015114 ignition[868]: Stage: kargs
Jul 7 06:08:26.015194 ignition[868]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:26.015200 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 7 06:08:26.016082 ignition[868]: kargs: kargs passed
Jul 7 06:08:26.016113 ignition[868]: Ignition finished successfully
Jul 7 06:08:26.017703 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:08:26.018558 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:08:26.037228 ignition[876]: Ignition 2.21.0
Jul 7 06:08:26.037239 ignition[876]: Stage: disks
Jul 7 06:08:26.037321 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:26.037327 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 7 06:08:26.038308 ignition[876]: disks: disks passed
Jul 7 06:08:26.038754 ignition[876]: Ignition finished successfully
Jul 7 06:08:26.039555 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:08:26.039919 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:08:26.040053 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:08:26.040252 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:08:26.040438 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:08:26.040621 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:08:26.041277 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:08:26.057040 systemd-fsck[886]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jul 7 06:08:26.058202 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:08:26.059038 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:08:26.133475 kernel: EXT4-fs (sda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 06:08:26.133949 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:08:26.134375 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:08:26.135407 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:08:26.137511 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:08:26.137884 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:08:26.138055 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:08:26.138071 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:08:26.142143 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:08:26.142941 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:08:26.150478 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (894)
Jul 7 06:08:26.152843 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:08:26.152862 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:08:26.154454 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 06:08:26.158158 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:08:26.205149 initrd-setup-root[918]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:08:26.207966 initrd-setup-root[925]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:08:26.210318 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:08:26.212779 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:08:26.299004 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:08:26.299812 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:08:26.301529 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:08:26.309497 kernel: BTRFS info (device sda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:08:26.324480 ignition[1007]: INFO : Ignition 2.21.0
Jul 7 06:08:26.324480 ignition[1007]: INFO : Stage: mount
Jul 7 06:08:26.324480 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:26.324480 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 7 06:08:26.325021 ignition[1007]: INFO : mount: mount passed
Jul 7 06:08:26.325021 ignition[1007]: INFO : Ignition finished successfully
Jul 7 06:08:26.325843 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:08:26.326907 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:08:26.328161 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:08:26.783477 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:08:26.784402 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:08:26.799491 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1019)
Jul 7 06:08:26.799520 kernel: BTRFS info (device sda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:08:26.801968 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:08:26.801984 kernel: BTRFS info (device sda6): using free-space-tree
Jul 7 06:08:26.806027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:08:26.820273 ignition[1036]: INFO : Ignition 2.21.0
Jul 7 06:08:26.821073 ignition[1036]: INFO : Stage: files
Jul 7 06:08:26.821073 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:26.821073 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 7 06:08:26.821529 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:08:26.829883 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:08:26.829883 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:08:26.843916 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:08:26.844200 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:08:26.844574 unknown[1036]: wrote ssh authorized keys file for user: core
Jul 7 06:08:26.844959 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:08:26.847877 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 06:08:26.848211 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 06:08:26.885313 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:08:27.034242 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 06:08:27.034242 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:08:27.034665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:08:27.034665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:08:27.034665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:08:27.034665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:08:27.034665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:08:27.034665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:08:27.034665 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:08:27.035882 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:08:27.036035 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:08:27.036035 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:08:27.038227 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:08:27.038227 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:08:27.038620 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 06:08:27.065541 systemd-networkd[865]: ens192: Gained IPv6LL
Jul 7 06:08:27.729224 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:08:28.063581 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:08:28.064064 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 7 06:08:28.064863 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 7 06:08:28.065151 ignition[1036]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 06:08:28.065151 ignition[1036]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:08:28.065596 ignition[1036]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:08:28.065883 ignition[1036]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 06:08:28.065883 ignition[1036]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 7 06:08:28.065883 ignition[1036]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:08:28.065883 ignition[1036]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:08:28.065883 ignition[1036]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 7 06:08:28.065883 ignition[1036]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:08:28.088098 ignition[1036]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:08:28.090018 ignition[1036]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:08:28.090319 ignition[1036]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:08:28.090319 ignition[1036]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:08:28.090319 ignition[1036]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:08:28.091602 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:08:28.091602 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:08:28.091602 ignition[1036]: INFO : files: files passed
Jul 7 06:08:28.091602 ignition[1036]: INFO : Ignition finished successfully
Jul 7 06:08:28.091861 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:08:28.092533 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:08:28.093543 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:08:28.101913 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:08:28.101913 initrd-setup-root-after-ignition[1067]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:08:28.102971 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:08:28.103970 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:08:28.104188 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:08:28.104853 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:08:28.105154 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:08:28.105210 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:08:28.136610 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 06:08:28.136678 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:08:28.136958 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:08:28.137066 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:08:28.137404 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:08:28.137913 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:08:28.148498 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:08:28.149348 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:08:28.161058 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:08:28.161338 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:08:28.161682 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:08:28.161938 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:08:28.162105 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:08:28.162497 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:08:28.162752 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:08:28.162967 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jul 7 06:08:28.163250 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:08:28.163540 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:08:28.163783 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:08:28.164069 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:08:28.164328 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:08:28.164630 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:08:28.164895 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:08:28.165121 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:08:28.165350 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:08:28.165520 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:08:28.165868 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:08:28.166138 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:08:28.166370 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:08:28.166544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:08:28.166816 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:08:28.166881 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:08:28.167297 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:08:28.167369 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:08:28.167815 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:08:28.168025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:08:28.168192 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 7 06:08:28.168508 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:08:28.168744 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:08:28.168972 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:08:28.169023 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:08:28.169370 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:08:28.169420 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:08:28.169796 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:08:28.169865 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:08:28.170282 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:08:28.170345 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:08:28.171142 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:08:28.171363 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:08:28.171556 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:08:28.173519 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:08:28.173645 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:08:28.173747 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:08:28.173937 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:08:28.174017 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:08:28.176222 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:08:28.178514 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 7 06:08:28.187061 ignition[1092]: INFO : Ignition 2.21.0 Jul 7 06:08:28.187546 ignition[1092]: INFO : Stage: umount Jul 7 06:08:28.187546 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:08:28.187546 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 7 06:08:28.187089 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:08:28.188604 ignition[1092]: INFO : umount: umount passed Jul 7 06:08:28.189726 ignition[1092]: INFO : Ignition finished successfully Jul 7 06:08:28.190700 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:08:28.190779 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:08:28.191019 systemd[1]: Stopped target network.target - Network. Jul 7 06:08:28.191134 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:08:28.191161 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:08:28.191308 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:08:28.191330 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:08:28.191489 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:08:28.191518 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:08:28.191663 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:08:28.191684 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:08:28.191879 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:08:28.192138 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:08:28.193593 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:08:28.193665 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:08:28.195056 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Jul 7 06:08:28.195194 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:08:28.195218 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:08:28.196010 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:08:28.200257 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:08:28.200325 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:08:28.201016 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 06:08:28.201108 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 06:08:28.201267 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:08:28.201285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:08:28.201884 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:08:28.201993 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:08:28.202018 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:08:28.202142 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 7 06:08:28.202164 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jul 7 06:08:28.202285 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:08:28.202309 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:08:28.202579 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:08:28.202601 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:08:28.202856 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:08:28.205676 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 7 06:08:28.212756 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:08:28.212981 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:08:28.213708 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:08:28.213874 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:08:28.214130 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:08:28.214256 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:08:28.214537 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:08:28.214561 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:08:28.214716 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:08:28.214740 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:08:28.214875 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:08:28.214900 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:08:28.216595 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:08:28.216707 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 06:08:28.216737 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:08:28.217964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:08:28.218116 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:08:28.218438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:08:28.218470 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:08:28.219257 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 7 06:08:28.219307 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:08:28.221651 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:08:28.221713 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:08:28.270459 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:08:28.270554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:08:28.270818 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:08:28.270940 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:08:28.270968 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:08:28.271550 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:08:28.285199 systemd[1]: Switching root. Jul 7 06:08:28.328140 systemd-journald[242]: Journal stopped Jul 7 06:08:30.038002 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). Jul 7 06:08:30.038029 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:08:30.038037 kernel: SELinux: policy capability open_perms=1 Jul 7 06:08:30.038043 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:08:30.038048 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:08:30.038055 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:08:30.038062 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:08:30.038068 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:08:30.038073 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:08:30.038079 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 06:08:30.038085 systemd[1]: Successfully loaded SELinux policy in 48.159ms. Jul 7 06:08:30.038092 kernel: audit: type=1403 audit(1751868509.419:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:08:30.038100 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.217ms. 
Jul 7 06:08:30.038107 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:08:30.038114 systemd[1]: Detected virtualization vmware. Jul 7 06:08:30.038120 systemd[1]: Detected architecture x86-64. Jul 7 06:08:30.038128 systemd[1]: Detected first boot. Jul 7 06:08:30.038135 systemd[1]: Initializing machine ID from random generator. Jul 7 06:08:30.038142 zram_generator::config[1136]: No configuration found. Jul 7 06:08:30.038246 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 7 06:08:30.038263 kernel: Guest personality initialized and is active Jul 7 06:08:30.038274 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 06:08:30.038284 kernel: Initialized host personality Jul 7 06:08:30.038296 kernel: NET: Registered PF_VSOCK protocol family Jul 7 06:08:30.038306 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:08:30.038315 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 7 06:08:30.038322 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jul 7 06:08:30.038329 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 06:08:30.038336 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:08:30.038342 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:08:30.038354 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jul 7 06:08:30.038362 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:08:30.038369 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:08:30.038375 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:08:30.038382 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:08:30.038389 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:08:30.038396 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:08:30.038404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:08:30.038411 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:08:30.038418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:08:30.038426 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:08:30.038433 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:08:30.038440 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:08:30.038447 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:08:30.038454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:08:30.038462 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 06:08:30.038481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:08:30.038488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:08:30.038495 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jul 7 06:08:30.042323 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:08:30.042334 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:08:30.042342 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:08:30.042348 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:08:30.042358 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:08:30.042364 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:08:30.042371 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:08:30.042378 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:08:30.042385 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:08:30.042393 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 06:08:30.042400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:08:30.042407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:08:30.042414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:08:30.042421 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:08:30.042429 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:08:30.042436 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:08:30.042443 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:08:30.042451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.042458 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:08:30.044163 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Jul 7 06:08:30.044176 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:08:30.044184 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:08:30.044191 systemd[1]: Reached target machines.target - Containers. Jul 7 06:08:30.044198 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:08:30.044205 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jul 7 06:08:30.044214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:08:30.044222 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:08:30.044229 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:08:30.044236 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:08:30.044243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:08:30.044250 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:08:30.044257 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:08:30.044264 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 06:08:30.044272 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:08:30.044280 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:08:30.044287 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 06:08:30.044293 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 7 06:08:30.044301 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:30.044308 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:08:30.044315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:08:30.044322 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:08:30.044329 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:08:30.044337 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 06:08:30.044344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:08:30.044354 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:08:30.044366 systemd[1]: Stopped verity-setup.service. Jul 7 06:08:30.044374 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.044381 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:08:30.044392 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:08:30.044400 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:08:30.044408 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:08:30.044415 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:08:30.044422 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:08:30.044429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:08:30.044436 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jul 7 06:08:30.044443 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:08:30.044450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:08:30.044457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:08:30.045158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:08:30.045177 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:08:30.045185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:08:30.045193 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:08:30.045200 kernel: fuse: init (API version 7.41) Jul 7 06:08:30.045207 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:08:30.045214 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:08:30.045221 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:08:30.045228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 06:08:30.045236 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:08:30.045244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:08:30.045253 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:08:30.045262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:08:30.045270 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:08:30.045276 kernel: loop: module loaded Jul 7 06:08:30.045283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 7 06:08:30.045292 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:08:30.045317 systemd-journald[1233]: Collecting audit messages is disabled. Jul 7 06:08:30.045335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:08:30.045345 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:08:30.045352 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 06:08:30.045360 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:08:30.045367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:08:30.045374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:08:30.045381 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 06:08:30.045388 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:08:30.045395 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:08:30.045402 kernel: ACPI: bus type drm_connector registered Jul 7 06:08:30.045411 systemd-journald[1233]: Journal started Jul 7 06:08:30.045426 systemd-journald[1233]: Runtime Journal (/run/log/journal/cff6d9cc941e4b698487ace4fa3d52f1) is 4.8M, max 38.8M, 34M free. Jul 7 06:08:29.817753 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:08:29.830660 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 7 06:08:29.830902 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:08:30.047714 jq[1206]: true Jul 7 06:08:30.050516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:08:30.050534 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jul 7 06:08:30.050684 jq[1244]: true Jul 7 06:08:30.057554 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:08:30.057584 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:08:30.058262 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:08:30.058391 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:08:30.058671 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:08:30.058840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:08:30.063535 kernel: loop0: detected capacity change from 0 to 2960 Jul 7 06:08:30.069609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:08:30.080033 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:08:30.083548 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:08:30.087691 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:08:30.087833 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 06:08:30.114056 systemd-journald[1233]: Time spent on flushing to /var/log/journal/cff6d9cc941e4b698487ace4fa3d52f1 is 34.665ms for 1760 entries. Jul 7 06:08:30.114056 systemd-journald[1233]: System Journal (/var/log/journal/cff6d9cc941e4b698487ace4fa3d52f1) is 8M, max 584.8M, 576.8M free. Jul 7 06:08:30.192057 systemd-journald[1233]: Received client request to flush runtime journal. Jul 7 06:08:30.192098 kernel: loop1: detected capacity change from 0 to 224512 Jul 7 06:08:30.118558 ignition[1260]: Ignition 2.21.0 Jul 7 06:08:30.118493 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 06:08:30.119136 ignition[1260]: deleting config from guestinfo properties Jul 7 06:08:30.128694 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 7 06:08:30.126658 ignition[1260]: Successfully deleted config Jul 7 06:08:30.130926 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:08:30.131679 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jul 7 06:08:30.181014 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:08:30.193426 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:08:30.196759 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jul 7 06:08:30.196769 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jul 7 06:08:30.200949 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:08:30.242483 kernel: loop2: detected capacity change from 0 to 113872 Jul 7 06:08:30.278608 kernel: loop3: detected capacity change from 0 to 146240 Jul 7 06:08:30.339484 kernel: loop4: detected capacity change from 0 to 2960 Jul 7 06:08:30.350485 kernel: loop5: detected capacity change from 0 to 224512 Jul 7 06:08:30.387495 kernel: loop6: detected capacity change from 0 to 113872 Jul 7 06:08:30.403757 kernel: loop7: detected capacity change from 0 to 146240 Jul 7 06:08:30.435207 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jul 7 06:08:30.435482 (sd-merge)[1311]: Merged extensions into '/usr'. Jul 7 06:08:30.443650 systemd[1]: Reload requested from client PID 1259 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:08:30.443661 systemd[1]: Reloading... Jul 7 06:08:30.508480 zram_generator::config[1344]: No configuration found. Jul 7 06:08:30.581022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 7 06:08:30.590310 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 7 06:08:30.634988 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:08:30.635326 systemd[1]: Reloading finished in 191 ms. Jul 7 06:08:30.647042 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:08:30.652451 systemd[1]: Starting ensure-sysext.service... Jul 7 06:08:30.654905 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:08:30.664602 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 06:08:30.664786 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 06:08:30.664973 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:08:30.665199 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:08:30.665758 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:08:30.665970 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Jul 7 06:08:30.666039 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. Jul 7 06:08:30.675652 systemd[1]: Reload requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:08:30.675665 systemd[1]: Reloading... Jul 7 06:08:30.682848 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:08:30.682855 systemd-tmpfiles[1394]: Skipping /boot Jul 7 06:08:30.689798 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 7 06:08:30.689923 systemd-tmpfiles[1394]: Skipping /boot Jul 7 06:08:30.712505 zram_generator::config[1422]: No configuration found. Jul 7 06:08:30.762137 ldconfig[1252]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:08:30.790961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:30.799127 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 7 06:08:30.844849 systemd[1]: Reloading finished in 168 ms. Jul 7 06:08:30.871732 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:08:30.872082 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:08:30.875169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:08:30.881315 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:08:30.883781 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:08:30.885536 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:08:30.887519 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:08:30.888766 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:08:30.893161 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:08:30.896606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.899631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 7 06:08:30.901602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:08:30.904367 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:08:30.904580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:08:30.904670 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:30.904734 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.908354 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:08:30.910247 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.910355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:08:30.910408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:30.910460 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.914577 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.918730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:08:30.918958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 7 06:08:30.919537 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:08:30.919640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:08:30.924520 systemd[1]: Finished ensure-sysext.service. Jul 7 06:08:30.928650 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:08:30.931745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:08:30.932120 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:08:30.939761 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:08:30.940542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:08:30.940689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:08:30.946616 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:08:30.946756 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:08:30.946968 systemd-udevd[1488]: Using default interface naming scheme 'v255'. Jul 7 06:08:30.947960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:08:30.948078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:08:30.948361 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:08:30.948490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:08:30.949896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 7 06:08:30.949930 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:08:30.954936 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:08:30.955225 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:08:30.965158 augenrules[1521]: No rules Jul 7 06:08:30.965229 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:08:30.966201 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:08:30.966345 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:08:30.974904 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:08:30.976549 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:08:30.977710 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:08:31.102616 systemd-networkd[1529]: lo: Link UP Jul 7 06:08:31.102621 systemd-networkd[1529]: lo: Gained carrier Jul 7 06:08:31.103035 systemd-networkd[1529]: Enumeration completed Jul 7 06:08:31.103082 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:08:31.104285 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 06:08:31.107567 systemd-networkd[1529]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 7 06:08:31.110506 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 7 06:08:31.110646 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 7 06:08:31.109796 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 7 06:08:31.113550 systemd-networkd[1529]: ens192: Link UP Jul 7 06:08:31.113667 systemd-networkd[1529]: ens192: Gained carrier Jul 7 06:08:31.127602 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:08:31.127793 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:08:31.137776 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 06:08:31.141129 systemd-resolved[1485]: Positive Trust Anchors: Jul 7 06:08:31.141136 systemd-resolved[1485]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:08:31.141160 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:08:31.144238 systemd-resolved[1485]: Defaulting to hostname 'linux'. Jul 7 06:08:31.145366 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:08:31.145545 systemd[1]: Reached target network.target - Network. Jul 7 06:08:31.145643 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:08:31.145770 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:08:31.145922 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:08:31.146071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 7 06:08:31.146189 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 06:08:31.146378 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:08:31.146559 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:08:31.146680 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:08:31.146810 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:08:31.146831 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:08:31.146927 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:08:31.150521 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:08:31.151636 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:08:31.155210 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 06:08:31.156593 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 06:08:31.156720 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 06:08:31.159607 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:08:31.160207 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 06:08:31.160991 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:08:31.164515 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 06:08:31.164536 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:08:31.164881 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:08:31.165010 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jul 7 06:08:31.165027 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:08:31.166475 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:08:31.167666 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:08:31.171458 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:08:31.173152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:08:31.176585 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:08:31.176704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:08:31.183645 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 06:08:31.185521 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:08:31.188519 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:08:31.190604 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:08:31.192609 jq[1582]: false Jul 7 06:08:31.197086 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:08:31.199592 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:08:31.200188 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:08:31.200652 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:08:31.206221 extend-filesystems[1583]: Found /dev/sda6 Jul 7 06:08:31.206857 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 7 06:08:31.209146 extend-filesystems[1583]: Found /dev/sda9 Jul 7 06:08:31.210601 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing passwd entry cache Jul 7 06:08:31.209507 oslogin_cache_refresh[1584]: Refreshing passwd entry cache Jul 7 06:08:31.212622 extend-filesystems[1583]: Checking size of /dev/sda9 Jul 7 06:08:31.212728 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:08:31.215552 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jul 7 06:08:31.218763 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:08:31.219295 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting users, quitting Jul 7 06:08:31.219295 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:08:31.219295 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing group entry cache Jul 7 06:08:31.218973 oslogin_cache_refresh[1584]: Failure getting users, quitting Jul 7 06:08:31.218984 oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:08:31.219016 oslogin_cache_refresh[1584]: Refreshing group entry cache Jul 7 06:08:31.219666 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:08:31.219809 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:08:31.221697 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:08:31.226094 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 7 06:08:31.226183 oslogin_cache_refresh[1584]: Failure getting groups, quitting Jul 7 06:08:31.226727 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting groups, quitting Jul 7 06:08:31.226727 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:08:31.226190 oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:08:31.228873 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 06:08:31.229271 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 06:08:31.233254 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:08:31.234887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:08:31.238575 jq[1596]: true Jul 7 06:09:57.721854 systemd-timesyncd[1504]: Contacted time server 72.30.35.89:123 (0.flatcar.pool.ntp.org). Jul 7 06:09:57.722020 systemd-timesyncd[1504]: Initial clock synchronization to Mon 2025-07-07 06:09:57.721802 UTC. Jul 7 06:09:57.722440 systemd-resolved[1485]: Clock change detected. Flushing caches. Jul 7 06:09:57.730501 extend-filesystems[1583]: Old size kept for /dev/sda9 Jul 7 06:09:57.727595 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:09:57.728120 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:09:57.732074 update_engine[1593]: I20250707 06:09:57.731187 1593 main.cc:92] Flatcar Update Engine starting Jul 7 06:09:57.737416 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jul 7 06:09:57.743744 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... 
Jul 7 06:09:57.753141 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:09:57.753817 dbus-daemon[1579]: [system] SELinux support is enabled Jul 7 06:09:57.753912 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:09:57.755552 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:09:57.755573 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:09:57.755741 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:09:57.755767 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:09:57.762357 jq[1617]: true Jul 7 06:09:57.762756 tar[1605]: linux-amd64/LICENSE Jul 7 06:09:57.762756 tar[1605]: linux-amd64/helm Jul 7 06:09:57.767115 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jul 7 06:09:57.771316 update_engine[1593]: I20250707 06:09:57.770790 1593 update_check_scheduler.cc:74] Next update check in 8m10s Jul 7 06:09:57.771449 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:09:57.771624 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:09:57.778234 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:09:57.804342 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. 
Jul 7 06:09:57.808597 unknown[1625]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jul 7 06:09:57.810773 unknown[1625]: Core dump limit set to -1 Jul 7 06:09:57.828147 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:09:57.838134 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 06:09:57.872530 systemd-logind[1592]: New seat seat0. Jul 7 06:09:57.873583 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:09:57.882647 bash[1650]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:09:57.886316 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:09:57.886849 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:09:57.895087 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 06:09:57.924454 kernel: ACPI: button: Power Button [PWRF] Jul 7 06:09:57.977548 sshd_keygen[1609]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:09:58.020624 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:09:58.024932 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:09:58.040199 containerd[1626]: time="2025-07-07T06:09:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 06:09:58.041453 containerd[1626]: time="2025-07-07T06:09:58.041439195Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 06:09:58.044184 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 7 06:09:58.050631 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:09:58.050818 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jul 7 06:09:58.052632 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:09:58.061962 containerd[1626]: time="2025-07-07T06:09:58.061937710Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.824µs" Jul 7 06:09:58.061962 containerd[1626]: time="2025-07-07T06:09:58.061957419Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 06:09:58.062023 containerd[1626]: time="2025-07-07T06:09:58.061969006Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 06:09:58.062076 containerd[1626]: time="2025-07-07T06:09:58.062055745Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 06:09:58.069624 containerd[1626]: time="2025-07-07T06:09:58.069593488Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 06:09:58.069672 containerd[1626]: time="2025-07-07T06:09:58.069640986Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:09:58.069734 containerd[1626]: time="2025-07-07T06:09:58.069717787Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:09:58.069762 containerd[1626]: time="2025-07-07T06:09:58.069732295Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:09:58.069947 containerd[1626]: time="2025-07-07T06:09:58.069924627Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:09:58.069971 
containerd[1626]: time="2025-07-07T06:09:58.069944432Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:09:58.069971 containerd[1626]: time="2025-07-07T06:09:58.069958511Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:09:58.069971 containerd[1626]: time="2025-07-07T06:09:58.069964598Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 06:09:58.070032 containerd[1626]: time="2025-07-07T06:09:58.070018819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 06:09:58.070175 containerd[1626]: time="2025-07-07T06:09:58.070160628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 06:09:58.070197 containerd[1626]: time="2025-07-07T06:09:58.070179696Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 06:09:58.070197 containerd[1626]: time="2025-07-07T06:09:58.070186902Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 06:09:58.070242 containerd[1626]: time="2025-07-07T06:09:58.070206627Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 06:09:58.070520 containerd[1626]: time="2025-07-07T06:09:58.070386849Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 06:09:58.070520 containerd[1626]: time="2025-07-07T06:09:58.070432476Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:09:58.071678 containerd[1626]: time="2025-07-07T06:09:58.071661080Z" 
level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 06:09:58.071808 containerd[1626]: time="2025-07-07T06:09:58.071796256Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.071944771Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.071959859Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.071967874Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.071974243Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.071981914Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.071988811Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.071995957Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.072001926Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.072007963Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 06:09:58.072150 containerd[1626]: time="2025-07-07T06:09:58.072015338Z" level=info 
msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072442238Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072458773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072468854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072475455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072481854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072487966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072494405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072503672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072514153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072520469Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072526709Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 
06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072564716Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072573400Z" level=info msg="Start snapshots syncer" Jul 7 06:09:58.073201 containerd[1626]: time="2025-07-07T06:09:58.072585757Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 06:09:58.073405 containerd[1626]: time="2025-07-07T06:09:58.072720896Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/c
di\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 06:09:58.073405 containerd[1626]: time="2025-07-07T06:09:58.072756853Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:09:58.074478 containerd[1626]: time="2025-07-07T06:09:58.074463999Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:09:58.074766 containerd[1626]: time="2025-07-07T06:09:58.074754329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:09:58.074936 containerd[1626]: time="2025-07-07T06:09:58.074925956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:09:58.074975 containerd[1626]: time="2025-07-07T06:09:58.074967850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:09:58.075022 containerd[1626]: time="2025-07-07T06:09:58.075011280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:09:58.075100 containerd[1626]: time="2025-07-07T06:09:58.075064369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:09:58.075398 containerd[1626]: time="2025-07-07T06:09:58.075267089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:09:58.075398 containerd[1626]: time="2025-07-07T06:09:58.075279388Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:09:58.075398 containerd[1626]: 
time="2025-07-07T06:09:58.075297993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:09:58.075398 containerd[1626]: time="2025-07-07T06:09:58.075306444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:09:58.075398 containerd[1626]: time="2025-07-07T06:09:58.075314363Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:09:58.076254 containerd[1626]: time="2025-07-07T06:09:58.075951591Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:09:58.076767 containerd[1626]: time="2025-07-07T06:09:58.076755189Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:09:58.076922 containerd[1626]: time="2025-07-07T06:09:58.076912030Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:09:58.076960 containerd[1626]: time="2025-07-07T06:09:58.076952513Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:09:58.076990 containerd[1626]: time="2025-07-07T06:09:58.076984119Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:09:58.077152 containerd[1626]: time="2025-07-07T06:09:58.077143150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:09:58.077199 containerd[1626]: time="2025-07-07T06:09:58.077190564Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:09:58.077236 containerd[1626]: time="2025-07-07T06:09:58.077230309Z" level=info msg="runtime interface created" Jul 7 
06:09:58.077263 containerd[1626]: time="2025-07-07T06:09:58.077258339Z" level=info msg="created NRI interface" Jul 7 06:09:58.077330 containerd[1626]: time="2025-07-07T06:09:58.077322388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:09:58.078111 containerd[1626]: time="2025-07-07T06:09:58.077482153Z" level=info msg="Connect containerd service" Jul 7 06:09:58.078111 containerd[1626]: time="2025-07-07T06:09:58.077512361Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:09:58.078111 containerd[1626]: time="2025-07-07T06:09:58.077935962Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:09:58.084993 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:09:58.086763 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:09:58.090260 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 06:09:58.091486 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:09:58.096035 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:09:58.213132 (udev-worker)[1555]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 7 06:09:58.219233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 06:09:58.224116 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 06:09:58.298415 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 06:09:58.359112 containerd[1626]: time="2025-07-07T06:09:58.359084231Z" level=info msg="Start subscribing containerd event" Jul 7 06:09:58.359178 containerd[1626]: time="2025-07-07T06:09:58.359119576Z" level=info msg="Start recovering state" Jul 7 06:09:58.359178 containerd[1626]: time="2025-07-07T06:09:58.359172931Z" level=info msg="Start event monitor" Jul 7 06:09:58.359217 containerd[1626]: time="2025-07-07T06:09:58.359184201Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:09:58.359217 containerd[1626]: time="2025-07-07T06:09:58.359191818Z" level=info msg="Start streaming server" Jul 7 06:09:58.359217 containerd[1626]: time="2025-07-07T06:09:58.359199305Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 06:09:58.359217 containerd[1626]: time="2025-07-07T06:09:58.359205565Z" level=info msg="runtime interface starting up..." Jul 7 06:09:58.359217 containerd[1626]: time="2025-07-07T06:09:58.359208577Z" level=info msg="starting plugins..." Jul 7 06:09:58.359217 containerd[1626]: time="2025-07-07T06:09:58.359216630Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:09:58.359682 containerd[1626]: time="2025-07-07T06:09:58.359670001Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:09:58.359724 containerd[1626]: time="2025-07-07T06:09:58.359714631Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:09:58.359805 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 7 06:09:58.361123 containerd[1626]: time="2025-07-07T06:09:58.360559424Z" level=info msg="containerd successfully booted in 0.320634s" Jul 7 06:09:58.437747 tar[1605]: linux-amd64/README.md Jul 7 06:09:58.445380 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:09:58.706411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:09:59.244217 systemd-networkd[1529]: ens192: Gained IPv6LL Jul 7 06:09:59.245908 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:09:59.246950 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:09:59.248497 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jul 7 06:09:59.249902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:09:59.252125 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:09:59.274010 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:09:59.283190 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:09:59.283358 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jul 7 06:09:59.283710 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:10:00.680566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:00.680946 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:10:00.683395 systemd[1]: Startup finished in 2.754s (kernel) + 5.822s (initrd) + 4.828s (userspace) = 13.405s. 
Jul 7 06:10:00.687331 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:00.711223 login[1711]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Jul 7 06:10:00.711539 login[1709]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:10:00.719146 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:10:00.720399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:10:00.722911 systemd-logind[1592]: New session 1 of user core. Jul 7 06:10:00.737107 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:10:00.739352 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:10:00.745373 (systemd)[1810]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:10:00.747487 systemd-logind[1592]: New session c1 of user core. Jul 7 06:10:00.843117 systemd[1810]: Queued start job for default target default.target. Jul 7 06:10:00.847866 systemd[1810]: Created slice app.slice - User Application Slice. Jul 7 06:10:00.847884 systemd[1810]: Reached target paths.target - Paths. Jul 7 06:10:00.847909 systemd[1810]: Reached target timers.target - Timers. Jul 7 06:10:00.850112 systemd[1810]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:10:00.855365 systemd[1810]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:10:00.855396 systemd[1810]: Reached target sockets.target - Sockets. Jul 7 06:10:00.855419 systemd[1810]: Reached target basic.target - Basic System. Jul 7 06:10:00.855442 systemd[1810]: Reached target default.target - Main User Target. Jul 7 06:10:00.855458 systemd[1810]: Startup finished in 103ms. Jul 7 06:10:00.855530 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jul 7 06:10:00.865213 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:10:01.697525 kubelet[1803]: E0707 06:10:01.697499 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:01.699073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:01.699163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:01.699495 systemd[1]: kubelet.service: Consumed 656ms CPU time, 268.2M memory peak. Jul 7 06:10:01.712669 login[1711]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:10:01.716118 systemd-logind[1592]: New session 2 of user core. Jul 7 06:10:01.726190 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:10:11.941611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:10:11.942936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:12.403742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:10:12.406868 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:12.427742 kubelet[1852]: E0707 06:10:12.427698 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:12.429827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:12.429911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:12.430263 systemd[1]: kubelet.service: Consumed 111ms CPU time, 110.2M memory peak. Jul 7 06:10:22.441517 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 06:10:22.442707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:22.816723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:22.826324 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:22.855964 kubelet[1867]: E0707 06:10:22.855939 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:22.857207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:22.857305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:22.857631 systemd[1]: kubelet.service: Consumed 100ms CPU time, 110.2M memory peak. 
Jul 7 06:10:28.001269 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:10:28.003213 systemd[1]: Started sshd@0-139.178.70.102:22-139.178.68.195:40388.service - OpenSSH per-connection server daemon (139.178.68.195:40388). Jul 7 06:10:28.083942 sshd[1874]: Accepted publickey for core from 139.178.68.195 port 40388 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:10:28.084724 sshd-session[1874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:28.087664 systemd-logind[1592]: New session 3 of user core. Jul 7 06:10:28.094218 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:10:28.148257 systemd[1]: Started sshd@1-139.178.70.102:22-139.178.68.195:40872.service - OpenSSH per-connection server daemon (139.178.68.195:40872). Jul 7 06:10:28.187387 sshd[1879]: Accepted publickey for core from 139.178.68.195 port 40872 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:10:28.187940 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:28.191983 systemd-logind[1592]: New session 4 of user core. Jul 7 06:10:28.197222 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:10:28.245818 sshd[1881]: Connection closed by 139.178.68.195 port 40872 Jul 7 06:10:28.245533 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:28.255809 systemd[1]: sshd@1-139.178.70.102:22-139.178.68.195:40872.service: Deactivated successfully. Jul 7 06:10:28.256947 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:10:28.258162 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:10:28.259107 systemd[1]: Started sshd@2-139.178.70.102:22-139.178.68.195:40888.service - OpenSSH per-connection server daemon (139.178.68.195:40888). Jul 7 06:10:28.259763 systemd-logind[1592]: Removed session 4. 
Jul 7 06:10:28.297687 sshd[1887]: Accepted publickey for core from 139.178.68.195 port 40888 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:10:28.298444 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:28.301616 systemd-logind[1592]: New session 5 of user core. Jul 7 06:10:28.308158 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:10:28.353858 sshd[1889]: Connection closed by 139.178.68.195 port 40888 Jul 7 06:10:28.354714 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:28.360666 systemd[1]: sshd@2-139.178.70.102:22-139.178.68.195:40888.service: Deactivated successfully. Jul 7 06:10:28.361784 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:10:28.362467 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:10:28.363549 systemd-logind[1592]: Removed session 5. Jul 7 06:10:28.364767 systemd[1]: Started sshd@3-139.178.70.102:22-139.178.68.195:40902.service - OpenSSH per-connection server daemon (139.178.68.195:40902). Jul 7 06:10:28.406170 sshd[1895]: Accepted publickey for core from 139.178.68.195 port 40902 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:10:28.406871 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:28.409518 systemd-logind[1592]: New session 6 of user core. Jul 7 06:10:28.420376 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:10:28.468106 sshd[1897]: Connection closed by 139.178.68.195 port 40902 Jul 7 06:10:28.468714 sshd-session[1895]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:28.477668 systemd[1]: sshd@3-139.178.70.102:22-139.178.68.195:40902.service: Deactivated successfully. Jul 7 06:10:28.478630 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:10:28.479159 systemd-logind[1592]: Session 6 logged out. 
Waiting for processes to exit. Jul 7 06:10:28.480239 systemd[1]: Started sshd@4-139.178.70.102:22-139.178.68.195:40918.service - OpenSSH per-connection server daemon (139.178.68.195:40918). Jul 7 06:10:28.481329 systemd-logind[1592]: Removed session 6. Jul 7 06:10:28.519454 sshd[1903]: Accepted publickey for core from 139.178.68.195 port 40918 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:10:28.520335 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:28.522756 systemd-logind[1592]: New session 7 of user core. Jul 7 06:10:28.529266 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:10:28.598364 sudo[1906]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:10:28.598522 sudo[1906]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:28.610374 sudo[1906]: pam_unix(sudo:session): session closed for user root Jul 7 06:10:28.611187 sshd[1905]: Connection closed by 139.178.68.195 port 40918 Jul 7 06:10:28.611524 sshd-session[1903]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:28.618920 systemd[1]: sshd@4-139.178.70.102:22-139.178.68.195:40918.service: Deactivated successfully. Jul 7 06:10:28.620004 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:10:28.620622 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:10:28.622425 systemd[1]: Started sshd@5-139.178.70.102:22-139.178.68.195:40922.service - OpenSSH per-connection server daemon (139.178.68.195:40922). Jul 7 06:10:28.623016 systemd-logind[1592]: Removed session 7. 
Jul 7 06:10:28.658738 sshd[1912]: Accepted publickey for core from 139.178.68.195 port 40922 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:10:28.659532 sshd-session[1912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:28.662122 systemd-logind[1592]: New session 8 of user core. Jul 7 06:10:28.672158 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:10:28.720055 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:10:28.720240 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:28.724996 sudo[1916]: pam_unix(sudo:session): session closed for user root Jul 7 06:10:28.727893 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:10:28.728038 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:28.734696 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:10:28.760265 augenrules[1938]: No rules Jul 7 06:10:28.760853 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:10:28.761095 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:10:28.761827 sudo[1915]: pam_unix(sudo:session): session closed for user root Jul 7 06:10:28.762721 sshd[1914]: Connection closed by 139.178.68.195 port 40922 Jul 7 06:10:28.763254 sshd-session[1912]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:28.768060 systemd[1]: sshd@5-139.178.70.102:22-139.178.68.195:40922.service: Deactivated successfully. Jul 7 06:10:28.768936 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:10:28.769431 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit. 
Jul 7 06:10:28.770576 systemd[1]: Started sshd@6-139.178.70.102:22-139.178.68.195:40924.service - OpenSSH per-connection server daemon (139.178.68.195:40924). Jul 7 06:10:28.771839 systemd-logind[1592]: Removed session 8. Jul 7 06:10:28.814382 sshd[1947]: Accepted publickey for core from 139.178.68.195 port 40924 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:10:28.815164 sshd-session[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:28.818064 systemd-logind[1592]: New session 9 of user core. Jul 7 06:10:28.828165 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:10:28.876974 sudo[1950]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:10:28.877192 sudo[1950]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:29.491191 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:10:29.500356 (dockerd)[1968]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:10:29.775163 dockerd[1968]: time="2025-07-07T06:10:29.775085230Z" level=info msg="Starting up" Jul 7 06:10:29.775992 dockerd[1968]: time="2025-07-07T06:10:29.775979746Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:10:29.788971 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2158863150-merged.mount: Deactivated successfully. Jul 7 06:10:29.805044 dockerd[1968]: time="2025-07-07T06:10:29.805017674Z" level=info msg="Loading containers: start." Jul 7 06:10:29.812094 kernel: Initializing XFRM netlink socket Jul 7 06:10:30.082179 systemd-networkd[1529]: docker0: Link UP Jul 7 06:10:30.128920 dockerd[1968]: time="2025-07-07T06:10:30.128843367Z" level=info msg="Loading containers: done." 
Jul 7 06:10:30.160555 dockerd[1968]: time="2025-07-07T06:10:30.160520045Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:10:30.160665 dockerd[1968]: time="2025-07-07T06:10:30.160582379Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:10:30.160665 dockerd[1968]: time="2025-07-07T06:10:30.160647234Z" level=info msg="Initializing buildkit" Jul 7 06:10:30.170474 dockerd[1968]: time="2025-07-07T06:10:30.170364695Z" level=info msg="Completed buildkit initialization" Jul 7 06:10:30.176041 dockerd[1968]: time="2025-07-07T06:10:30.176020333Z" level=info msg="Daemon has completed initialization" Jul 7 06:10:30.176572 dockerd[1968]: time="2025-07-07T06:10:30.176167173Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:10:30.176254 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:10:30.787598 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2567002875-merged.mount: Deactivated successfully. Jul 7 06:10:31.060518 containerd[1626]: time="2025-07-07T06:10:31.060417154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:10:31.726814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752586679.mount: Deactivated successfully. 
Jul 7 06:10:32.673877 containerd[1626]: time="2025-07-07T06:10:32.673615814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:32.674456 containerd[1626]: time="2025-07-07T06:10:32.674147849Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 7 06:10:32.674456 containerd[1626]: time="2025-07-07T06:10:32.674262165Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:32.675533 containerd[1626]: time="2025-07-07T06:10:32.675519545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:32.676110 containerd[1626]: time="2025-07-07T06:10:32.676091632Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 1.615652115s" Jul 7 06:10:32.676168 containerd[1626]: time="2025-07-07T06:10:32.676159270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 06:10:32.676549 containerd[1626]: time="2025-07-07T06:10:32.676491099Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:10:32.941646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jul 7 06:10:32.942668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:33.247053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:33.255432 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:33.281877 kubelet[2230]: E0707 06:10:33.281842 2230 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:33.283325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:33.283410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:33.283822 systemd[1]: kubelet.service: Consumed 99ms CPU time, 110M memory peak. 
Jul 7 06:10:34.451637 containerd[1626]: time="2025-07-07T06:10:34.451159042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:34.456812 containerd[1626]: time="2025-07-07T06:10:34.456796851Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 7 06:10:34.466726 containerd[1626]: time="2025-07-07T06:10:34.466695676Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:34.471803 containerd[1626]: time="2025-07-07T06:10:34.471783453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:34.472249 containerd[1626]: time="2025-07-07T06:10:34.472234306Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.795645055s" Jul 7 06:10:34.472300 containerd[1626]: time="2025-07-07T06:10:34.472292818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 06:10:34.472800 containerd[1626]: time="2025-07-07T06:10:34.472764008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:10:35.511796 containerd[1626]: time="2025-07-07T06:10:35.511731588Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:35.512677 containerd[1626]: time="2025-07-07T06:10:35.512662674Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 7 06:10:35.513128 containerd[1626]: time="2025-07-07T06:10:35.513117434Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:35.514477 containerd[1626]: time="2025-07-07T06:10:35.514460847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:35.515089 containerd[1626]: time="2025-07-07T06:10:35.515060816Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.042281762s" Jul 7 06:10:35.515160 containerd[1626]: time="2025-07-07T06:10:35.515151754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 06:10:35.515463 containerd[1626]: time="2025-07-07T06:10:35.515445950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:10:36.419749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount970159786.mount: Deactivated successfully. 
Jul 7 06:10:36.780130 containerd[1626]: time="2025-07-07T06:10:36.780092359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:36.787516 containerd[1626]: time="2025-07-07T06:10:36.787497013Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 7 06:10:36.792941 containerd[1626]: time="2025-07-07T06:10:36.792924202Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:36.800809 containerd[1626]: time="2025-07-07T06:10:36.800793088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:36.801183 containerd[1626]: time="2025-07-07T06:10:36.801159931Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.285694376s" Jul 7 06:10:36.801223 containerd[1626]: time="2025-07-07T06:10:36.801183839Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 06:10:36.801660 containerd[1626]: time="2025-07-07T06:10:36.801645757Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:10:37.282057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431732333.mount: Deactivated successfully. 
Jul 7 06:10:38.104263 containerd[1626]: time="2025-07-07T06:10:38.104231956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:38.110518 containerd[1626]: time="2025-07-07T06:10:38.110484688Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 06:10:38.118780 containerd[1626]: time="2025-07-07T06:10:38.118741243Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:38.124552 containerd[1626]: time="2025-07-07T06:10:38.124508852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:38.125183 containerd[1626]: time="2025-07-07T06:10:38.125097835Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.323386329s" Jul 7 06:10:38.125183 containerd[1626]: time="2025-07-07T06:10:38.125116461Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 06:10:38.125439 containerd[1626]: time="2025-07-07T06:10:38.125412737Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:10:38.621439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976672696.mount: Deactivated successfully. 
Jul 7 06:10:38.623125 containerd[1626]: time="2025-07-07T06:10:38.623100835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:10:38.623485 containerd[1626]: time="2025-07-07T06:10:38.623462609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:10:38.624083 containerd[1626]: time="2025-07-07T06:10:38.623753223Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:10:38.624811 containerd[1626]: time="2025-07-07T06:10:38.624756710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:10:38.625495 containerd[1626]: time="2025-07-07T06:10:38.625479783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 499.995948ms" Jul 7 06:10:38.625534 containerd[1626]: time="2025-07-07T06:10:38.625496334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:10:38.625757 containerd[1626]: time="2025-07-07T06:10:38.625746891Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:10:39.300103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878930674.mount: Deactivated 
successfully. Jul 7 06:10:43.291520 update_engine[1593]: I20250707 06:10:43.291474 1593 update_attempter.cc:509] Updating boot flags... Jul 7 06:10:43.319956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 06:10:43.322169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:44.361735 containerd[1626]: time="2025-07-07T06:10:44.361677419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:44.364285 containerd[1626]: time="2025-07-07T06:10:44.363425096Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 7 06:10:44.364654 containerd[1626]: time="2025-07-07T06:10:44.364592805Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:44.368132 containerd[1626]: time="2025-07-07T06:10:44.368100852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:44.369620 containerd[1626]: time="2025-07-07T06:10:44.368662549Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.742898075s" Jul 7 06:10:44.369620 containerd[1626]: time="2025-07-07T06:10:44.368679569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 06:10:44.388755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes 
Node Agent. Jul 7 06:10:44.396528 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:44.561089 kubelet[2394]: E0707 06:10:44.561033 2394 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:44.563298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:44.563400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:44.563887 systemd[1]: kubelet.service: Consumed 116ms CPU time, 108.3M memory peak. Jul 7 06:10:46.163211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:46.163522 systemd[1]: kubelet.service: Consumed 116ms CPU time, 108.3M memory peak. Jul 7 06:10:46.165329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:46.186665 systemd[1]: Reload requested from client PID 2422 ('systemctl') (unit session-9.scope)... Jul 7 06:10:46.186676 systemd[1]: Reloading... Jul 7 06:10:46.252089 zram_generator::config[2469]: No configuration found. Jul 7 06:10:46.310268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:10:46.318428 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jul 7 06:10:46.385310 systemd[1]: Reloading finished in 198 ms. 
Jul 7 06:10:46.509149 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:10:46.509234 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:10:46.509468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:46.511038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:46.888216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:46.897724 (kubelet)[2533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:10:46.925526 kubelet[2533]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:10:46.925526 kubelet[2533]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:10:46.925526 kubelet[2533]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:10:46.925744 kubelet[2533]: I0707 06:10:46.925561 2533 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:10:47.080080 kubelet[2533]: I0707 06:10:47.079627 2533 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:10:47.080080 kubelet[2533]: I0707 06:10:47.079644 2533 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:10:47.080080 kubelet[2533]: I0707 06:10:47.079812 2533 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:10:47.112242 kubelet[2533]: E0707 06:10:47.112218 2533 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:10:47.113483 kubelet[2533]: I0707 06:10:47.113471 2533 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:10:47.124344 kubelet[2533]: I0707 06:10:47.124330 2533 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:10:47.128388 kubelet[2533]: I0707 06:10:47.128376 2533 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:10:47.129774 kubelet[2533]: I0707 06:10:47.129750 2533 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:10:47.129874 kubelet[2533]: I0707 06:10:47.129773 2533 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:10:47.131688 kubelet[2533]: I0707 06:10:47.131670 2533 topology_manager.go:138] "Creating topology manager with none policy" Jul 
7 06:10:47.131688 kubelet[2533]: I0707 06:10:47.131687 2533 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:10:47.132598 kubelet[2533]: I0707 06:10:47.132586 2533 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:10:47.136726 kubelet[2533]: I0707 06:10:47.136620 2533 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:10:47.136726 kubelet[2533]: I0707 06:10:47.136653 2533 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:10:47.137561 kubelet[2533]: I0707 06:10:47.137544 2533 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:10:47.137561 kubelet[2533]: I0707 06:10:47.137555 2533 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:10:47.139820 kubelet[2533]: W0707 06:10:47.138752 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 7 06:10:47.139820 kubelet[2533]: E0707 06:10:47.138785 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:10:47.139820 kubelet[2533]: W0707 06:10:47.139359 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 7 06:10:47.139820 kubelet[2533]: E0707 06:10:47.139382 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:10:47.140643 kubelet[2533]: I0707 06:10:47.140628 2533 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:10:47.143078 kubelet[2533]: I0707 06:10:47.142952 2533 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:10:47.143078 kubelet[2533]: W0707 06:10:47.142984 2533 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:10:47.145496 kubelet[2533]: I0707 06:10:47.145487 2533 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:10:47.145552 kubelet[2533]: I0707 06:10:47.145547 2533 server.go:1287] "Started kubelet" Jul 7 06:10:47.146593 kubelet[2533]: I0707 06:10:47.146374 2533 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:10:47.146935 kubelet[2533]: I0707 06:10:47.146920 2533 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:10:47.150544 kubelet[2533]: I0707 06:10:47.150492 2533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:10:47.150663 kubelet[2533]: I0707 06:10:47.150649 2533 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:10:47.155787 kubelet[2533]: I0707 06:10:47.155344 2533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:10:47.155942 kubelet[2533]: E0707 06:10:47.153651 2533 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.102:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.184fe340b640f46b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:10:47.145534571 +0000 UTC m=+0.244033997,LastTimestamp:2025-07-07 06:10:47.145534571 +0000 UTC m=+0.244033997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:10:47.158678 kubelet[2533]: I0707 06:10:47.157659 2533 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:10:47.159837 kubelet[2533]: I0707 06:10:47.159556 2533 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:10:47.159837 kubelet[2533]: E0707 06:10:47.159686 2533 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:10:47.159890 kubelet[2533]: I0707 06:10:47.159860 2533 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:10:47.159890 kubelet[2533]: I0707 06:10:47.159887 2533 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:10:47.160525 kubelet[2533]: W0707 06:10:47.160500 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 7 06:10:47.160552 kubelet[2533]: E0707 06:10:47.160531 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:10:47.160584 kubelet[2533]: E0707 06:10:47.160565 2533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="200ms" Jul 7 06:10:47.162349 kubelet[2533]: I0707 06:10:47.162336 2533 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:10:47.162397 kubelet[2533]: I0707 06:10:47.162379 2533 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:10:47.166411 kubelet[2533]: I0707 06:10:47.166401 2533 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:10:47.169489 kubelet[2533]: E0707 06:10:47.169477 2533 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:10:47.170765 kubelet[2533]: I0707 06:10:47.170747 2533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:10:47.171414 kubelet[2533]: I0707 06:10:47.171406 2533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:10:47.171461 kubelet[2533]: I0707 06:10:47.171456 2533 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:10:47.171496 kubelet[2533]: I0707 06:10:47.171492 2533 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:10:47.171526 kubelet[2533]: I0707 06:10:47.171522 2533 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:10:47.171578 kubelet[2533]: E0707 06:10:47.171569 2533 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:10:47.175643 kubelet[2533]: W0707 06:10:47.175621 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 7 06:10:47.175736 kubelet[2533]: E0707 06:10:47.175726 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:10:47.189342 kubelet[2533]: I0707 06:10:47.189327 2533 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:10:47.189496 kubelet[2533]: I0707 06:10:47.189490 2533 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:10:47.189556 kubelet[2533]: I0707 06:10:47.189551 2533 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:10:47.260722 kubelet[2533]: E0707 06:10:47.260689 2533 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:10:47.271880 kubelet[2533]: E0707 06:10:47.271852 2533 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:10:47.302881 kubelet[2533]: I0707 06:10:47.302646 2533 policy_none.go:49] "None policy: Start" Jul 7 06:10:47.302881 kubelet[2533]: I0707 06:10:47.302668 2533 memory_manager.go:186] "Starting memorymanager" policy="None" 
Jul 7 06:10:47.302881 kubelet[2533]: I0707 06:10:47.302679 2533 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:10:47.314236 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:10:47.326857 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:10:47.339836 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:10:47.341308 kubelet[2533]: I0707 06:10:47.341240 2533 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:10:47.341521 kubelet[2533]: I0707 06:10:47.341458 2533 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:10:47.341521 kubelet[2533]: I0707 06:10:47.341469 2533 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:10:47.346780 kubelet[2533]: I0707 06:10:47.346657 2533 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:10:47.347428 kubelet[2533]: E0707 06:10:47.347391 2533 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:10:47.347607 kubelet[2533]: E0707 06:10:47.347527 2533 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:10:47.360860 kubelet[2533]: E0707 06:10:47.360838 2533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="400ms" Jul 7 06:10:47.443060 kubelet[2533]: I0707 06:10:47.442999 2533 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:10:47.443386 kubelet[2533]: E0707 06:10:47.443362 2533 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jul 7 06:10:47.480836 systemd[1]: Created slice kubepods-burstable-pode795784ce47cba3ba0ef4e14aaeee069.slice - libcontainer container kubepods-burstable-pode795784ce47cba3ba0ef4e14aaeee069.slice. Jul 7 06:10:47.487622 kubelet[2533]: E0707 06:10:47.487577 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:10:47.489793 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 7 06:10:47.499892 kubelet[2533]: E0707 06:10:47.499880 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:10:47.501933 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 7 06:10:47.503045 kubelet[2533]: E0707 06:10:47.503031 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:10:47.562250 kubelet[2533]: I0707 06:10:47.562198 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e795784ce47cba3ba0ef4e14aaeee069-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e795784ce47cba3ba0ef4e14aaeee069\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:10:47.562250 kubelet[2533]: I0707 06:10:47.562223 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:10:47.562369 kubelet[2533]: I0707 06:10:47.562275 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:10:47.562369 kubelet[2533]: I0707 06:10:47.562293 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:10:47.562369 kubelet[2533]: I0707 06:10:47.562304 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/e795784ce47cba3ba0ef4e14aaeee069-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e795784ce47cba3ba0ef4e14aaeee069\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:10:47.562369 kubelet[2533]: I0707 06:10:47.562316 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e795784ce47cba3ba0ef4e14aaeee069-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e795784ce47cba3ba0ef4e14aaeee069\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:10:47.562369 kubelet[2533]: I0707 06:10:47.562362 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:10:47.562494 kubelet[2533]: I0707 06:10:47.562375 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:10:47.562494 kubelet[2533]: I0707 06:10:47.562387 2533 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:10:47.644421 kubelet[2533]: I0707 06:10:47.644364 2533 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:10:47.644766 kubelet[2533]: E0707 06:10:47.644744 2533 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost"
Jul 7 06:10:47.761643 kubelet[2533]: E0707 06:10:47.761569 2533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="800ms"
Jul 7 06:10:47.789516 containerd[1626]: time="2025-07-07T06:10:47.789486813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e795784ce47cba3ba0ef4e14aaeee069,Namespace:kube-system,Attempt:0,}"
Jul 7 06:10:47.805084 containerd[1626]: time="2025-07-07T06:10:47.804821238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}"
Jul 7 06:10:47.805248 containerd[1626]: time="2025-07-07T06:10:47.805237466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}"
Jul 7 06:10:47.864411 containerd[1626]: time="2025-07-07T06:10:47.864383108Z" level=info msg="connecting to shim d21aca282f7da6bcb62c1e240f4876cd00cc5f41753e9b8570d56dacad650e80" address="unix:///run/containerd/s/84da66171908b0b1a57657769ada8dd6d28d22b70db968bbc4ad2ebc91e49186" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:10:47.868316 containerd[1626]: time="2025-07-07T06:10:47.868271494Z" level=info msg="connecting to shim 7250f068b630abe9c29b93bc0ca11d6a4e914281632bcec59f87a157b9a4fc23" address="unix:///run/containerd/s/6b7dea7130237b0b2dc099e6e86c79d22557039b88968cd9b849f2a4873fdaf2" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:10:47.868474 containerd[1626]: time="2025-07-07T06:10:47.868460592Z" level=info msg="connecting to shim efaa0ce1a94a313c2b8310afaea41954f2cb05c5d0996d40b0dfc4cfe541ffff" address="unix:///run/containerd/s/9ecef4a3882d9cfdb2eca18009c9b5913402552e7027832355380a53b5d42fb3" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:10:47.936204 systemd[1]: Started cri-containerd-7250f068b630abe9c29b93bc0ca11d6a4e914281632bcec59f87a157b9a4fc23.scope - libcontainer container 7250f068b630abe9c29b93bc0ca11d6a4e914281632bcec59f87a157b9a4fc23.
Jul 7 06:10:47.937297 systemd[1]: Started cri-containerd-d21aca282f7da6bcb62c1e240f4876cd00cc5f41753e9b8570d56dacad650e80.scope - libcontainer container d21aca282f7da6bcb62c1e240f4876cd00cc5f41753e9b8570d56dacad650e80.
Jul 7 06:10:47.939084 systemd[1]: Started cri-containerd-efaa0ce1a94a313c2b8310afaea41954f2cb05c5d0996d40b0dfc4cfe541ffff.scope - libcontainer container efaa0ce1a94a313c2b8310afaea41954f2cb05c5d0996d40b0dfc4cfe541ffff.
Jul 7 06:10:47.986832 containerd[1626]: time="2025-07-07T06:10:47.986807515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d21aca282f7da6bcb62c1e240f4876cd00cc5f41753e9b8570d56dacad650e80\""
Jul 7 06:10:47.988997 containerd[1626]: time="2025-07-07T06:10:47.988915384Z" level=info msg="CreateContainer within sandbox \"d21aca282f7da6bcb62c1e240f4876cd00cc5f41753e9b8570d56dacad650e80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 7 06:10:47.999258 containerd[1626]: time="2025-07-07T06:10:47.999229692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e795784ce47cba3ba0ef4e14aaeee069,Namespace:kube-system,Attempt:0,} returns sandbox id \"7250f068b630abe9c29b93bc0ca11d6a4e914281632bcec59f87a157b9a4fc23\""
Jul 7 06:10:47.999329 containerd[1626]: time="2025-07-07T06:10:47.999302959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"efaa0ce1a94a313c2b8310afaea41954f2cb05c5d0996d40b0dfc4cfe541ffff\""
Jul 7 06:10:48.001461 containerd[1626]: time="2025-07-07T06:10:48.001433595Z" level=info msg="CreateContainer within sandbox \"7250f068b630abe9c29b93bc0ca11d6a4e914281632bcec59f87a157b9a4fc23\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 7 06:10:48.002705 containerd[1626]: time="2025-07-07T06:10:48.002690332Z" level=info msg="CreateContainer within sandbox \"efaa0ce1a94a313c2b8310afaea41954f2cb05c5d0996d40b0dfc4cfe541ffff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 7 06:10:48.030962 containerd[1626]: time="2025-07-07T06:10:48.030874999Z" level=info msg="Container f26f4b319f6dc20f8a06182061c3efb031857af5e16a5ccb7d795742f51fcefe: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:10:48.031641 containerd[1626]: time="2025-07-07T06:10:48.031552682Z" level=info msg="Container 9bf4b2faf20841b69a94389957918caa3e2469768b59c740a7a7ab72c0bf5b78: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:10:48.044618 containerd[1626]: time="2025-07-07T06:10:48.044593921Z" level=info msg="CreateContainer within sandbox \"d21aca282f7da6bcb62c1e240f4876cd00cc5f41753e9b8570d56dacad650e80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f26f4b319f6dc20f8a06182061c3efb031857af5e16a5ccb7d795742f51fcefe\""
Jul 7 06:10:48.045963 containerd[1626]: time="2025-07-07T06:10:48.045947068Z" level=info msg="Container 9fcbcab5e0278a8a344202f08fb8058e171f4b6b2a9ede06da2f8ec985f81c96: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:10:48.046942 kubelet[2533]: I0707 06:10:48.046788 2533 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:10:48.047230 kubelet[2533]: E0707 06:10:48.047013 2533 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost"
Jul 7 06:10:48.047487 containerd[1626]: time="2025-07-07T06:10:48.047473324Z" level=info msg="StartContainer for \"f26f4b319f6dc20f8a06182061c3efb031857af5e16a5ccb7d795742f51fcefe\""
Jul 7 06:10:48.048501 containerd[1626]: time="2025-07-07T06:10:48.048056826Z" level=info msg="CreateContainer within sandbox \"7250f068b630abe9c29b93bc0ca11d6a4e914281632bcec59f87a157b9a4fc23\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9bf4b2faf20841b69a94389957918caa3e2469768b59c740a7a7ab72c0bf5b78\""
Jul 7 06:10:48.048501 containerd[1626]: time="2025-07-07T06:10:48.048180960Z" level=info msg="connecting to shim f26f4b319f6dc20f8a06182061c3efb031857af5e16a5ccb7d795742f51fcefe" address="unix:///run/containerd/s/84da66171908b0b1a57657769ada8dd6d28d22b70db968bbc4ad2ebc91e49186" protocol=ttrpc version=3
Jul 7 06:10:48.048559 containerd[1626]: time="2025-07-07T06:10:48.048520980Z" level=info msg="StartContainer for \"9bf4b2faf20841b69a94389957918caa3e2469768b59c740a7a7ab72c0bf5b78\""
Jul 7 06:10:48.050310 containerd[1626]: time="2025-07-07T06:10:48.050292753Z" level=info msg="connecting to shim 9bf4b2faf20841b69a94389957918caa3e2469768b59c740a7a7ab72c0bf5b78" address="unix:///run/containerd/s/6b7dea7130237b0b2dc099e6e86c79d22557039b88968cd9b849f2a4873fdaf2" protocol=ttrpc version=3
Jul 7 06:10:48.051366 containerd[1626]: time="2025-07-07T06:10:48.051348575Z" level=info msg="CreateContainer within sandbox \"efaa0ce1a94a313c2b8310afaea41954f2cb05c5d0996d40b0dfc4cfe541ffff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9fcbcab5e0278a8a344202f08fb8058e171f4b6b2a9ede06da2f8ec985f81c96\""
Jul 7 06:10:48.051776 containerd[1626]: time="2025-07-07T06:10:48.051762367Z" level=info msg="StartContainer for \"9fcbcab5e0278a8a344202f08fb8058e171f4b6b2a9ede06da2f8ec985f81c96\""
Jul 7 06:10:48.053032 containerd[1626]: time="2025-07-07T06:10:48.053016461Z" level=info msg="connecting to shim 9fcbcab5e0278a8a344202f08fb8058e171f4b6b2a9ede06da2f8ec985f81c96" address="unix:///run/containerd/s/9ecef4a3882d9cfdb2eca18009c9b5913402552e7027832355380a53b5d42fb3" protocol=ttrpc version=3
Jul 7 06:10:48.064289 systemd[1]: Started cri-containerd-f26f4b319f6dc20f8a06182061c3efb031857af5e16a5ccb7d795742f51fcefe.scope - libcontainer container f26f4b319f6dc20f8a06182061c3efb031857af5e16a5ccb7d795742f51fcefe.
Jul 7 06:10:48.067240 systemd[1]: Started cri-containerd-9bf4b2faf20841b69a94389957918caa3e2469768b59c740a7a7ab72c0bf5b78.scope - libcontainer container 9bf4b2faf20841b69a94389957918caa3e2469768b59c740a7a7ab72c0bf5b78.
Jul 7 06:10:48.070383 systemd[1]: Started cri-containerd-9fcbcab5e0278a8a344202f08fb8058e171f4b6b2a9ede06da2f8ec985f81c96.scope - libcontainer container 9fcbcab5e0278a8a344202f08fb8058e171f4b6b2a9ede06da2f8ec985f81c96.
Jul 7 06:10:48.133308 containerd[1626]: time="2025-07-07T06:10:48.133281126Z" level=info msg="StartContainer for \"f26f4b319f6dc20f8a06182061c3efb031857af5e16a5ccb7d795742f51fcefe\" returns successfully"
Jul 7 06:10:48.134149 containerd[1626]: time="2025-07-07T06:10:48.133389346Z" level=info msg="StartContainer for \"9bf4b2faf20841b69a94389957918caa3e2469768b59c740a7a7ab72c0bf5b78\" returns successfully"
Jul 7 06:10:48.142181 containerd[1626]: time="2025-07-07T06:10:48.142154247Z" level=info msg="StartContainer for \"9fcbcab5e0278a8a344202f08fb8058e171f4b6b2a9ede06da2f8ec985f81c96\" returns successfully"
Jul 7 06:10:48.196217 kubelet[2533]: E0707 06:10:48.196093 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:10:48.197407 kubelet[2533]: E0707 06:10:48.197396 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:10:48.198778 kubelet[2533]: E0707 06:10:48.198720 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:10:48.248134 kubelet[2533]: W0707 06:10:48.248095 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Jul 7 06:10:48.248134 kubelet[2533]: E0707 06:10:48.248138 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:10:48.260762 kubelet[2533]: W0707 06:10:48.260706 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Jul 7 06:10:48.260762 kubelet[2533]: E0707 06:10:48.260746 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:10:48.439921 kubelet[2533]: W0707 06:10:48.439844 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Jul 7 06:10:48.439921 kubelet[2533]: E0707 06:10:48.439884 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:10:48.561848 kubelet[2533]: E0707 06:10:48.561822 2533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="1.6s"
Jul 7 06:10:48.692426 kubelet[2533]: W0707 06:10:48.692361 2533 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused
Jul 7 06:10:48.692426 kubelet[2533]: E0707 06:10:48.692411 2533 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:10:48.847919 kubelet[2533]: I0707 06:10:48.847897 2533 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:10:48.848255 kubelet[2533]: E0707 06:10:48.848238 2533 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost"
Jul 7 06:10:49.200125 kubelet[2533]: E0707 06:10:49.200021 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:10:49.200125 kubelet[2533]: E0707 06:10:49.200111 2533 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 7 06:10:49.284205 kubelet[2533]: E0707 06:10:49.284157 2533 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError"
Jul 7 06:10:50.451772 kubelet[2533]: I0707 06:10:50.451754 2533 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:10:50.838124 kubelet[2533]: E0707 06:10:50.838022 2533 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 7 06:10:50.892108 kubelet[2533]: I0707 06:10:50.891979 2533 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 7 06:10:50.960779 kubelet[2533]: I0707 06:10:50.960727 2533 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:50.964524 kubelet[2533]: E0707 06:10:50.964492 2533 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:50.964524 kubelet[2533]: I0707 06:10:50.964515 2533 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:10:50.965746 kubelet[2533]: E0707 06:10:50.965728 2533 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:10:50.965746 kubelet[2533]: I0707 06:10:50.965742 2533 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:50.966904 kubelet[2533]: E0707 06:10:50.966888 2533 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:51.141513 kubelet[2533]: I0707 06:10:51.141297 2533 apiserver.go:52] "Watching apiserver"
Jul 7 06:10:51.160332 kubelet[2533]: I0707 06:10:51.160296 2533 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:10:53.085109 systemd[1]: Reload requested from client PID 2802 ('systemctl') (unit session-9.scope)...
Jul 7 06:10:53.085122 systemd[1]: Reloading...
Jul 7 06:10:53.134090 zram_generator::config[2846]: No configuration found.
Jul 7 06:10:53.226307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:10:53.235721 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Jul 7 06:10:53.315888 systemd[1]: Reloading finished in 230 ms.
Jul 7 06:10:53.336362 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:10:53.354836 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 06:10:53.355126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:10:53.355218 systemd[1]: kubelet.service: Consumed 464ms CPU time, 128.4M memory peak.
Jul 7 06:10:53.356982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:10:53.802780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:10:53.815451 (kubelet)[2913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:10:54.104598 kubelet[2913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:10:54.104869 kubelet[2913]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:10:54.105083 kubelet[2913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:10:54.105083 kubelet[2913]: I0707 06:10:54.104997 2913 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:10:54.109728 kubelet[2913]: I0707 06:10:54.109713 2913 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 06:10:54.109795 kubelet[2913]: I0707 06:10:54.109790 2913 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:10:54.109971 kubelet[2913]: I0707 06:10:54.109963 2913 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 06:10:54.110761 kubelet[2913]: I0707 06:10:54.110751 2913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 06:10:54.112280 kubelet[2913]: I0707 06:10:54.112264 2913 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:10:54.114998 kubelet[2913]: I0707 06:10:54.114990 2913 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:10:54.117375 kubelet[2913]: I0707 06:10:54.117363 2913 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:10:54.119234 kubelet[2913]: I0707 06:10:54.119206 2913 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:10:54.119455 kubelet[2913]: I0707 06:10:54.119289 2913 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:10:54.119547 kubelet[2913]: I0707 06:10:54.119539 2913 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:10:54.119585 kubelet[2913]: I0707 06:10:54.119578 2913 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 06:10:54.119647 kubelet[2913]: I0707 06:10:54.119642 2913 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:10:54.119818 kubelet[2913]: I0707 06:10:54.119812 2913 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 06:10:54.119874 kubelet[2913]: I0707 06:10:54.119868 2913 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:10:54.119919 kubelet[2913]: I0707 06:10:54.119913 2913 kubelet.go:352] "Adding apiserver pod source"
Jul 7 06:10:54.119975 kubelet[2913]: I0707 06:10:54.119970 2913 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:10:54.122091 kubelet[2913]: I0707 06:10:54.122082 2913 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:10:54.123794 kubelet[2913]: I0707 06:10:54.122359 2913 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 06:10:54.123794 kubelet[2913]: I0707 06:10:54.122611 2913 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 06:10:54.123794 kubelet[2913]: I0707 06:10:54.122627 2913 server.go:1287] "Started kubelet"
Jul 7 06:10:54.124333 kubelet[2913]: I0707 06:10:54.124325 2913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:10:54.133779 kubelet[2913]: I0707 06:10:54.133751 2913 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:10:54.134364 kubelet[2913]: I0707 06:10:54.134351 2913 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 06:10:54.135604 kubelet[2913]: I0707 06:10:54.135474 2913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:10:54.135604 kubelet[2913]: I0707 06:10:54.135597 2913 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:10:54.136426 kubelet[2913]: I0707 06:10:54.135836 2913 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:10:54.136426 kubelet[2913]: E0707 06:10:54.136016 2913 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:10:54.136426 kubelet[2913]: I0707 06:10:54.136212 2913 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 06:10:54.136426 kubelet[2913]: I0707 06:10:54.136265 2913 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 06:10:54.136426 kubelet[2913]: I0707 06:10:54.136331 2913 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:10:54.137119 kubelet[2913]: I0707 06:10:54.137100 2913 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:10:54.138636 kubelet[2913]: I0707 06:10:54.138617 2913 factory.go:221] Registration of the containerd container factory successfully
Jul 7 06:10:54.138636 kubelet[2913]: I0707 06:10:54.138627 2913 factory.go:221] Registration of the systemd container factory successfully
Jul 7 06:10:54.154182 kubelet[2913]: I0707 06:10:54.154104 2913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:10:54.156303 kubelet[2913]: I0707 06:10:54.156254 2913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:10:54.156303 kubelet[2913]: I0707 06:10:54.156274 2913 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 06:10:54.156303 kubelet[2913]: I0707 06:10:54.156285 2913 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 06:10:54.156303 kubelet[2913]: I0707 06:10:54.156289 2913 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 06:10:54.156521 kubelet[2913]: E0707 06:10:54.156469 2913 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:10:54.169954 kubelet[2913]: I0707 06:10:54.169913 2913 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 06:10:54.169954 kubelet[2913]: I0707 06:10:54.169923 2913 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 06:10:54.169954 kubelet[2913]: I0707 06:10:54.169933 2913 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:10:54.170131 kubelet[2913]: I0707 06:10:54.170030 2913 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 06:10:54.170131 kubelet[2913]: I0707 06:10:54.170037 2913 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 06:10:54.170131 kubelet[2913]: I0707 06:10:54.170047 2913 policy_none.go:49] "None policy: Start"
Jul 7 06:10:54.170131 kubelet[2913]: I0707 06:10:54.170053 2913 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 06:10:54.170131 kubelet[2913]: I0707 06:10:54.170058 2913 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:10:54.170131 kubelet[2913]: I0707 06:10:54.170126 2913 state_mem.go:75] "Updated machine memory state"
Jul 7 06:10:54.172846 kubelet[2913]: I0707 06:10:54.172489 2913 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 06:10:54.172846 kubelet[2913]: I0707 06:10:54.172599 2913 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:10:54.172846 kubelet[2913]: I0707 06:10:54.172607 2913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:10:54.172846 kubelet[2913]: I0707 06:10:54.172800 2913 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:10:54.173779 kubelet[2913]: E0707 06:10:54.173766 2913 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 06:10:54.256934 kubelet[2913]: I0707 06:10:54.256916 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:54.276654 kubelet[2913]: I0707 06:10:54.276640 2913 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 7 06:10:54.292923 kubelet[2913]: I0707 06:10:54.292900 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:54.294384 kubelet[2913]: I0707 06:10:54.293812 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:10:54.317917 kubelet[2913]: I0707 06:10:54.317806 2913 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 7 06:10:54.318154 kubelet[2913]: I0707 06:10:54.318119 2913 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 7 06:10:54.336885 kubelet[2913]: I0707 06:10:54.336864 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:54.437741 kubelet[2913]: I0707 06:10:54.437087 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:54.437741 kubelet[2913]: I0707 06:10:54.437131 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 7 06:10:54.437741 kubelet[2913]: I0707 06:10:54.437146 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e795784ce47cba3ba0ef4e14aaeee069-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e795784ce47cba3ba0ef4e14aaeee069\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:54.437741 kubelet[2913]: I0707 06:10:54.437158 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e795784ce47cba3ba0ef4e14aaeee069-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e795784ce47cba3ba0ef4e14aaeee069\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:54.437741 kubelet[2913]: I0707 06:10:54.437181 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:54.437920 kubelet[2913]: I0707 06:10:54.437192 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:54.437920 kubelet[2913]: I0707 06:10:54.437202 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:54.437920 kubelet[2913]: I0707 06:10:54.437210 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e795784ce47cba3ba0ef4e14aaeee069-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e795784ce47cba3ba0ef4e14aaeee069\") " pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:55.126386 kubelet[2913]: I0707 06:10:55.126358 2913 apiserver.go:52] "Watching apiserver"
Jul 7 06:10:55.137323 kubelet[2913]: I0707 06:10:55.137289 2913 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:10:55.167029 kubelet[2913]: I0707 06:10:55.167009 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:10:55.167419 kubelet[2913]: I0707 06:10:55.167234 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:55.167419 kubelet[2913]: I0707 06:10:55.167352 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:55.207326 kubelet[2913]: E0707 06:10:55.207247 2913 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 7 06:10:55.208107 kubelet[2913]: E0707 06:10:55.208089 2913 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 7 06:10:55.208290 kubelet[2913]: E0707 06:10:55.208280 2913 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 7 06:10:55.237495 kubelet[2913]: I0707 06:10:55.237452 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.237379948 podStartE2EDuration="1.237379948s" podCreationTimestamp="2025-07-07 06:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:10:55.237191732 +0000 UTC m=+1.188021105" watchObservedRunningTime="2025-07-07 06:10:55.237379948 +0000 UTC m=+1.188209321"
Jul 7 06:10:55.266482 kubelet[2913]: I0707 06:10:55.266421 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.266402268 podStartE2EDuration="1.266402268s" podCreationTimestamp="2025-07-07 06:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:10:55.251188693 +0000 UTC m=+1.202018073" watchObservedRunningTime="2025-07-07 06:10:55.266402268 +0000 UTC m=+1.217231647"
Jul 7 06:10:55.267045 kubelet[2913]: I0707 06:10:55.266881 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.266733925 podStartE2EDuration="1.266733925s" podCreationTimestamp="2025-07-07 06:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:10:55.265750411 +0000 UTC m=+1.216579792" watchObservedRunningTime="2025-07-07 06:10:55.266733925 +0000 UTC m=+1.217563301"
Jul 7 06:10:59.747497 kubelet[2913]: I0707 06:10:59.747475 2913 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 06:10:59.749354 kubelet[2913]: I0707 06:10:59.749213 2913 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:10:59.749378 containerd[1626]: time="2025-07-07T06:10:59.747834916Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 06:11:00.270078 kubelet[2913]: I0707 06:11:00.268988 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlrl2\" (UniqueName: \"kubernetes.io/projected/280a69dc-0c1f-40b9-8491-21c5ad5bb34a-kube-api-access-tlrl2\") pod \"kube-proxy-mwbmf\" (UID: \"280a69dc-0c1f-40b9-8491-21c5ad5bb34a\") " pod="kube-system/kube-proxy-mwbmf"
Jul 7 06:11:00.270078 kubelet[2913]: I0707 06:11:00.269044 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/280a69dc-0c1f-40b9-8491-21c5ad5bb34a-kube-proxy\") pod \"kube-proxy-mwbmf\" (UID: \"280a69dc-0c1f-40b9-8491-21c5ad5bb34a\") " pod="kube-system/kube-proxy-mwbmf"
Jul 7 06:11:00.270263 kubelet[2913]: I0707 06:11:00.269064 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/280a69dc-0c1f-40b9-8491-21c5ad5bb34a-xtables-lock\") pod \"kube-proxy-mwbmf\" (UID: \"280a69dc-0c1f-40b9-8491-21c5ad5bb34a\") " pod="kube-system/kube-proxy-mwbmf"
Jul 7 06:11:00.270263 kubelet[2913]: I0707 06:11:00.270237 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/280a69dc-0c1f-40b9-8491-21c5ad5bb34a-lib-modules\") pod \"kube-proxy-mwbmf\" (UID: \"280a69dc-0c1f-40b9-8491-21c5ad5bb34a\") " pod="kube-system/kube-proxy-mwbmf"
Jul 7 06:11:00.270883 systemd[1]: Created slice kubepods-besteffort-pod280a69dc_0c1f_40b9_8491_21c5ad5bb34a.slice - libcontainer container kubepods-besteffort-pod280a69dc_0c1f_40b9_8491_21c5ad5bb34a.slice.
Jul 7 06:11:00.578041 containerd[1626]: time="2025-07-07T06:11:00.577964974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mwbmf,Uid:280a69dc-0c1f-40b9-8491-21c5ad5bb34a,Namespace:kube-system,Attempt:0,}"
Jul 7 06:11:00.588141 containerd[1626]: time="2025-07-07T06:11:00.588111870Z" level=info msg="connecting to shim 2d1dbd23305de9b7e8b3b27ec3c9d907f37cbcca0af4a4fe3c26b24b1dbf73de" address="unix:///run/containerd/s/57f9ed6aaaec9d04a197786a1d38b4821967de9b863b12292a48e03d5a6ac08d" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:00.606193 systemd[1]: Started cri-containerd-2d1dbd23305de9b7e8b3b27ec3c9d907f37cbcca0af4a4fe3c26b24b1dbf73de.scope - libcontainer container 2d1dbd23305de9b7e8b3b27ec3c9d907f37cbcca0af4a4fe3c26b24b1dbf73de.
Jul 7 06:11:00.624451 containerd[1626]: time="2025-07-07T06:11:00.624425788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mwbmf,Uid:280a69dc-0c1f-40b9-8491-21c5ad5bb34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d1dbd23305de9b7e8b3b27ec3c9d907f37cbcca0af4a4fe3c26b24b1dbf73de\"" Jul 7 06:11:00.626570 containerd[1626]: time="2025-07-07T06:11:00.626327732Z" level=info msg="CreateContainer within sandbox \"2d1dbd23305de9b7e8b3b27ec3c9d907f37cbcca0af4a4fe3c26b24b1dbf73de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:11:00.632405 containerd[1626]: time="2025-07-07T06:11:00.631826067Z" level=info msg="Container d885b69943d396b31859b2b0cddcc778087e56c4b417e35d458fd256821366ac: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:00.634246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172072397.mount: Deactivated successfully. Jul 7 06:11:00.636455 containerd[1626]: time="2025-07-07T06:11:00.636435423Z" level=info msg="CreateContainer within sandbox \"2d1dbd23305de9b7e8b3b27ec3c9d907f37cbcca0af4a4fe3c26b24b1dbf73de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d885b69943d396b31859b2b0cddcc778087e56c4b417e35d458fd256821366ac\"" Jul 7 06:11:00.636844 containerd[1626]: time="2025-07-07T06:11:00.636828731Z" level=info msg="StartContainer for \"d885b69943d396b31859b2b0cddcc778087e56c4b417e35d458fd256821366ac\"" Jul 7 06:11:00.637807 containerd[1626]: time="2025-07-07T06:11:00.637775932Z" level=info msg="connecting to shim d885b69943d396b31859b2b0cddcc778087e56c4b417e35d458fd256821366ac" address="unix:///run/containerd/s/57f9ed6aaaec9d04a197786a1d38b4821967de9b863b12292a48e03d5a6ac08d" protocol=ttrpc version=3 Jul 7 06:11:00.651225 systemd[1]: Started cri-containerd-d885b69943d396b31859b2b0cddcc778087e56c4b417e35d458fd256821366ac.scope - libcontainer container d885b69943d396b31859b2b0cddcc778087e56c4b417e35d458fd256821366ac. 
Jul 7 06:11:00.692620 containerd[1626]: time="2025-07-07T06:11:00.692597062Z" level=info msg="StartContainer for \"d885b69943d396b31859b2b0cddcc778087e56c4b417e35d458fd256821366ac\" returns successfully" Jul 7 06:11:00.735276 systemd[1]: Created slice kubepods-besteffort-pod9e8e4210_c103_4ae1_b0b4_b7068c370289.slice - libcontainer container kubepods-besteffort-pod9e8e4210_c103_4ae1_b0b4_b7068c370289.slice. Jul 7 06:11:00.773605 kubelet[2913]: I0707 06:11:00.773573 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwnwq\" (UniqueName: \"kubernetes.io/projected/9e8e4210-c103-4ae1-b0b4-b7068c370289-kube-api-access-dwnwq\") pod \"tigera-operator-747864d56d-jqnhf\" (UID: \"9e8e4210-c103-4ae1-b0b4-b7068c370289\") " pod="tigera-operator/tigera-operator-747864d56d-jqnhf" Jul 7 06:11:00.773605 kubelet[2913]: I0707 06:11:00.773603 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e8e4210-c103-4ae1-b0b4-b7068c370289-var-lib-calico\") pod \"tigera-operator-747864d56d-jqnhf\" (UID: \"9e8e4210-c103-4ae1-b0b4-b7068c370289\") " pod="tigera-operator/tigera-operator-747864d56d-jqnhf" Jul 7 06:11:01.038756 containerd[1626]: time="2025-07-07T06:11:01.038718150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jqnhf,Uid:9e8e4210-c103-4ae1-b0b4-b7068c370289,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:11:01.138473 containerd[1626]: time="2025-07-07T06:11:01.138439373Z" level=info msg="connecting to shim 3976adbe42770725e37f63cc451a3cb384c7a0be4622f1f4fe21f308b533bdc8" address="unix:///run/containerd/s/16c92a412b3c03d12b5ba3a36abf4e689de33887ac035c3e9a0be5936afbfa1b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:01.158200 systemd[1]: Started cri-containerd-3976adbe42770725e37f63cc451a3cb384c7a0be4622f1f4fe21f308b533bdc8.scope - libcontainer container 
3976adbe42770725e37f63cc451a3cb384c7a0be4622f1f4fe21f308b533bdc8. Jul 7 06:11:01.189236 kubelet[2913]: I0707 06:11:01.189057 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mwbmf" podStartSLOduration=1.189043891 podStartE2EDuration="1.189043891s" podCreationTimestamp="2025-07-07 06:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:01.189003109 +0000 UTC m=+7.139832491" watchObservedRunningTime="2025-07-07 06:11:01.189043891 +0000 UTC m=+7.139873264" Jul 7 06:11:01.200946 containerd[1626]: time="2025-07-07T06:11:01.200735484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-jqnhf,Uid:9e8e4210-c103-4ae1-b0b4-b7068c370289,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3976adbe42770725e37f63cc451a3cb384c7a0be4622f1f4fe21f308b533bdc8\"" Jul 7 06:11:01.202286 containerd[1626]: time="2025-07-07T06:11:01.202267690Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:11:02.342604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1727999067.mount: Deactivated successfully. 
Jul 7 06:11:02.779475 containerd[1626]: time="2025-07-07T06:11:02.779452044Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:02.780206 containerd[1626]: time="2025-07-07T06:11:02.779830982Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 06:11:02.780206 containerd[1626]: time="2025-07-07T06:11:02.780184446Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:02.781201 containerd[1626]: time="2025-07-07T06:11:02.781190323Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:02.781589 containerd[1626]: time="2025-07-07T06:11:02.781572860Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.579285977s" Jul 7 06:11:02.781621 containerd[1626]: time="2025-07-07T06:11:02.781590044Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 06:11:02.783128 containerd[1626]: time="2025-07-07T06:11:02.783116230Z" level=info msg="CreateContainer within sandbox \"3976adbe42770725e37f63cc451a3cb384c7a0be4622f1f4fe21f308b533bdc8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:11:02.789062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41429283.mount: Deactivated successfully. 
Jul 7 06:11:02.791143 containerd[1626]: time="2025-07-07T06:11:02.791121468Z" level=info msg="Container c8014c43ee358d7d31479e7e116849b80a7caaf8e45e9ea9169c54ec81a584ca: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:02.799607 containerd[1626]: time="2025-07-07T06:11:02.799578670Z" level=info msg="CreateContainer within sandbox \"3976adbe42770725e37f63cc451a3cb384c7a0be4622f1f4fe21f308b533bdc8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c8014c43ee358d7d31479e7e116849b80a7caaf8e45e9ea9169c54ec81a584ca\"" Jul 7 06:11:02.800278 containerd[1626]: time="2025-07-07T06:11:02.800265358Z" level=info msg="StartContainer for \"c8014c43ee358d7d31479e7e116849b80a7caaf8e45e9ea9169c54ec81a584ca\"" Jul 7 06:11:02.801158 containerd[1626]: time="2025-07-07T06:11:02.801131707Z" level=info msg="connecting to shim c8014c43ee358d7d31479e7e116849b80a7caaf8e45e9ea9169c54ec81a584ca" address="unix:///run/containerd/s/16c92a412b3c03d12b5ba3a36abf4e689de33887ac035c3e9a0be5936afbfa1b" protocol=ttrpc version=3 Jul 7 06:11:02.818156 systemd[1]: Started cri-containerd-c8014c43ee358d7d31479e7e116849b80a7caaf8e45e9ea9169c54ec81a584ca.scope - libcontainer container c8014c43ee358d7d31479e7e116849b80a7caaf8e45e9ea9169c54ec81a584ca. 
Jul 7 06:11:02.836582 containerd[1626]: time="2025-07-07T06:11:02.836559484Z" level=info msg="StartContainer for \"c8014c43ee358d7d31479e7e116849b80a7caaf8e45e9ea9169c54ec81a584ca\" returns successfully" Jul 7 06:11:03.847887 kubelet[2913]: I0707 06:11:03.847809 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-jqnhf" podStartSLOduration=2.267622159 podStartE2EDuration="3.847790423s" podCreationTimestamp="2025-07-07 06:11:00 +0000 UTC" firstStartedPulling="2025-07-07 06:11:01.201856976 +0000 UTC m=+7.152686347" lastFinishedPulling="2025-07-07 06:11:02.78202524 +0000 UTC m=+8.732854611" observedRunningTime="2025-07-07 06:11:03.184612811 +0000 UTC m=+9.135442186" watchObservedRunningTime="2025-07-07 06:11:03.847790423 +0000 UTC m=+9.798619803" Jul 7 06:11:07.883630 sudo[1950]: pam_unix(sudo:session): session closed for user root Jul 7 06:11:07.886516 sshd[1949]: Connection closed by 139.178.68.195 port 40924 Jul 7 06:11:07.886868 sshd-session[1947]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:07.889508 systemd[1]: sshd@6-139.178.70.102:22-139.178.68.195:40924.service: Deactivated successfully. Jul 7 06:11:07.891700 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:11:07.892412 systemd[1]: session-9.scope: Consumed 2.648s CPU time, 152.6M memory peak. Jul 7 06:11:07.896812 systemd-logind[1592]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:11:07.897848 systemd-logind[1592]: Removed session 9. Jul 7 06:11:10.135921 systemd[1]: Created slice kubepods-besteffort-pod0aa3ead4_7121_4ad3_9b3b_b8861cae8d0d.slice - libcontainer container kubepods-besteffort-pod0aa3ead4_7121_4ad3_9b3b_b8861cae8d0d.slice. 
Jul 7 06:11:10.236083 kubelet[2913]: I0707 06:11:10.235694 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d-typha-certs\") pod \"calico-typha-797bb76449-j6sd8\" (UID: \"0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d\") " pod="calico-system/calico-typha-797bb76449-j6sd8" Jul 7 06:11:10.236531 kubelet[2913]: I0707 06:11:10.236458 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d-tigera-ca-bundle\") pod \"calico-typha-797bb76449-j6sd8\" (UID: \"0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d\") " pod="calico-system/calico-typha-797bb76449-j6sd8" Jul 7 06:11:10.236531 kubelet[2913]: I0707 06:11:10.236485 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqb7c\" (UniqueName: \"kubernetes.io/projected/0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d-kube-api-access-nqb7c\") pod \"calico-typha-797bb76449-j6sd8\" (UID: \"0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d\") " pod="calico-system/calico-typha-797bb76449-j6sd8" Jul 7 06:11:10.441405 containerd[1626]: time="2025-07-07T06:11:10.441370926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797bb76449-j6sd8,Uid:0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:10.520691 containerd[1626]: time="2025-07-07T06:11:10.520484710Z" level=info msg="connecting to shim a0a2af18d21d25b11327a124f9cfd22359bf30c17b2c010b7034878aa785700d" address="unix:///run/containerd/s/856905c9525394dcfe7b829d0ff0d40f5c9edcbd0afaa6652a8ec60d91849f45" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:10.543910 systemd[1]: Created slice kubepods-besteffort-pod9b66a8f3_cb93_4a3c_b29a_9f8abfa6d8d4.slice - libcontainer container 
kubepods-besteffort-pod9b66a8f3_cb93_4a3c_b29a_9f8abfa6d8d4.slice. Jul 7 06:11:10.555536 systemd[1]: Started cri-containerd-a0a2af18d21d25b11327a124f9cfd22359bf30c17b2c010b7034878aa785700d.scope - libcontainer container a0a2af18d21d25b11327a124f9cfd22359bf30c17b2c010b7034878aa785700d. Jul 7 06:11:10.598456 containerd[1626]: time="2025-07-07T06:11:10.598427642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797bb76449-j6sd8,Uid:0aa3ead4-7121-4ad3-9b3b-b8861cae8d0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0a2af18d21d25b11327a124f9cfd22359bf30c17b2c010b7034878aa785700d\"" Jul 7 06:11:10.599622 containerd[1626]: time="2025-07-07T06:11:10.599602385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:11:10.639626 kubelet[2913]: I0707 06:11:10.639591 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-tigera-ca-bundle\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639626 kubelet[2913]: I0707 06:11:10.639622 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-var-lib-calico\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639745 kubelet[2913]: I0707 06:11:10.639638 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-cni-log-dir\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639745 kubelet[2913]: I0707 06:11:10.639648 2913 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-flexvol-driver-host\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639745 kubelet[2913]: I0707 06:11:10.639658 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-xtables-lock\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639745 kubelet[2913]: I0707 06:11:10.639668 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-var-run-calico\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639745 kubelet[2913]: I0707 06:11:10.639679 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-cni-net-dir\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639837 kubelet[2913]: I0707 06:11:10.639688 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-policysync\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639837 kubelet[2913]: I0707 06:11:10.639700 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-bdn4s\" (UniqueName: \"kubernetes.io/projected/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-kube-api-access-bdn4s\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639837 kubelet[2913]: I0707 06:11:10.639710 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-lib-modules\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639837 kubelet[2913]: I0707 06:11:10.639720 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-cni-bin-dir\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.639837 kubelet[2913]: I0707 06:11:10.639730 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4-node-certs\") pod \"calico-node-gsx2n\" (UID: \"9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4\") " pod="calico-system/calico-node-gsx2n" Jul 7 06:11:10.749331 kubelet[2913]: E0707 06:11:10.749133 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.749331 kubelet[2913]: W0707 06:11:10.749153 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.750078 kubelet[2913]: E0707 06:11:10.749759 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.755902 kubelet[2913]: E0707 06:11:10.755840 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.755902 kubelet[2913]: W0707 06:11:10.755858 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.755902 kubelet[2913]: E0707 06:11:10.755873 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.773499 kubelet[2913]: E0707 06:11:10.773456 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4" Jul 7 06:11:10.832978 kubelet[2913]: E0707 06:11:10.831425 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.832978 kubelet[2913]: W0707 06:11:10.831443 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.832978 kubelet[2913]: E0707 06:11:10.831482 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.832978 kubelet[2913]: E0707 06:11:10.831628 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.832978 kubelet[2913]: W0707 06:11:10.831637 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.832978 kubelet[2913]: E0707 06:11:10.831646 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.832978 kubelet[2913]: E0707 06:11:10.831761 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.832978 kubelet[2913]: W0707 06:11:10.831768 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.832978 kubelet[2913]: E0707 06:11:10.831802 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.832978 kubelet[2913]: E0707 06:11:10.832796 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.834055 kubelet[2913]: W0707 06:11:10.832821 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.834055 kubelet[2913]: E0707 06:11:10.832835 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.834055 kubelet[2913]: E0707 06:11:10.833626 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.834055 kubelet[2913]: W0707 06:11:10.833639 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.834055 kubelet[2913]: E0707 06:11:10.833655 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.834055 kubelet[2913]: E0707 06:11:10.833792 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.834055 kubelet[2913]: W0707 06:11:10.833798 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.834055 kubelet[2913]: E0707 06:11:10.833806 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.834494 kubelet[2913]: E0707 06:11:10.834404 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.834494 kubelet[2913]: W0707 06:11:10.834414 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.834494 kubelet[2913]: E0707 06:11:10.834425 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.834705 kubelet[2913]: E0707 06:11:10.834619 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.834705 kubelet[2913]: W0707 06:11:10.834628 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.834705 kubelet[2913]: E0707 06:11:10.834638 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.834869 kubelet[2913]: E0707 06:11:10.834859 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.834923 kubelet[2913]: W0707 06:11:10.834914 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.834980 kubelet[2913]: E0707 06:11:10.834970 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.835205 kubelet[2913]: E0707 06:11:10.835134 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.835205 kubelet[2913]: W0707 06:11:10.835144 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.835205 kubelet[2913]: E0707 06:11:10.835154 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.835424 kubelet[2913]: E0707 06:11:10.835356 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.835424 kubelet[2913]: W0707 06:11:10.835369 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.835424 kubelet[2913]: E0707 06:11:10.835377 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.835630 kubelet[2913]: E0707 06:11:10.835555 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.835630 kubelet[2913]: W0707 06:11:10.835564 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.835630 kubelet[2913]: E0707 06:11:10.835573 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.835848 kubelet[2913]: E0707 06:11:10.835783 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.835848 kubelet[2913]: W0707 06:11:10.835792 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.835848 kubelet[2913]: E0707 06:11:10.835801 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.836053 kubelet[2913]: E0707 06:11:10.835988 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.836053 kubelet[2913]: W0707 06:11:10.835997 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.836053 kubelet[2913]: E0707 06:11:10.836006 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.836242 kubelet[2913]: E0707 06:11:10.836233 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.836355 kubelet[2913]: W0707 06:11:10.836289 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.836355 kubelet[2913]: E0707 06:11:10.836301 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.836570 kubelet[2913]: E0707 06:11:10.836478 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.836570 kubelet[2913]: W0707 06:11:10.836488 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.836570 kubelet[2913]: E0707 06:11:10.836497 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.836796 kubelet[2913]: E0707 06:11:10.836725 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.836796 kubelet[2913]: W0707 06:11:10.836735 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.836796 kubelet[2913]: E0707 06:11:10.836744 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.837024 kubelet[2913]: E0707 06:11:10.836956 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.837024 kubelet[2913]: W0707 06:11:10.836966 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.837024 kubelet[2913]: E0707 06:11:10.836975 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.837200 kubelet[2913]: E0707 06:11:10.837193 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.837242 kubelet[2913]: W0707 06:11:10.837235 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.837343 kubelet[2913]: E0707 06:11:10.837285 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.837464 kubelet[2913]: E0707 06:11:10.837454 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.837586 kubelet[2913]: W0707 06:11:10.837518 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.837586 kubelet[2913]: E0707 06:11:10.837531 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.842384 kubelet[2913]: E0707 06:11:10.842353 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.842584 kubelet[2913]: W0707 06:11:10.842460 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.842584 kubelet[2913]: E0707 06:11:10.842477 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.842584 kubelet[2913]: I0707 06:11:10.842501 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b187562-15be-401e-89b0-8601141135c4-varrun\") pod \"csi-node-driver-s2hl9\" (UID: \"6b187562-15be-401e-89b0-8601141135c4\") " pod="calico-system/csi-node-driver-s2hl9" Jul 7 06:11:10.843321 kubelet[2913]: E0707 06:11:10.843283 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.843321 kubelet[2913]: W0707 06:11:10.843298 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.843510 kubelet[2913]: E0707 06:11:10.843411 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.843510 kubelet[2913]: I0707 06:11:10.843433 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b187562-15be-401e-89b0-8601141135c4-socket-dir\") pod \"csi-node-driver-s2hl9\" (UID: \"6b187562-15be-401e-89b0-8601141135c4\") " pod="calico-system/csi-node-driver-s2hl9" Jul 7 06:11:10.844677 kubelet[2913]: E0707 06:11:10.844658 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.844886 kubelet[2913]: W0707 06:11:10.844777 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.844886 kubelet[2913]: E0707 06:11:10.844804 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.845127 kubelet[2913]: E0707 06:11:10.845116 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.845194 kubelet[2913]: W0707 06:11:10.845180 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.845320 kubelet[2913]: E0707 06:11:10.845310 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.845448 kubelet[2913]: E0707 06:11:10.845430 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.845448 kubelet[2913]: W0707 06:11:10.845439 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.845573 kubelet[2913]: E0707 06:11:10.845565 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.845636 kubelet[2913]: I0707 06:11:10.845625 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b187562-15be-401e-89b0-8601141135c4-registration-dir\") pod \"csi-node-driver-s2hl9\" (UID: \"6b187562-15be-401e-89b0-8601141135c4\") " pod="calico-system/csi-node-driver-s2hl9" Jul 7 06:11:10.845749 kubelet[2913]: E0707 06:11:10.845706 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.845749 kubelet[2913]: W0707 06:11:10.845774 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.845749 kubelet[2913]: E0707 06:11:10.845784 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.846012 kubelet[2913]: E0707 06:11:10.845954 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.846012 kubelet[2913]: W0707 06:11:10.845961 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.846012 kubelet[2913]: E0707 06:11:10.845967 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.846188 kubelet[2913]: E0707 06:11:10.846134 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.846188 kubelet[2913]: W0707 06:11:10.846140 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.846188 kubelet[2913]: E0707 06:11:10.846146 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.846305 kubelet[2913]: E0707 06:11:10.846298 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.846398 kubelet[2913]: W0707 06:11:10.846350 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.846398 kubelet[2913]: E0707 06:11:10.846360 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.846519 kubelet[2913]: E0707 06:11:10.846492 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.846519 kubelet[2913]: W0707 06:11:10.846501 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.846519 kubelet[2913]: E0707 06:11:10.846510 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.846667 kubelet[2913]: I0707 06:11:10.846614 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqht\" (UniqueName: \"kubernetes.io/projected/6b187562-15be-401e-89b0-8601141135c4-kube-api-access-wtqht\") pod \"csi-node-driver-s2hl9\" (UID: \"6b187562-15be-401e-89b0-8601141135c4\") " pod="calico-system/csi-node-driver-s2hl9" Jul 7 06:11:10.846783 kubelet[2913]: E0707 06:11:10.846768 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.846783 kubelet[2913]: W0707 06:11:10.846776 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.846905 kubelet[2913]: E0707 06:11:10.846852 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.846905 kubelet[2913]: I0707 06:11:10.846870 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b187562-15be-401e-89b0-8601141135c4-kubelet-dir\") pod \"csi-node-driver-s2hl9\" (UID: \"6b187562-15be-401e-89b0-8601141135c4\") " pod="calico-system/csi-node-driver-s2hl9" Jul 7 06:11:10.847167 kubelet[2913]: E0707 06:11:10.847095 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.847167 kubelet[2913]: W0707 06:11:10.847106 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.847167 kubelet[2913]: E0707 06:11:10.847119 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.847399 kubelet[2913]: E0707 06:11:10.847329 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.847399 kubelet[2913]: W0707 06:11:10.847338 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.847399 kubelet[2913]: E0707 06:11:10.847348 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.847698 kubelet[2913]: E0707 06:11:10.847625 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.847698 kubelet[2913]: W0707 06:11:10.847635 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.847698 kubelet[2913]: E0707 06:11:10.847643 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.847887 kubelet[2913]: E0707 06:11:10.847856 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.847887 kubelet[2913]: W0707 06:11:10.847864 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.847887 kubelet[2913]: E0707 06:11:10.847872 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.849959 containerd[1626]: time="2025-07-07T06:11:10.849692101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gsx2n,Uid:9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:10.907894 containerd[1626]: time="2025-07-07T06:11:10.907866336Z" level=info msg="connecting to shim a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203" address="unix:///run/containerd/s/e7ec5277fa6f332f7630811fc64ae0be4db6bb5e09daf28856fae27b454583a8" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:10.930300 systemd[1]: Started cri-containerd-a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203.scope - libcontainer container a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203. Jul 7 06:11:10.947721 kubelet[2913]: E0707 06:11:10.947694 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.947721 kubelet[2913]: W0707 06:11:10.947712 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.947721 kubelet[2913]: E0707 06:11:10.947729 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.948312 kubelet[2913]: E0707 06:11:10.948175 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.948312 kubelet[2913]: W0707 06:11:10.948188 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.948312 kubelet[2913]: E0707 06:11:10.948203 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.948841 kubelet[2913]: E0707 06:11:10.948658 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.948841 kubelet[2913]: W0707 06:11:10.948667 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.948841 kubelet[2913]: E0707 06:11:10.948704 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.949005 kubelet[2913]: E0707 06:11:10.948997 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.949194 kubelet[2913]: W0707 06:11:10.949151 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.949194 kubelet[2913]: E0707 06:11:10.949168 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.949399 kubelet[2913]: E0707 06:11:10.949385 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.949435 kubelet[2913]: W0707 06:11:10.949400 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.949435 kubelet[2913]: E0707 06:11:10.949414 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.949682 kubelet[2913]: E0707 06:11:10.949673 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.949682 kubelet[2913]: W0707 06:11:10.949680 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.949847 kubelet[2913]: E0707 06:11:10.949690 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.949874 kubelet[2913]: E0707 06:11:10.949868 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.949896 kubelet[2913]: W0707 06:11:10.949873 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.949896 kubelet[2913]: E0707 06:11:10.949880 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.950211 kubelet[2913]: E0707 06:11:10.950196 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.950211 kubelet[2913]: W0707 06:11:10.950207 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.950275 kubelet[2913]: E0707 06:11:10.950217 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.950467 kubelet[2913]: E0707 06:11:10.950454 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.950467 kubelet[2913]: W0707 06:11:10.950463 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.950532 kubelet[2913]: E0707 06:11:10.950473 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.950759 kubelet[2913]: E0707 06:11:10.950629 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.950759 kubelet[2913]: W0707 06:11:10.950635 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.950946 kubelet[2913]: E0707 06:11:10.950922 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.950946 kubelet[2913]: E0707 06:11:10.950936 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.950946 kubelet[2913]: W0707 06:11:10.950941 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.951200 kubelet[2913]: E0707 06:11:10.950958 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.951200 kubelet[2913]: E0707 06:11:10.951098 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.951200 kubelet[2913]: W0707 06:11:10.951103 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.951200 kubelet[2913]: E0707 06:11:10.951174 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.951319 kubelet[2913]: E0707 06:11:10.951214 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.951319 kubelet[2913]: W0707 06:11:10.951219 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.951319 kubelet[2913]: E0707 06:11:10.951256 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.951610 kubelet[2913]: E0707 06:11:10.951446 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.951610 kubelet[2913]: W0707 06:11:10.951451 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.951610 kubelet[2913]: E0707 06:11:10.951504 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.951713 kubelet[2913]: E0707 06:11:10.951704 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.951713 kubelet[2913]: W0707 06:11:10.951710 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.951816 kubelet[2913]: E0707 06:11:10.951737 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.951816 kubelet[2913]: E0707 06:11:10.951815 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.951979 kubelet[2913]: W0707 06:11:10.951820 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.951979 kubelet[2913]: E0707 06:11:10.951832 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.951979 kubelet[2913]: E0707 06:11:10.951973 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.951979 kubelet[2913]: W0707 06:11:10.951977 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.952272 kubelet[2913]: E0707 06:11:10.951985 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.952272 kubelet[2913]: E0707 06:11:10.952098 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.952272 kubelet[2913]: W0707 06:11:10.952103 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.952272 kubelet[2913]: E0707 06:11:10.952108 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.952272 kubelet[2913]: E0707 06:11:10.952254 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.952272 kubelet[2913]: W0707 06:11:10.952259 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.952272 kubelet[2913]: E0707 06:11:10.952270 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.952599 kubelet[2913]: E0707 06:11:10.952433 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.952599 kubelet[2913]: W0707 06:11:10.952438 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.952599 kubelet[2913]: E0707 06:11:10.952443 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.952875 kubelet[2913]: E0707 06:11:10.952851 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.952974 kubelet[2913]: W0707 06:11:10.952950 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.952974 kubelet[2913]: E0707 06:11:10.952964 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.953639 kubelet[2913]: E0707 06:11:10.953563 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.953639 kubelet[2913]: W0707 06:11:10.953572 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.953639 kubelet[2913]: E0707 06:11:10.953588 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.954052 kubelet[2913]: E0707 06:11:10.953799 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.954052 kubelet[2913]: W0707 06:11:10.953806 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.954052 kubelet[2913]: E0707 06:11:10.953813 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.954396 kubelet[2913]: E0707 06:11:10.954290 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.954396 kubelet[2913]: W0707 06:11:10.954326 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.954396 kubelet[2913]: E0707 06:11:10.954334 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.958274 kubelet[2913]: E0707 06:11:10.958085 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.958274 kubelet[2913]: W0707 06:11:10.958101 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.958274 kubelet[2913]: E0707 06:11:10.958116 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:10.963373 kubelet[2913]: E0707 06:11:10.963324 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:10.963373 kubelet[2913]: W0707 06:11:10.963337 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:10.963373 kubelet[2913]: E0707 06:11:10.963350 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:10.973269 containerd[1626]: time="2025-07-07T06:11:10.973237901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gsx2n,Uid:9b66a8f3-cb93-4a3c-b29a-9f8abfa6d8d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203\"" Jul 7 06:11:12.076802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230415882.mount: Deactivated successfully. 
Jul 7 06:11:12.159011 kubelet[2913]: E0707 06:11:12.158224 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4" Jul 7 06:11:13.132827 containerd[1626]: time="2025-07-07T06:11:13.132788134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:13.145466 containerd[1626]: time="2025-07-07T06:11:13.145423371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:11:13.175188 containerd[1626]: time="2025-07-07T06:11:13.175127306Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:13.182652 containerd[1626]: time="2025-07-07T06:11:13.182615124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:13.183146 containerd[1626]: time="2025-07-07T06:11:13.182890784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.583270402s" Jul 7 06:11:13.183146 containerd[1626]: time="2025-07-07T06:11:13.182911065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:11:13.183764 containerd[1626]: time="2025-07-07T06:11:13.183706350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:11:13.205013 containerd[1626]: time="2025-07-07T06:11:13.204969123Z" level=info msg="CreateContainer within sandbox \"a0a2af18d21d25b11327a124f9cfd22359bf30c17b2c010b7034878aa785700d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:11:13.255052 containerd[1626]: time="2025-07-07T06:11:13.255026998Z" level=info msg="Container 31ec130e83ef2d7ada2b8c713d504ec2e60f51cf0988d47c0cc1e4b3ec18f324: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:13.257644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794142010.mount: Deactivated successfully. Jul 7 06:11:13.271475 containerd[1626]: time="2025-07-07T06:11:13.271423771Z" level=info msg="CreateContainer within sandbox \"a0a2af18d21d25b11327a124f9cfd22359bf30c17b2c010b7034878aa785700d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"31ec130e83ef2d7ada2b8c713d504ec2e60f51cf0988d47c0cc1e4b3ec18f324\"" Jul 7 06:11:13.272131 containerd[1626]: time="2025-07-07T06:11:13.272111384Z" level=info msg="StartContainer for \"31ec130e83ef2d7ada2b8c713d504ec2e60f51cf0988d47c0cc1e4b3ec18f324\"" Jul 7 06:11:13.273136 containerd[1626]: time="2025-07-07T06:11:13.273112536Z" level=info msg="connecting to shim 31ec130e83ef2d7ada2b8c713d504ec2e60f51cf0988d47c0cc1e4b3ec18f324" address="unix:///run/containerd/s/856905c9525394dcfe7b829d0ff0d40f5c9edcbd0afaa6652a8ec60d91849f45" protocol=ttrpc version=3 Jul 7 06:11:13.292247 systemd[1]: Started cri-containerd-31ec130e83ef2d7ada2b8c713d504ec2e60f51cf0988d47c0cc1e4b3ec18f324.scope - libcontainer container 31ec130e83ef2d7ada2b8c713d504ec2e60f51cf0988d47c0cc1e4b3ec18f324. 
Jul 7 06:11:13.340426 containerd[1626]: time="2025-07-07T06:11:13.340399009Z" level=info msg="StartContainer for \"31ec130e83ef2d7ada2b8c713d504ec2e60f51cf0988d47c0cc1e4b3ec18f324\" returns successfully" Jul 7 06:11:14.157898 kubelet[2913]: E0707 06:11:14.157662 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4" Jul 7 06:11:14.259148 kubelet[2913]: E0707 06:11:14.259112 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.259356 kubelet[2913]: W0707 06:11:14.259265 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.259356 kubelet[2913]: E0707 06:11:14.259287 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.259527 kubelet[2913]: E0707 06:11:14.259476 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.259527 kubelet[2913]: W0707 06:11:14.259482 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.259629 kubelet[2913]: E0707 06:11:14.259579 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.259733 kubelet[2913]: E0707 06:11:14.259719 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.259787 kubelet[2913]: W0707 06:11:14.259778 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.259917 kubelet[2913]: E0707 06:11:14.259826 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.260015 kubelet[2913]: E0707 06:11:14.260005 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.260091 kubelet[2913]: W0707 06:11:14.260044 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.260091 kubelet[2913]: E0707 06:11:14.260054 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.260311 kubelet[2913]: E0707 06:11:14.260249 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.260311 kubelet[2913]: W0707 06:11:14.260256 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.260311 kubelet[2913]: E0707 06:11:14.260267 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.260478 kubelet[2913]: E0707 06:11:14.260465 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.260573 kubelet[2913]: W0707 06:11:14.260520 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.260573 kubelet[2913]: E0707 06:11:14.260530 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.260717 kubelet[2913]: E0707 06:11:14.260682 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.260717 kubelet[2913]: W0707 06:11:14.260691 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.260717 kubelet[2913]: E0707 06:11:14.260698 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.260925 kubelet[2913]: E0707 06:11:14.260914 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.261000 kubelet[2913]: W0707 06:11:14.260962 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.261000 kubelet[2913]: E0707 06:11:14.260971 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.261215 kubelet[2913]: E0707 06:11:14.261159 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.261215 kubelet[2913]: W0707 06:11:14.261168 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.261215 kubelet[2913]: E0707 06:11:14.261177 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.261404 kubelet[2913]: E0707 06:11:14.261396 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.261457 kubelet[2913]: W0707 06:11:14.261451 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.261495 kubelet[2913]: E0707 06:11:14.261489 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.261639 kubelet[2913]: E0707 06:11:14.261632 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.261678 kubelet[2913]: W0707 06:11:14.261672 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.261726 kubelet[2913]: E0707 06:11:14.261718 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.266194 kubelet[2913]: E0707 06:11:14.261959 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.266194 kubelet[2913]: W0707 06:11:14.261965 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.266194 kubelet[2913]: E0707 06:11:14.261970 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.266194 kubelet[2913]: E0707 06:11:14.262235 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.266194 kubelet[2913]: W0707 06:11:14.262247 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.266194 kubelet[2913]: E0707 06:11:14.262262 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.266194 kubelet[2913]: E0707 06:11:14.262386 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.266194 kubelet[2913]: W0707 06:11:14.262395 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.266194 kubelet[2913]: E0707 06:11:14.262404 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.266194 kubelet[2913]: E0707 06:11:14.262530 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.266521 kubelet[2913]: W0707 06:11:14.262544 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.266521 kubelet[2913]: E0707 06:11:14.262553 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.275181 kubelet[2913]: E0707 06:11:14.274958 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.275181 kubelet[2913]: W0707 06:11:14.274975 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.275181 kubelet[2913]: E0707 06:11:14.274992 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.275450 kubelet[2913]: E0707 06:11:14.275371 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.275450 kubelet[2913]: W0707 06:11:14.275381 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.275450 kubelet[2913]: E0707 06:11:14.275396 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.275552 kubelet[2913]: E0707 06:11:14.275520 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.275552 kubelet[2913]: W0707 06:11:14.275530 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.275552 kubelet[2913]: E0707 06:11:14.275541 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.275705 kubelet[2913]: E0707 06:11:14.275641 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.275705 kubelet[2913]: W0707 06:11:14.275647 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.275705 kubelet[2913]: E0707 06:11:14.275660 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.275806 kubelet[2913]: E0707 06:11:14.275793 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.275806 kubelet[2913]: W0707 06:11:14.275803 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.275864 kubelet[2913]: E0707 06:11:14.275816 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.275993 kubelet[2913]: E0707 06:11:14.275974 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.275993 kubelet[2913]: W0707 06:11:14.275986 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.276041 kubelet[2913]: E0707 06:11:14.275994 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.276310 kubelet[2913]: E0707 06:11:14.276294 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.276310 kubelet[2913]: W0707 06:11:14.276303 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.276310 kubelet[2913]: E0707 06:11:14.276314 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.279232 kubelet[2913]: E0707 06:11:14.276412 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279232 kubelet[2913]: W0707 06:11:14.276419 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279232 kubelet[2913]: E0707 06:11:14.276426 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.279232 kubelet[2913]: E0707 06:11:14.276563 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279232 kubelet[2913]: W0707 06:11:14.276569 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279232 kubelet[2913]: E0707 06:11:14.276681 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279232 kubelet[2913]: W0707 06:11:14.276688 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279232 kubelet[2913]: E0707 06:11:14.276696 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.279232 kubelet[2913]: E0707 06:11:14.276653 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.279232 kubelet[2913]: E0707 06:11:14.276824 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279485 kubelet[2913]: W0707 06:11:14.276833 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279485 kubelet[2913]: E0707 06:11:14.276847 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.279485 kubelet[2913]: E0707 06:11:14.276960 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279485 kubelet[2913]: W0707 06:11:14.276965 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279485 kubelet[2913]: E0707 06:11:14.276977 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.279485 kubelet[2913]: E0707 06:11:14.277127 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279485 kubelet[2913]: W0707 06:11:14.277132 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279485 kubelet[2913]: E0707 06:11:14.277143 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.279485 kubelet[2913]: E0707 06:11:14.277235 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279485 kubelet[2913]: W0707 06:11:14.277242 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279782 kubelet[2913]: E0707 06:11:14.277253 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.279782 kubelet[2913]: E0707 06:11:14.277468 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279782 kubelet[2913]: W0707 06:11:14.277476 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279782 kubelet[2913]: E0707 06:11:14.277489 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.279782 kubelet[2913]: E0707 06:11:14.277603 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279782 kubelet[2913]: W0707 06:11:14.277608 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279782 kubelet[2913]: E0707 06:11:14.277620 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:14.279782 kubelet[2913]: E0707 06:11:14.277787 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.279782 kubelet[2913]: W0707 06:11:14.277795 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.279782 kubelet[2913]: E0707 06:11:14.277805 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:14.280223 kubelet[2913]: E0707 06:11:14.278086 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:14.280223 kubelet[2913]: W0707 06:11:14.278094 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:14.280223 kubelet[2913]: E0707 06:11:14.278102 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:15.024034 containerd[1626]: time="2025-07-07T06:11:15.023988170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:15.036789 containerd[1626]: time="2025-07-07T06:11:15.036743253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:11:15.051084 containerd[1626]: time="2025-07-07T06:11:15.051017155Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:15.063112 containerd[1626]: time="2025-07-07T06:11:15.063054118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:15.063978 containerd[1626]: time="2025-07-07T06:11:15.063931043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.8801965s" Jul 7 06:11:15.063978 containerd[1626]: time="2025-07-07T06:11:15.063949982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:11:15.066205 containerd[1626]: time="2025-07-07T06:11:15.066169822Z" level=info msg="CreateContainer within sandbox \"a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:11:15.109950 containerd[1626]: time="2025-07-07T06:11:15.109919684Z" level=info msg="Container 05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:15.113150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012537435.mount: Deactivated successfully. Jul 7 06:11:15.186141 containerd[1626]: time="2025-07-07T06:11:15.186117333Z" level=info msg="CreateContainer within sandbox \"a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81\"" Jul 7 06:11:15.186547 containerd[1626]: time="2025-07-07T06:11:15.186503547Z" level=info msg="StartContainer for \"05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81\"" Jul 7 06:11:15.187696 containerd[1626]: time="2025-07-07T06:11:15.187664702Z" level=info msg="connecting to shim 05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81" address="unix:///run/containerd/s/e7ec5277fa6f332f7630811fc64ae0be4db6bb5e09daf28856fae27b454583a8" protocol=ttrpc version=3 Jul 7 06:11:15.210478 systemd[1]: Started cri-containerd-05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81.scope - libcontainer container 05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81. Jul 7 06:11:15.218868 kubelet[2913]: I0707 06:11:15.218842 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:11:15.249500 containerd[1626]: time="2025-07-07T06:11:15.249444485Z" level=info msg="StartContainer for \"05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81\" returns successfully" Jul 7 06:11:15.257511 systemd[1]: cri-containerd-05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81.scope: Deactivated successfully. 
Jul 7 06:11:15.315266 containerd[1626]: time="2025-07-07T06:11:15.315101526Z" level=info msg="received exit event container_id:\"05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81\" id:\"05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81\" pid:3575 exited_at:{seconds:1751868675 nanos:260471060}"
Jul 7 06:11:15.322434 containerd[1626]: time="2025-07-07T06:11:15.322384653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81\" id:\"05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81\" pid:3575 exited_at:{seconds:1751868675 nanos:260471060}"
Jul 7 06:11:15.334514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05ba7fe441aaafc67b43da07052694d30882700b42899ec93d4f1bb2a4c54d81-rootfs.mount: Deactivated successfully.
Jul 7 06:11:16.157193 kubelet[2913]: E0707 06:11:16.156555 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4"
Jul 7 06:11:16.211539 containerd[1626]: time="2025-07-07T06:11:16.211443508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 7 06:11:16.229535 kubelet[2913]: I0707 06:11:16.229431 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-797bb76449-j6sd8" podStartSLOduration=3.645229497 podStartE2EDuration="6.229410292s" podCreationTimestamp="2025-07-07 06:11:10 +0000 UTC" firstStartedPulling="2025-07-07 06:11:10.599288695 +0000 UTC m=+16.550118070" lastFinishedPulling="2025-07-07 06:11:13.18346949 +0000 UTC m=+19.134298865" observedRunningTime="2025-07-07 06:11:14.222603757 +0000 UTC m=+20.173433144" watchObservedRunningTime="2025-07-07 06:11:16.229410292 +0000 UTC m=+22.180239667"
Jul 7 06:11:18.158411 kubelet[2913]: E0707 06:11:18.158195 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4"
Jul 7 06:11:20.156936 kubelet[2913]: E0707 06:11:20.156725 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4"
Jul 7 06:11:20.423500 containerd[1626]: time="2025-07-07T06:11:20.423397178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:20.424525 containerd[1626]: time="2025-07-07T06:11:20.423972991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 7 06:11:20.424525 containerd[1626]: time="2025-07-07T06:11:20.424489243Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:20.426240 containerd[1626]: time="2025-07-07T06:11:20.426197373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:20.426979 containerd[1626]: time="2025-07-07T06:11:20.426849783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.214976572s"
Jul 7 06:11:20.426979 containerd[1626]: time="2025-07-07T06:11:20.426884937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 7 06:11:20.429422 containerd[1626]: time="2025-07-07T06:11:20.429391868Z" level=info msg="CreateContainer within sandbox \"a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 7 06:11:20.437226 containerd[1626]: time="2025-07-07T06:11:20.436317397Z" level=info msg="Container 14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:11:20.439924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324019749.mount: Deactivated successfully.
Jul 7 06:11:20.454975 containerd[1626]: time="2025-07-07T06:11:20.454875814Z" level=info msg="CreateContainer within sandbox \"a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa\""
Jul 7 06:11:20.455832 containerd[1626]: time="2025-07-07T06:11:20.455411864Z" level=info msg="StartContainer for \"14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa\""
Jul 7 06:11:20.457056 containerd[1626]: time="2025-07-07T06:11:20.457025104Z" level=info msg="connecting to shim 14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa" address="unix:///run/containerd/s/e7ec5277fa6f332f7630811fc64ae0be4db6bb5e09daf28856fae27b454583a8" protocol=ttrpc version=3
Jul 7 06:11:20.485291 systemd[1]: Started cri-containerd-14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa.scope - libcontainer container 14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa.
Jul 7 06:11:20.527766 containerd[1626]: time="2025-07-07T06:11:20.527736197Z" level=info msg="StartContainer for \"14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa\" returns successfully"
Jul 7 06:11:22.157523 kubelet[2913]: E0707 06:11:22.157305 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4"
Jul 7 06:11:22.933365 systemd[1]: cri-containerd-14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa.scope: Deactivated successfully.
Jul 7 06:11:22.933584 systemd[1]: cri-containerd-14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa.scope: Consumed 357ms CPU time, 159M memory peak, 12K read from disk, 171.2M written to disk.
Jul 7 06:11:23.076079 kubelet[2913]: I0707 06:11:23.075684 2913 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 7 06:11:23.095300 containerd[1626]: time="2025-07-07T06:11:23.095270064Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa\" id:\"14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa\" pid:3636 exited_at:{seconds:1751868683 nanos:89007272}"
Jul 7 06:11:23.096119 containerd[1626]: time="2025-07-07T06:11:23.096057082Z" level=info msg="received exit event container_id:\"14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa\" id:\"14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa\" pid:3636 exited_at:{seconds:1751868683 nanos:89007272}"
Jul 7 06:11:23.147545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14fd5846156947647f127ef5c7a24e4656c832bd0672e559eaa643638e2842aa-rootfs.mount: Deactivated successfully.
Jul 7 06:11:23.263644 systemd[1]: Created slice kubepods-besteffort-podc39e2e58_014e_46c2_a9f8_678760dfbaab.slice - libcontainer container kubepods-besteffort-podc39e2e58_014e_46c2_a9f8_678760dfbaab.slice.
Jul 7 06:11:23.278280 systemd[1]: Created slice kubepods-besteffort-pod9916c420_a66b_48d2_b745_9d139345d2e5.slice - libcontainer container kubepods-besteffort-pod9916c420_a66b_48d2_b745_9d139345d2e5.slice.
Jul 7 06:11:23.286269 systemd[1]: Created slice kubepods-burstable-podea85a59a_05a6_4889_9a35_7c48f6b90bf6.slice - libcontainer container kubepods-burstable-podea85a59a_05a6_4889_9a35_7c48f6b90bf6.slice.
Jul 7 06:11:23.295035 systemd[1]: Created slice kubepods-besteffort-pod7c232664_a291_43ff_ace0_996e82732b0e.slice - libcontainer container kubepods-besteffort-pod7c232664_a291_43ff_ace0_996e82732b0e.slice.
Jul 7 06:11:23.301460 systemd[1]: Created slice kubepods-burstable-pod3bb9c053_df04_4380_8529_a63aa06f6a2b.slice - libcontainer container kubepods-burstable-pod3bb9c053_df04_4380_8529_a63aa06f6a2b.slice.
Jul 7 06:11:23.307698 systemd[1]: Created slice kubepods-besteffort-pod0c028a4a_b59d_4d3f_950b_f5683fe34de3.slice - libcontainer container kubepods-besteffort-pod0c028a4a_b59d_4d3f_950b_f5683fe34de3.slice.
Jul 7 06:11:23.315825 systemd[1]: Created slice kubepods-besteffort-pod69e3cf54_901a_4c69_86b8_2454097d2c89.slice - libcontainer container kubepods-besteffort-pod69e3cf54_901a_4c69_86b8_2454097d2c89.slice.
Jul 7 06:11:23.320262 systemd[1]: Created slice kubepods-besteffort-pod1dd8bedf_e2f9_4637_8575_0ab51c5e70c5.slice - libcontainer container kubepods-besteffort-pod1dd8bedf_e2f9_4637_8575_0ab51c5e70c5.slice.
Jul 7 06:11:23.347645 kubelet[2913]: I0707 06:11:23.347615 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxxhp\" (UniqueName: \"kubernetes.io/projected/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-kube-api-access-kxxhp\") pod \"whisker-6c69c4b6bd-cjg8q\" (UID: \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\") " pod="calico-system/whisker-6c69c4b6bd-cjg8q"
Jul 7 06:11:23.347645 kubelet[2913]: I0707 06:11:23.347645 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c232664-a291-43ff-ace0-996e82732b0e-calico-apiserver-certs\") pod \"calico-apiserver-b79765fb9-tngnt\" (UID: \"7c232664-a291-43ff-ace0-996e82732b0e\") " pod="calico-apiserver/calico-apiserver-b79765fb9-tngnt"
Jul 7 06:11:23.347913 kubelet[2913]: I0707 06:11:23.347777 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9916c420-a66b-48d2-b745-9d139345d2e5-tigera-ca-bundle\") pod \"calico-kube-controllers-79bf997d64-4kw92\" (UID: \"9916c420-a66b-48d2-b745-9d139345d2e5\") " pod="calico-system/calico-kube-controllers-79bf997d64-4kw92"
Jul 7 06:11:23.347913 kubelet[2913]: I0707 06:11:23.347800 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-backend-key-pair\") pod \"whisker-6c69c4b6bd-cjg8q\" (UID: \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\") " pod="calico-system/whisker-6c69c4b6bd-cjg8q"
Jul 7 06:11:23.347913 kubelet[2913]: I0707 06:11:23.347811 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47qj\" (UniqueName: \"kubernetes.io/projected/3bb9c053-df04-4380-8529-a63aa06f6a2b-kube-api-access-z47qj\") pod \"coredns-668d6bf9bc-wqb5c\" (UID: \"3bb9c053-df04-4380-8529-a63aa06f6a2b\") " pod="kube-system/coredns-668d6bf9bc-wqb5c"
Jul 7 06:11:23.347913 kubelet[2913]: I0707 06:11:23.347824 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf96c\" (UniqueName: \"kubernetes.io/projected/9916c420-a66b-48d2-b745-9d139345d2e5-kube-api-access-tf96c\") pod \"calico-kube-controllers-79bf997d64-4kw92\" (UID: \"9916c420-a66b-48d2-b745-9d139345d2e5\") " pod="calico-system/calico-kube-controllers-79bf997d64-4kw92"
Jul 7 06:11:23.347913 kubelet[2913]: I0707 06:11:23.347853 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bb9c053-df04-4380-8529-a63aa06f6a2b-config-volume\") pod \"coredns-668d6bf9bc-wqb5c\" (UID: \"3bb9c053-df04-4380-8529-a63aa06f6a2b\") " pod="kube-system/coredns-668d6bf9bc-wqb5c"
Jul 7 06:11:23.348004 kubelet[2913]: I0707 06:11:23.347863 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-ca-bundle\") pod \"whisker-6c69c4b6bd-cjg8q\" (UID: \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\") " pod="calico-system/whisker-6c69c4b6bd-cjg8q"
Jul 7 06:11:23.348004 kubelet[2913]: I0707 06:11:23.347874 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8qnk\" (UniqueName: \"kubernetes.io/projected/7c232664-a291-43ff-ace0-996e82732b0e-kube-api-access-w8qnk\") pod \"calico-apiserver-b79765fb9-tngnt\" (UID: \"7c232664-a291-43ff-ace0-996e82732b0e\") " pod="calico-apiserver/calico-apiserver-b79765fb9-tngnt"
Jul 7 06:11:23.348004 kubelet[2913]: I0707 06:11:23.347884 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqmf4\" (UniqueName: \"kubernetes.io/projected/c39e2e58-014e-46c2-a9f8-678760dfbaab-kube-api-access-lqmf4\") pod \"calico-apiserver-945f9d44-bq9wq\" (UID: \"c39e2e58-014e-46c2-a9f8-678760dfbaab\") " pod="calico-apiserver/calico-apiserver-945f9d44-bq9wq"
Jul 7 06:11:23.348004 kubelet[2913]: I0707 06:11:23.347895 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0c028a4a-b59d-4d3f-950b-f5683fe34de3-calico-apiserver-certs\") pod \"calico-apiserver-945f9d44-42wqf\" (UID: \"0c028a4a-b59d-4d3f-950b-f5683fe34de3\") " pod="calico-apiserver/calico-apiserver-945f9d44-42wqf"
Jul 7 06:11:23.348004 kubelet[2913]: I0707 06:11:23.347918 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvb4\" (UniqueName: \"kubernetes.io/projected/ea85a59a-05a6-4889-9a35-7c48f6b90bf6-kube-api-access-2nvb4\") pod \"coredns-668d6bf9bc-cz5kf\" (UID: \"ea85a59a-05a6-4889-9a35-7c48f6b90bf6\") " pod="kube-system/coredns-668d6bf9bc-cz5kf"
Jul 7 06:11:23.348140 kubelet[2913]: I0707 06:11:23.347930 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69e3cf54-901a-4c69-86b8-2454097d2c89-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-xlb84\" (UID: \"69e3cf54-901a-4c69-86b8-2454097d2c89\") " pod="calico-system/goldmane-768f4c5c69-xlb84"
Jul 7 06:11:23.348140 kubelet[2913]: I0707 06:11:23.347940 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8g92\" (UniqueName: \"kubernetes.io/projected/69e3cf54-901a-4c69-86b8-2454097d2c89-kube-api-access-k8g92\") pod \"goldmane-768f4c5c69-xlb84\" (UID: \"69e3cf54-901a-4c69-86b8-2454097d2c89\") " pod="calico-system/goldmane-768f4c5c69-xlb84"
Jul 7 06:11:23.348140 kubelet[2913]: I0707 06:11:23.347950 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfmlz\" (UniqueName: \"kubernetes.io/projected/0c028a4a-b59d-4d3f-950b-f5683fe34de3-kube-api-access-wfmlz\") pod \"calico-apiserver-945f9d44-42wqf\" (UID: \"0c028a4a-b59d-4d3f-950b-f5683fe34de3\") " pod="calico-apiserver/calico-apiserver-945f9d44-42wqf"
Jul 7 06:11:23.348140 kubelet[2913]: I0707 06:11:23.347967 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea85a59a-05a6-4889-9a35-7c48f6b90bf6-config-volume\") pod \"coredns-668d6bf9bc-cz5kf\" (UID: \"ea85a59a-05a6-4889-9a35-7c48f6b90bf6\") " pod="kube-system/coredns-668d6bf9bc-cz5kf"
Jul 7 06:11:23.348140 kubelet[2913]: I0707 06:11:23.347977 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e3cf54-901a-4c69-86b8-2454097d2c89-config\") pod \"goldmane-768f4c5c69-xlb84\" (UID: \"69e3cf54-901a-4c69-86b8-2454097d2c89\") " pod="calico-system/goldmane-768f4c5c69-xlb84"
Jul 7 06:11:23.354811 kubelet[2913]: I0707 06:11:23.348001 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/69e3cf54-901a-4c69-86b8-2454097d2c89-goldmane-key-pair\") pod \"goldmane-768f4c5c69-xlb84\" (UID: \"69e3cf54-901a-4c69-86b8-2454097d2c89\") " pod="calico-system/goldmane-768f4c5c69-xlb84"
Jul 7 06:11:23.354811 kubelet[2913]: I0707 06:11:23.348012 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c39e2e58-014e-46c2-a9f8-678760dfbaab-calico-apiserver-certs\") pod \"calico-apiserver-945f9d44-bq9wq\" (UID: \"c39e2e58-014e-46c2-a9f8-678760dfbaab\") " pod="calico-apiserver/calico-apiserver-945f9d44-bq9wq"
Jul 7 06:11:23.574938 containerd[1626]: time="2025-07-07T06:11:23.574856087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-bq9wq,Uid:c39e2e58-014e-46c2-a9f8-678760dfbaab,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:11:23.585428 containerd[1626]: time="2025-07-07T06:11:23.585390924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bf997d64-4kw92,Uid:9916c420-a66b-48d2-b745-9d139345d2e5,Namespace:calico-system,Attempt:0,}"
Jul 7 06:11:23.601573 containerd[1626]: time="2025-07-07T06:11:23.601523019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79765fb9-tngnt,Uid:7c232664-a291-43ff-ace0-996e82732b0e,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:11:23.602048 containerd[1626]: time="2025-07-07T06:11:23.601984541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cz5kf,Uid:ea85a59a-05a6-4889-9a35-7c48f6b90bf6,Namespace:kube-system,Attempt:0,}"
Jul 7 06:11:23.609575 containerd[1626]: time="2025-07-07T06:11:23.609415607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqb5c,Uid:3bb9c053-df04-4380-8529-a63aa06f6a2b,Namespace:kube-system,Attempt:0,}"
Jul 7 06:11:23.614481 containerd[1626]: time="2025-07-07T06:11:23.614454441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-42wqf,Uid:0c028a4a-b59d-4d3f-950b-f5683fe34de3,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:11:23.631576 containerd[1626]: time="2025-07-07T06:11:23.631547687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c69c4b6bd-cjg8q,Uid:1dd8bedf-e2f9-4637-8575-0ab51c5e70c5,Namespace:calico-system,Attempt:0,}"
Jul 7 06:11:23.634826 containerd[1626]: time="2025-07-07T06:11:23.634790068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xlb84,Uid:69e3cf54-901a-4c69-86b8-2454097d2c89,Namespace:calico-system,Attempt:0,}"
Jul 7 06:11:24.134649 containerd[1626]: time="2025-07-07T06:11:24.134271723Z" level=error msg="Failed to destroy network for sandbox \"505088e829d4d864a33b00cf571ca44c306c80d36755d0b5738c75701b4188aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.164361 containerd[1626]: time="2025-07-07T06:11:24.138918200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c69c4b6bd-cjg8q,Uid:1dd8bedf-e2f9-4637-8575-0ab51c5e70c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"505088e829d4d864a33b00cf571ca44c306c80d36755d0b5738c75701b4188aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.164561 containerd[1626]: time="2025-07-07T06:11:24.151635610Z" level=error msg="Failed to destroy network for sandbox \"1c04054932058cdbb9a64aa306b74c2ac7d4ef88a6e4e34e243d92bfbfaadad2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.167472 containerd[1626]: time="2025-07-07T06:11:24.152685117Z" level=error msg="Failed to destroy network for sandbox \"39bd7f2d9b6586700bbacd40edeeec0e3cd25b46ba21d34490f74363d722b7c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.168087 systemd[1]: run-netns-cni\x2da990a0a2\x2d0bc8\x2d85a0\x2dc7ab\x2d2fba6ec7dcbc.mount: Deactivated successfully.
Jul 7 06:11:24.169743 containerd[1626]: time="2025-07-07T06:11:24.169703069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-42wqf,Uid:0c028a4a-b59d-4d3f-950b-f5683fe34de3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c04054932058cdbb9a64aa306b74c2ac7d4ef88a6e4e34e243d92bfbfaadad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.170221 containerd[1626]: time="2025-07-07T06:11:24.162238991Z" level=error msg="Failed to destroy network for sandbox \"231289a9f703561b9b05702d5b611aaf91bea7f714d720ac2ef31da2adbe0395\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.173220 systemd[1]: run-netns-cni\x2d1a74ba57\x2d46ce\x2d26af\x2dfc22\x2d240b8040e644.mount: Deactivated successfully.
Jul 7 06:11:24.173504 containerd[1626]: time="2025-07-07T06:11:24.173486521Z" level=error msg="Failed to destroy network for sandbox \"e3ba72fcd7f6f54ed50379fd886d5beb854b3d74c9286b561917464e3a79876f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.176208 containerd[1626]: time="2025-07-07T06:11:24.174764696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bf997d64-4kw92,Uid:9916c420-a66b-48d2-b745-9d139345d2e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"231289a9f703561b9b05702d5b611aaf91bea7f714d720ac2ef31da2adbe0395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.176870 systemd[1]: run-netns-cni\x2d0eac774d\x2d0344\x2d5645\x2d4690\x2ddd2ef48519cd.mount: Deactivated successfully.
Jul 7 06:11:24.179172 kubelet[2913]: E0707 06:11:24.178773 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c04054932058cdbb9a64aa306b74c2ac7d4ef88a6e4e34e243d92bfbfaadad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.179172 kubelet[2913]: E0707 06:11:24.178835 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c04054932058cdbb9a64aa306b74c2ac7d4ef88a6e4e34e243d92bfbfaadad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-945f9d44-42wqf"
Jul 7 06:11:24.179172 kubelet[2913]: E0707 06:11:24.178849 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c04054932058cdbb9a64aa306b74c2ac7d4ef88a6e4e34e243d92bfbfaadad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-945f9d44-42wqf"
Jul 7 06:11:24.179691 containerd[1626]: time="2025-07-07T06:11:24.179659178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xlb84,Uid:69e3cf54-901a-4c69-86b8-2454097d2c89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39bd7f2d9b6586700bbacd40edeeec0e3cd25b46ba21d34490f74363d722b7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.181009 containerd[1626]: time="2025-07-07T06:11:24.180641997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-bq9wq,Uid:c39e2e58-014e-46c2-a9f8-678760dfbaab,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ba72fcd7f6f54ed50379fd886d5beb854b3d74c9286b561917464e3a79876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.181060 systemd[1]: run-netns-cni\x2d0feefaab\x2d7d62\x2dc633\x2d98db\x2df1e63a543527.mount: Deactivated successfully.
Jul 7 06:11:24.181538 kubelet[2913]: E0707 06:11:24.181507 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-945f9d44-42wqf_calico-apiserver(0c028a4a-b59d-4d3f-950b-f5683fe34de3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-945f9d44-42wqf_calico-apiserver(0c028a4a-b59d-4d3f-950b-f5683fe34de3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c04054932058cdbb9a64aa306b74c2ac7d4ef88a6e4e34e243d92bfbfaadad2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-945f9d44-42wqf" podUID="0c028a4a-b59d-4d3f-950b-f5683fe34de3"
Jul 7 06:11:24.182324 containerd[1626]: time="2025-07-07T06:11:24.182304611Z" level=error msg="Failed to destroy network for sandbox \"7b7cdd8497e8433348b1839a629b9e95cde8e7a13936e3aefe4382273bd3c399\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.185182 containerd[1626]: time="2025-07-07T06:11:24.185023784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqb5c,Uid:3bb9c053-df04-4380-8529-a63aa06f6a2b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7cdd8497e8433348b1839a629b9e95cde8e7a13936e3aefe4382273bd3c399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.186454 kubelet[2913]: E0707 06:11:24.185137 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ba72fcd7f6f54ed50379fd886d5beb854b3d74c9286b561917464e3a79876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.186454 kubelet[2913]: E0707 06:11:24.185171 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ba72fcd7f6f54ed50379fd886d5beb854b3d74c9286b561917464e3a79876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-945f9d44-bq9wq"
Jul 7 06:11:24.186454 kubelet[2913]: E0707 06:11:24.185184 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ba72fcd7f6f54ed50379fd886d5beb854b3d74c9286b561917464e3a79876f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-945f9d44-bq9wq"
Jul 7 06:11:24.186550 kubelet[2913]: E0707 06:11:24.185207 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-945f9d44-bq9wq_calico-apiserver(c39e2e58-014e-46c2-a9f8-678760dfbaab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-945f9d44-bq9wq_calico-apiserver(c39e2e58-014e-46c2-a9f8-678760dfbaab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3ba72fcd7f6f54ed50379fd886d5beb854b3d74c9286b561917464e3a79876f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-945f9d44-bq9wq" podUID="c39e2e58-014e-46c2-a9f8-678760dfbaab"
Jul 7 06:11:24.186550 kubelet[2913]: E0707 06:11:24.185289 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231289a9f703561b9b05702d5b611aaf91bea7f714d720ac2ef31da2adbe0395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.186550 kubelet[2913]: E0707 06:11:24.185648 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231289a9f703561b9b05702d5b611aaf91bea7f714d720ac2ef31da2adbe0395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bf997d64-4kw92"
Jul 7 06:11:24.186634 kubelet[2913]: E0707 06:11:24.185665 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231289a9f703561b9b05702d5b611aaf91bea7f714d720ac2ef31da2adbe0395\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79bf997d64-4kw92"
Jul 7 06:11:24.186634 kubelet[2913]: E0707 06:11:24.185698 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79bf997d64-4kw92_calico-system(9916c420-a66b-48d2-b745-9d139345d2e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79bf997d64-4kw92_calico-system(9916c420-a66b-48d2-b745-9d139345d2e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"231289a9f703561b9b05702d5b611aaf91bea7f714d720ac2ef31da2adbe0395\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79bf997d64-4kw92" podUID="9916c420-a66b-48d2-b745-9d139345d2e5"
Jul 7 06:11:24.186634 kubelet[2913]: E0707 06:11:24.185937 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39bd7f2d9b6586700bbacd40edeeec0e3cd25b46ba21d34490f74363d722b7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.186727 kubelet[2913]: E0707 06:11:24.186111 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39bd7f2d9b6586700bbacd40edeeec0e3cd25b46ba21d34490f74363d722b7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xlb84"
Jul 7 06:11:24.186727 kubelet[2913]: E0707 06:11:24.186127 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39bd7f2d9b6586700bbacd40edeeec0e3cd25b46ba21d34490f74363d722b7c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xlb84"
Jul 7 06:11:24.186727 kubelet[2913]: E0707 06:11:24.186410 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-xlb84_calico-system(69e3cf54-901a-4c69-86b8-2454097d2c89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-xlb84_calico-system(69e3cf54-901a-4c69-86b8-2454097d2c89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39bd7f2d9b6586700bbacd40edeeec0e3cd25b46ba21d34490f74363d722b7c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xlb84" podUID="69e3cf54-901a-4c69-86b8-2454097d2c89"
Jul 7 06:11:24.186812 kubelet[2913]: E0707 06:11:24.186470 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7cdd8497e8433348b1839a629b9e95cde8e7a13936e3aefe4382273bd3c399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.186812 kubelet[2913]: E0707 06:11:24.186707 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7cdd8497e8433348b1839a629b9e95cde8e7a13936e3aefe4382273bd3c399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wqb5c"
Jul 7 06:11:24.186871 kubelet[2913]: E0707 06:11:24.186812 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b7cdd8497e8433348b1839a629b9e95cde8e7a13936e3aefe4382273bd3c399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wqb5c"
Jul 7 06:11:24.186871 kubelet[2913]: E0707 06:11:24.186832 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wqb5c_kube-system(3bb9c053-df04-4380-8529-a63aa06f6a2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wqb5c_kube-system(3bb9c053-df04-4380-8529-a63aa06f6a2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b7cdd8497e8433348b1839a629b9e95cde8e7a13936e3aefe4382273bd3c399\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wqb5c" podUID="3bb9c053-df04-4380-8529-a63aa06f6a2b"
Jul 7 06:11:24.186871 kubelet[2913]: E0707 06:11:24.186854 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"505088e829d4d864a33b00cf571ca44c306c80d36755d0b5738c75701b4188aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 06:11:24.187723 kubelet[2913]: E0707 06:11:24.186865 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"505088e829d4d864a33b00cf571ca44c306c80d36755d0b5738c75701b4188aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c69c4b6bd-cjg8q"
Jul 7 06:11:24.187723 kubelet[2913]: E0707 06:11:24.187120 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"505088e829d4d864a33b00cf571ca44c306c80d36755d0b5738c75701b4188aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c69c4b6bd-cjg8q"
Jul 7 06:11:24.187723 kubelet[2913]: E0707 06:11:24.187140 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c69c4b6bd-cjg8q_calico-system(1dd8bedf-e2f9-4637-8575-0ab51c5e70c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c69c4b6bd-cjg8q_calico-system(1dd8bedf-e2f9-4637-8575-0ab51c5e70c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"505088e829d4d864a33b00cf571ca44c306c80d36755d0b5738c75701b4188aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c69c4b6bd-cjg8q" podUID="1dd8bedf-e2f9-4637-8575-0ab51c5e70c5"
Jul 7 06:11:24.188936 containerd[1626]: time="2025-07-07T06:11:24.188843840Z" level=error msg="Failed to destroy network for sandbox
\"6b1cb36609f6d1ebd129c1c289ce6fa13cedc8af49e0f4e2b385aa3d4458f9d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.190210 containerd[1626]: time="2025-07-07T06:11:24.190185653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79765fb9-tngnt,Uid:7c232664-a291-43ff-ace0-996e82732b0e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b1cb36609f6d1ebd129c1c289ce6fa13cedc8af49e0f4e2b385aa3d4458f9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.190770 kubelet[2913]: E0707 06:11:24.190492 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b1cb36609f6d1ebd129c1c289ce6fa13cedc8af49e0f4e2b385aa3d4458f9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.190770 kubelet[2913]: E0707 06:11:24.190527 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b1cb36609f6d1ebd129c1c289ce6fa13cedc8af49e0f4e2b385aa3d4458f9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b79765fb9-tngnt" Jul 7 06:11:24.190770 kubelet[2913]: E0707 06:11:24.190543 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6b1cb36609f6d1ebd129c1c289ce6fa13cedc8af49e0f4e2b385aa3d4458f9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b79765fb9-tngnt" Jul 7 06:11:24.190886 kubelet[2913]: E0707 06:11:24.190590 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b79765fb9-tngnt_calico-apiserver(7c232664-a291-43ff-ace0-996e82732b0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b79765fb9-tngnt_calico-apiserver(7c232664-a291-43ff-ace0-996e82732b0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b1cb36609f6d1ebd129c1c289ce6fa13cedc8af49e0f4e2b385aa3d4458f9d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b79765fb9-tngnt" podUID="7c232664-a291-43ff-ace0-996e82732b0e" Jul 7 06:11:24.191635 containerd[1626]: time="2025-07-07T06:11:24.191557525Z" level=error msg="Failed to destroy network for sandbox \"c7866522582e0ff10a429a88d11049227343a134d746286d012fa19a8605ac34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.192098 containerd[1626]: time="2025-07-07T06:11:24.192076239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cz5kf,Uid:ea85a59a-05a6-4889-9a35-7c48f6b90bf6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7866522582e0ff10a429a88d11049227343a134d746286d012fa19a8605ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.192272 kubelet[2913]: E0707 06:11:24.192257 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7866522582e0ff10a429a88d11049227343a134d746286d012fa19a8605ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.192335 kubelet[2913]: E0707 06:11:24.192326 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7866522582e0ff10a429a88d11049227343a134d746286d012fa19a8605ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cz5kf" Jul 7 06:11:24.194914 kubelet[2913]: E0707 06:11:24.192421 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7866522582e0ff10a429a88d11049227343a134d746286d012fa19a8605ac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cz5kf" Jul 7 06:11:24.194914 kubelet[2913]: E0707 06:11:24.192450 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cz5kf_kube-system(ea85a59a-05a6-4889-9a35-7c48f6b90bf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cz5kf_kube-system(ea85a59a-05a6-4889-9a35-7c48f6b90bf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c7866522582e0ff10a429a88d11049227343a134d746286d012fa19a8605ac34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cz5kf" podUID="ea85a59a-05a6-4889-9a35-7c48f6b90bf6" Jul 7 06:11:24.196739 systemd[1]: Created slice kubepods-besteffort-pod6b187562_15be_401e_89b0_8601141135c4.slice - libcontainer container kubepods-besteffort-pod6b187562_15be_401e_89b0_8601141135c4.slice. Jul 7 06:11:24.231373 containerd[1626]: time="2025-07-07T06:11:24.231231947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2hl9,Uid:6b187562-15be-401e-89b0-8601141135c4,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:24.314351 containerd[1626]: time="2025-07-07T06:11:24.314020346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:11:24.338293 containerd[1626]: time="2025-07-07T06:11:24.338213764Z" level=error msg="Failed to destroy network for sandbox \"cef5177e27a3f302b33775b5da97c4bdb755be068a4eda5ec9d288693d28eb90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.343347 containerd[1626]: time="2025-07-07T06:11:24.343319555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2hl9,Uid:6b187562-15be-401e-89b0-8601141135c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef5177e27a3f302b33775b5da97c4bdb755be068a4eda5ec9d288693d28eb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.344163 kubelet[2913]: E0707 06:11:24.344139 2913 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef5177e27a3f302b33775b5da97c4bdb755be068a4eda5ec9d288693d28eb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:24.344210 kubelet[2913]: E0707 06:11:24.344174 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef5177e27a3f302b33775b5da97c4bdb755be068a4eda5ec9d288693d28eb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2hl9" Jul 7 06:11:24.344210 kubelet[2913]: E0707 06:11:24.344188 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cef5177e27a3f302b33775b5da97c4bdb755be068a4eda5ec9d288693d28eb90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2hl9" Jul 7 06:11:24.344268 kubelet[2913]: E0707 06:11:24.344210 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s2hl9_calico-system(6b187562-15be-401e-89b0-8601141135c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s2hl9_calico-system(6b187562-15be-401e-89b0-8601141135c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cef5177e27a3f302b33775b5da97c4bdb755be068a4eda5ec9d288693d28eb90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2hl9" podUID="6b187562-15be-401e-89b0-8601141135c4" Jul 7 06:11:25.148098 systemd[1]: run-netns-cni\x2d1168fb35\x2d06e8\x2dcd48\x2db1f0\x2d143ea397af1b.mount: Deactivated successfully. Jul 7 06:11:25.148166 systemd[1]: run-netns-cni\x2d27339444\x2dff93\x2de1a7\x2d89ca\x2dedfa476a0fc2.mount: Deactivated successfully. Jul 7 06:11:25.148214 systemd[1]: run-netns-cni\x2d37099d78\x2dad43\x2d60b8\x2da26f\x2d884bea30d385.mount: Deactivated successfully. Jul 7 06:11:31.734332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170541815.mount: Deactivated successfully. Jul 7 06:11:32.148089 containerd[1626]: time="2025-07-07T06:11:32.147856035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:32.154576 containerd[1626]: time="2025-07-07T06:11:32.123949938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:11:32.169470 containerd[1626]: time="2025-07-07T06:11:32.169430959Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:32.184931 containerd[1626]: time="2025-07-07T06:11:32.184898539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:32.185791 containerd[1626]: time="2025-07-07T06:11:32.185769432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size 
\"158500025\" in 7.870707514s" Jul 7 06:11:32.185858 containerd[1626]: time="2025-07-07T06:11:32.185793628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:11:32.303523 containerd[1626]: time="2025-07-07T06:11:32.303489832Z" level=info msg="CreateContainer within sandbox \"a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:11:32.438489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255267491.mount: Deactivated successfully. Jul 7 06:11:32.439438 containerd[1626]: time="2025-07-07T06:11:32.438905722Z" level=info msg="Container c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:32.582412 containerd[1626]: time="2025-07-07T06:11:32.582384759Z" level=info msg="CreateContainer within sandbox \"a668deace2c81757d0d9aa07a8341990d6ef92c8c6059c8656151f539b57e203\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\"" Jul 7 06:11:32.583927 containerd[1626]: time="2025-07-07T06:11:32.583900138Z" level=info msg="StartContainer for \"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\"" Jul 7 06:11:32.595433 containerd[1626]: time="2025-07-07T06:11:32.595380315Z" level=info msg="connecting to shim c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d" address="unix:///run/containerd/s/e7ec5277fa6f332f7630811fc64ae0be4db6bb5e09daf28856fae27b454583a8" protocol=ttrpc version=3 Jul 7 06:11:32.849182 systemd[1]: Started cri-containerd-c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d.scope - libcontainer container c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d. 
Jul 7 06:11:32.906968 containerd[1626]: time="2025-07-07T06:11:32.906878438Z" level=info msg="StartContainer for \"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\" returns successfully" Jul 7 06:11:33.388777 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:11:33.393274 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 06:11:33.661305 containerd[1626]: time="2025-07-07T06:11:33.661147915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\" id:\"8d61ed0edf07d385fe38fcef31c76e30c48d4a963417af341cadcf31ebf602e6\" pid:3986 exit_status:1 exited_at:{seconds:1751868693 nanos:648215124}" Jul 7 06:11:33.845035 kubelet[2913]: I0707 06:11:33.844989 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gsx2n" podStartSLOduration=2.63280571 podStartE2EDuration="23.8449661s" podCreationTimestamp="2025-07-07 06:11:10 +0000 UTC" firstStartedPulling="2025-07-07 06:11:10.974171976 +0000 UTC m=+16.925001347" lastFinishedPulling="2025-07-07 06:11:32.186332362 +0000 UTC m=+38.137161737" observedRunningTime="2025-07-07 06:11:33.353233487 +0000 UTC m=+39.304062867" watchObservedRunningTime="2025-07-07 06:11:33.8449661 +0000 UTC m=+39.795795482" Jul 7 06:11:33.930261 kubelet[2913]: I0707 06:11:33.929912 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-ca-bundle\") pod \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\" (UID: \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\") " Jul 7 06:11:33.930415 kubelet[2913]: I0707 06:11:33.929947 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxxhp\" (UniqueName: \"kubernetes.io/projected/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-kube-api-access-kxxhp\") 
pod \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\" (UID: \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\") " Jul 7 06:11:33.930415 kubelet[2913]: I0707 06:11:33.930372 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-backend-key-pair\") pod \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\" (UID: \"1dd8bedf-e2f9-4637-8575-0ab51c5e70c5\") " Jul 7 06:11:33.930606 kubelet[2913]: I0707 06:11:33.930587 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1dd8bedf-e2f9-4637-8575-0ab51c5e70c5" (UID: "1dd8bedf-e2f9-4637-8575-0ab51c5e70c5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:11:33.934682 systemd[1]: var-lib-kubelet-pods-1dd8bedf\x2de2f9\x2d4637\x2d8575\x2d0ab51c5e70c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxxhp.mount: Deactivated successfully. Jul 7 06:11:33.934949 systemd[1]: var-lib-kubelet-pods-1dd8bedf\x2de2f9\x2d4637\x2d8575\x2d0ab51c5e70c5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:11:33.936132 kubelet[2913]: I0707 06:11:33.935135 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1dd8bedf-e2f9-4637-8575-0ab51c5e70c5" (UID: "1dd8bedf-e2f9-4637-8575-0ab51c5e70c5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:11:33.936132 kubelet[2913]: I0707 06:11:33.935504 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-kube-api-access-kxxhp" (OuterVolumeSpecName: "kube-api-access-kxxhp") pod "1dd8bedf-e2f9-4637-8575-0ab51c5e70c5" (UID: "1dd8bedf-e2f9-4637-8575-0ab51c5e70c5"). InnerVolumeSpecName "kube-api-access-kxxhp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:11:34.033955 kubelet[2913]: I0707 06:11:34.033907 2913 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:11:34.033955 kubelet[2913]: I0707 06:11:34.033932 2913 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:11:34.033955 kubelet[2913]: I0707 06:11:34.033940 2913 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kxxhp\" (UniqueName: \"kubernetes.io/projected/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5-kube-api-access-kxxhp\") on node \"localhost\" DevicePath \"\"" Jul 7 06:11:34.164522 systemd[1]: Removed slice kubepods-besteffort-pod1dd8bedf_e2f9_4637_8575_0ab51c5e70c5.slice - libcontainer container kubepods-besteffort-pod1dd8bedf_e2f9_4637_8575_0ab51c5e70c5.slice. Jul 7 06:11:34.431994 systemd[1]: Created slice kubepods-besteffort-podb9fd2d60_bba1_410b_8013_623eaaa77c14.slice - libcontainer container kubepods-besteffort-podb9fd2d60_bba1_410b_8013_623eaaa77c14.slice. 
Jul 7 06:11:34.453754 containerd[1626]: time="2025-07-07T06:11:34.453728832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\" id:\"5e3fdb0bc0669b450faa2e83c09206cf476b964a72dfe0a4c0f9a6ca007ff239\" pid:4031 exit_status:1 exited_at:{seconds:1751868694 nanos:453536287}" Jul 7 06:11:34.538248 kubelet[2913]: I0707 06:11:34.538213 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9fd2d60-bba1-410b-8013-623eaaa77c14-whisker-ca-bundle\") pod \"whisker-74496479b4-kmsvx\" (UID: \"b9fd2d60-bba1-410b-8013-623eaaa77c14\") " pod="calico-system/whisker-74496479b4-kmsvx" Jul 7 06:11:34.538248 kubelet[2913]: I0707 06:11:34.538249 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498zl\" (UniqueName: \"kubernetes.io/projected/b9fd2d60-bba1-410b-8013-623eaaa77c14-kube-api-access-498zl\") pod \"whisker-74496479b4-kmsvx\" (UID: \"b9fd2d60-bba1-410b-8013-623eaaa77c14\") " pod="calico-system/whisker-74496479b4-kmsvx" Jul 7 06:11:34.556689 kubelet[2913]: I0707 06:11:34.538271 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9fd2d60-bba1-410b-8013-623eaaa77c14-whisker-backend-key-pair\") pod \"whisker-74496479b4-kmsvx\" (UID: \"b9fd2d60-bba1-410b-8013-623eaaa77c14\") " pod="calico-system/whisker-74496479b4-kmsvx" Jul 7 06:11:34.735989 containerd[1626]: time="2025-07-07T06:11:34.735881731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74496479b4-kmsvx,Uid:b9fd2d60-bba1-410b-8013-623eaaa77c14,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:35.344373 containerd[1626]: time="2025-07-07T06:11:35.344229325Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-79bf997d64-4kw92,Uid:9916c420-a66b-48d2-b745-9d139345d2e5,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:35.640671 systemd-networkd[1529]: vxlan.calico: Link UP Jul 7 06:11:35.640683 systemd-networkd[1529]: vxlan.calico: Gained carrier Jul 7 06:11:35.952798 systemd-networkd[1529]: calia70a088fb93: Link UP Jul 7 06:11:35.953634 systemd-networkd[1529]: calia70a088fb93: Gained carrier Jul 7 06:11:35.984420 containerd[1626]: 2025-07-07 06:11:34.774 [INFO][4046] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:11:35.984420 containerd[1626]: 2025-07-07 06:11:35.335 [INFO][4046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--74496479b4--kmsvx-eth0 whisker-74496479b4- calico-system b9fd2d60-bba1-410b-8013-623eaaa77c14 922 0 2025-07-07 06:11:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74496479b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-74496479b4-kmsvx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia70a088fb93 [] [] }} ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-" Jul 7 06:11:35.984420 containerd[1626]: 2025-07-07 06:11:35.335 [INFO][4046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-eth0" Jul 7 06:11:35.984420 containerd[1626]: 2025-07-07 06:11:35.844 [INFO][4159] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" HandleID="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Workload="localhost-k8s-whisker--74496479b4--kmsvx-eth0" Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.846 [INFO][4159] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" HandleID="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Workload="localhost-k8s-whisker--74496479b4--kmsvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fa330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-74496479b4-kmsvx", "timestamp":"2025-07-07 06:11:35.844603872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.846 [INFO][4159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.847 [INFO][4159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.848 [INFO][4159] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.897 [INFO][4159] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" host="localhost" Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.913 [INFO][4159] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.919 [INFO][4159] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.920 [INFO][4159] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.922 [INFO][4159] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:35.984742 containerd[1626]: 2025-07-07 06:11:35.922 [INFO][4159] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" host="localhost" Jul 7 06:11:35.984983 containerd[1626]: 2025-07-07 06:11:35.923 [INFO][4159] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7 Jul 7 06:11:35.984983 containerd[1626]: 2025-07-07 06:11:35.926 [INFO][4159] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" host="localhost" Jul 7 06:11:35.984983 containerd[1626]: 2025-07-07 06:11:35.931 [INFO][4159] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" host="localhost" Jul 7 06:11:35.984983 containerd[1626]: 2025-07-07 06:11:35.931 [INFO][4159] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" host="localhost" Jul 7 06:11:35.984983 containerd[1626]: 2025-07-07 06:11:35.931 [INFO][4159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:11:35.984983 containerd[1626]: 2025-07-07 06:11:35.931 [INFO][4159] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" HandleID="k8s-pod-network.d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Workload="localhost-k8s-whisker--74496479b4--kmsvx-eth0" Jul 7 06:11:35.985129 containerd[1626]: 2025-07-07 06:11:35.935 [INFO][4046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74496479b4--kmsvx-eth0", GenerateName:"whisker-74496479b4-", Namespace:"calico-system", SelfLink:"", UID:"b9fd2d60-bba1-410b-8013-623eaaa77c14", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74496479b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-74496479b4-kmsvx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia70a088fb93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:35.985129 containerd[1626]: 2025-07-07 06:11:35.935 [INFO][4046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-eth0" Jul 7 06:11:35.985198 containerd[1626]: 2025-07-07 06:11:35.935 [INFO][4046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia70a088fb93 ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-eth0" Jul 7 06:11:35.985198 containerd[1626]: 2025-07-07 06:11:35.959 [INFO][4046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-eth0" Jul 7 06:11:35.985240 containerd[1626]: 2025-07-07 06:11:35.959 [INFO][4046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" 
WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--74496479b4--kmsvx-eth0", GenerateName:"whisker-74496479b4-", Namespace:"calico-system", SelfLink:"", UID:"b9fd2d60-bba1-410b-8013-623eaaa77c14", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74496479b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7", Pod:"whisker-74496479b4-kmsvx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia70a088fb93", MAC:"32:84:f7:cc:02:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:35.985288 containerd[1626]: 2025-07-07 06:11:35.982 [INFO][4046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" Namespace="calico-system" Pod="whisker-74496479b4-kmsvx" WorkloadEndpoint="localhost-k8s-whisker--74496479b4--kmsvx-eth0" Jul 7 06:11:36.117532 systemd-networkd[1529]: calic181700365f: Link UP Jul 7 06:11:36.118186 systemd-networkd[1529]: 
calic181700365f: Gained carrier Jul 7 06:11:36.135430 containerd[1626]: 2025-07-07 06:11:36.035 [INFO][4262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0 calico-kube-controllers-79bf997d64- calico-system 9916c420-a66b-48d2-b745-9d139345d2e5 834 0 2025-07-07 06:11:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79bf997d64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79bf997d64-4kw92 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic181700365f [] [] }} ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-" Jul 7 06:11:36.135430 containerd[1626]: 2025-07-07 06:11:36.035 [INFO][4262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" Jul 7 06:11:36.135430 containerd[1626]: 2025-07-07 06:11:36.073 [INFO][4282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" HandleID="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Workload="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.073 [INFO][4282] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" HandleID="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Workload="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79bf997d64-4kw92", "timestamp":"2025-07-07 06:11:36.073595542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.073 [INFO][4282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.073 [INFO][4282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.073 [INFO][4282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.081 [INFO][4282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" host="localhost" Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.083 [INFO][4282] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.086 [INFO][4282] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.088 [INFO][4282] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.089 [INFO][4282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:36.135603 containerd[1626]: 2025-07-07 06:11:36.089 [INFO][4282] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" host="localhost" Jul 7 06:11:36.148101 containerd[1626]: 2025-07-07 06:11:36.090 [INFO][4282] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc Jul 7 06:11:36.148101 containerd[1626]: 2025-07-07 06:11:36.105 [INFO][4282] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" host="localhost" Jul 7 06:11:36.148101 containerd[1626]: 2025-07-07 06:11:36.111 [INFO][4282] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" host="localhost" Jul 7 06:11:36.148101 containerd[1626]: 2025-07-07 06:11:36.111 [INFO][4282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" host="localhost" Jul 7 06:11:36.148101 containerd[1626]: 2025-07-07 06:11:36.111 [INFO][4282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:11:36.148101 containerd[1626]: 2025-07-07 06:11:36.111 [INFO][4282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" HandleID="k8s-pod-network.83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Workload="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" Jul 7 06:11:36.163883 containerd[1626]: 2025-07-07 06:11:36.114 [INFO][4262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0", GenerateName:"calico-kube-controllers-79bf997d64-", Namespace:"calico-system", SelfLink:"", UID:"9916c420-a66b-48d2-b745-9d139345d2e5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79bf997d64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79bf997d64-4kw92", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic181700365f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:36.163945 containerd[1626]: 2025-07-07 06:11:36.114 [INFO][4262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" Jul 7 06:11:36.163945 containerd[1626]: 2025-07-07 06:11:36.114 [INFO][4262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic181700365f ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" Jul 7 06:11:36.163945 containerd[1626]: 2025-07-07 06:11:36.118 [INFO][4262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" Jul 7 06:11:36.170989 containerd[1626]: 2025-07-07 06:11:36.120 [INFO][4262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0", GenerateName:"calico-kube-controllers-79bf997d64-", Namespace:"calico-system", SelfLink:"", UID:"9916c420-a66b-48d2-b745-9d139345d2e5", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79bf997d64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc", Pod:"calico-kube-controllers-79bf997d64-4kw92", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic181700365f", MAC:"9a:e1:c5:c8:c8:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:36.171059 kubelet[2913]: I0707 06:11:36.170315 2913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd8bedf-e2f9-4637-8575-0ab51c5e70c5" path="/var/lib/kubelet/pods/1dd8bedf-e2f9-4637-8575-0ab51c5e70c5/volumes" Jul 7 06:11:36.171252 containerd[1626]: 2025-07-07 06:11:36.133 [INFO][4262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" Namespace="calico-system" 
Pod="calico-kube-controllers-79bf997d64-4kw92" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79bf997d64--4kw92-eth0" Jul 7 06:11:36.171252 containerd[1626]: time="2025-07-07T06:11:36.158684446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-bq9wq,Uid:c39e2e58-014e-46c2-a9f8-678760dfbaab,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:11:36.283584 containerd[1626]: time="2025-07-07T06:11:36.282416916Z" level=info msg="connecting to shim 83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc" address="unix:///run/containerd/s/fbbd2575f54c97b2a14adef6deb6c661a1011d0e71189ed5a5eb54dc2a96a04e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:36.285616 containerd[1626]: time="2025-07-07T06:11:36.285575443Z" level=info msg="connecting to shim d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7" address="unix:///run/containerd/s/07d863d65586ff09e71e99fc3b340766f56a736ccd5bc384c4ef7c9f5dea0705" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:36.304413 systemd-networkd[1529]: cali9a06932c89e: Link UP Jul 7 06:11:36.305664 systemd-networkd[1529]: cali9a06932c89e: Gained carrier Jul 7 06:11:36.331249 systemd[1]: Started cri-containerd-83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc.scope - libcontainer container 83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc. Jul 7 06:11:36.333437 systemd[1]: Started cri-containerd-d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7.scope - libcontainer container d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7. 
Jul 7 06:11:36.344327 containerd[1626]: 2025-07-07 06:11:36.223 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0 calico-apiserver-945f9d44- calico-apiserver c39e2e58-014e-46c2-a9f8-678760dfbaab 832 0 2025-07-07 06:11:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:945f9d44 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-945f9d44-bq9wq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a06932c89e [] [] }} ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-" Jul 7 06:11:36.344327 containerd[1626]: 2025-07-07 06:11:36.223 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:11:36.344327 containerd[1626]: 2025-07-07 06:11:36.250 [INFO][4318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.250 [INFO][4318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" 
HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-945f9d44-bq9wq", "timestamp":"2025-07-07 06:11:36.250492905 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.250 [INFO][4318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.250 [INFO][4318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.250 [INFO][4318] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.255 [INFO][4318] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" host="localhost" Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.257 [INFO][4318] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.271 [INFO][4318] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.272 [INFO][4318] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 06:11:36.274 [INFO][4318] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:36.344540 containerd[1626]: 2025-07-07 
06:11:36.274 [INFO][4318] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" host="localhost" Jul 7 06:11:36.350895 containerd[1626]: 2025-07-07 06:11:36.274 [INFO][4318] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466 Jul 7 06:11:36.350895 containerd[1626]: 2025-07-07 06:11:36.278 [INFO][4318] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" host="localhost" Jul 7 06:11:36.350895 containerd[1626]: 2025-07-07 06:11:36.291 [INFO][4318] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" host="localhost" Jul 7 06:11:36.350895 containerd[1626]: 2025-07-07 06:11:36.291 [INFO][4318] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" host="localhost" Jul 7 06:11:36.350895 containerd[1626]: 2025-07-07 06:11:36.291 [INFO][4318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:11:36.350895 containerd[1626]: 2025-07-07 06:11:36.291 [INFO][4318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:11:36.351112 containerd[1626]: 2025-07-07 06:11:36.300 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0", GenerateName:"calico-apiserver-945f9d44-", Namespace:"calico-apiserver", SelfLink:"", UID:"c39e2e58-014e-46c2-a9f8-678760dfbaab", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"945f9d44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-945f9d44-bq9wq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a06932c89e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:36.351169 containerd[1626]: 2025-07-07 06:11:36.300 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:11:36.351169 containerd[1626]: 2025-07-07 06:11:36.300 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a06932c89e ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:11:36.351169 containerd[1626]: 2025-07-07 06:11:36.305 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:11:36.351224 containerd[1626]: 2025-07-07 06:11:36.306 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0", GenerateName:"calico-apiserver-945f9d44-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"c39e2e58-014e-46c2-a9f8-678760dfbaab", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"945f9d44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466", Pod:"calico-apiserver-945f9d44-bq9wq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a06932c89e", MAC:"0a:27:c6:fb:40:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:36.351268 containerd[1626]: 2025-07-07 06:11:36.327 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-bq9wq" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:11:36.360931 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:36.363537 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:36.424341 containerd[1626]: 
time="2025-07-07T06:11:36.424280734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74496479b4-kmsvx,Uid:b9fd2d60-bba1-410b-8013-623eaaa77c14,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7\"" Jul 7 06:11:36.440461 containerd[1626]: time="2025-07-07T06:11:36.440435600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79bf997d64-4kw92,Uid:9916c420-a66b-48d2-b745-9d139345d2e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc\"" Jul 7 06:11:36.450732 containerd[1626]: time="2025-07-07T06:11:36.450715122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:11:36.537700 containerd[1626]: time="2025-07-07T06:11:36.537457598Z" level=info msg="connecting to shim 6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" address="unix:///run/containerd/s/6dcd299412ec6c543f78bf4f3ea19d93ec8010929c933dfc9cf79e170c419a50" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:36.556155 systemd[1]: Started cri-containerd-6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466.scope - libcontainer container 6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466. 
Jul 7 06:11:36.568559 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:36.625175 containerd[1626]: time="2025-07-07T06:11:36.625149415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-bq9wq,Uid:c39e2e58-014e-46c2-a9f8-678760dfbaab,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\"" Jul 7 06:11:37.157578 containerd[1626]: time="2025-07-07T06:11:37.157498728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xlb84,Uid:69e3cf54-901a-4c69-86b8-2454097d2c89,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:37.183410 containerd[1626]: time="2025-07-07T06:11:37.183191504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2hl9,Uid:6b187562-15be-401e-89b0-8601141135c4,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:37.296188 systemd-networkd[1529]: cali2d339758fa5: Link UP Jul 7 06:11:37.296579 systemd-networkd[1529]: cali2d339758fa5: Gained carrier Jul 7 06:11:37.342158 containerd[1626]: 2025-07-07 06:11:37.202 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--xlb84-eth0 goldmane-768f4c5c69- calico-system 69e3cf54-901a-4c69-86b8-2454097d2c89 841 0 2025-07-07 06:11:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-xlb84 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2d339758fa5 [] [] }} ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-" Jul 
7 06:11:37.342158 containerd[1626]: 2025-07-07 06:11:37.203 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0" Jul 7 06:11:37.342158 containerd[1626]: 2025-07-07 06:11:37.224 [INFO][4476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" HandleID="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Workload="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0" Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.224 [INFO][4476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" HandleID="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Workload="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f040), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-xlb84", "timestamp":"2025-07-07 06:11:37.224718282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.225 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.225 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.225 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.231 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" host="localhost"
Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.235 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.239 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.277 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.280 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:37.342310 containerd[1626]: 2025-07-07 06:11:37.280 [INFO][4476] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" host="localhost"
Jul 7 06:11:37.342504 containerd[1626]: 2025-07-07 06:11:37.281 [INFO][4476] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522
Jul 7 06:11:37.342504 containerd[1626]: 2025-07-07 06:11:37.285 [INFO][4476] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" host="localhost"
Jul 7 06:11:37.342504 containerd[1626]: 2025-07-07 06:11:37.293 [INFO][4476] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" host="localhost"
Jul 7 06:11:37.342504 containerd[1626]: 2025-07-07 06:11:37.293 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" host="localhost"
Jul 7 06:11:37.342504 containerd[1626]: 2025-07-07 06:11:37.293 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:11:37.342504 containerd[1626]: 2025-07-07 06:11:37.293 [INFO][4476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" HandleID="k8s-pod-network.f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Workload="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0"
Jul 7 06:11:37.342620 containerd[1626]: 2025-07-07 06:11:37.294 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--xlb84-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"69e3cf54-901a-4c69-86b8-2454097d2c89", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-xlb84", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d339758fa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:37.342620 containerd[1626]: 2025-07-07 06:11:37.294 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0"
Jul 7 06:11:37.342682 containerd[1626]: 2025-07-07 06:11:37.294 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d339758fa5 ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0"
Jul 7 06:11:37.342682 containerd[1626]: 2025-07-07 06:11:37.296 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0"
Jul 7 06:11:37.342716 containerd[1626]: 2025-07-07 06:11:37.296 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--xlb84-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"69e3cf54-901a-4c69-86b8-2454097d2c89", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522", Pod:"goldmane-768f4c5c69-xlb84", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2d339758fa5", MAC:"36:a7:4e:3c:d7:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:37.342836 containerd[1626]: 2025-07-07 06:11:37.338 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" Namespace="calico-system" Pod="goldmane-768f4c5c69-xlb84" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--xlb84-eth0"
Jul 7 06:11:37.356160 systemd-networkd[1529]: cali9a06932c89e: Gained IPv6LL
Jul 7 06:11:37.379316 systemd-networkd[1529]: cali5f39e4d8518: Link UP
Jul 7 06:11:37.380033 systemd-networkd[1529]: cali5f39e4d8518: Gained carrier
Jul 7 06:11:37.394853 containerd[1626]: 2025-07-07 06:11:37.237 [INFO][4480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s2hl9-eth0 csi-node-driver- calico-system 6b187562-15be-401e-89b0-8601141135c4 715 0 2025-07-07 06:11:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s2hl9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5f39e4d8518 [] [] }} ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-"
Jul 7 06:11:37.394853 containerd[1626]: 2025-07-07 06:11:37.238 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-eth0"
Jul 7 06:11:37.394853 containerd[1626]: 2025-07-07 06:11:37.305 [INFO][4495] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" HandleID="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Workload="localhost-k8s-csi--node--driver--s2hl9-eth0"
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.306 [INFO][4495] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" HandleID="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Workload="localhost-k8s-csi--node--driver--s2hl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s2hl9", "timestamp":"2025-07-07 06:11:37.305919547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.306 [INFO][4495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.306 [INFO][4495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.306 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.332 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" host="localhost"
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.352 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.355 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.356 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.358 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:37.395178 containerd[1626]: 2025-07-07 06:11:37.358 [INFO][4495] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" host="localhost"
Jul 7 06:11:37.399997 containerd[1626]: 2025-07-07 06:11:37.359 [INFO][4495] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d
Jul 7 06:11:37.399997 containerd[1626]: 2025-07-07 06:11:37.369 [INFO][4495] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" host="localhost"
Jul 7 06:11:37.399997 containerd[1626]: 2025-07-07 06:11:37.376 [INFO][4495] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" host="localhost"
Jul 7 06:11:37.399997 containerd[1626]: 2025-07-07 06:11:37.376 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" host="localhost"
Jul 7 06:11:37.399997 containerd[1626]: 2025-07-07 06:11:37.376 [INFO][4495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:11:37.399997 containerd[1626]: 2025-07-07 06:11:37.376 [INFO][4495] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" HandleID="k8s-pod-network.d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Workload="localhost-k8s-csi--node--driver--s2hl9-eth0"
Jul 7 06:11:37.400113 containerd[1626]: 2025-07-07 06:11:37.377 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2hl9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b187562-15be-401e-89b0-8601141135c4", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s2hl9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f39e4d8518", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:37.400162 containerd[1626]: 2025-07-07 06:11:37.377 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-eth0"
Jul 7 06:11:37.400162 containerd[1626]: 2025-07-07 06:11:37.378 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f39e4d8518 ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-eth0"
Jul 7 06:11:37.400162 containerd[1626]: 2025-07-07 06:11:37.379 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-eth0"
Jul 7 06:11:37.400214 containerd[1626]: 2025-07-07 06:11:37.380 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s2hl9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b187562-15be-401e-89b0-8601141135c4", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d", Pod:"csi-node-driver-s2hl9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5f39e4d8518", MAC:"aa:07:e3:08:5c:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:37.400253 containerd[1626]: 2025-07-07 06:11:37.392 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" Namespace="calico-system" Pod="csi-node-driver-s2hl9" WorkloadEndpoint="localhost-k8s-csi--node--driver--s2hl9-eth0"
Jul 7 06:11:37.415780 containerd[1626]: time="2025-07-07T06:11:37.414688101Z" level=info msg="connecting to shim f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522" address="unix:///run/containerd/s/580b62b8fc85a40187778ebbf8caf5996b5b243d02c81136430ad4025c2fd6a3" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:37.442294 systemd[1]: Started cri-containerd-f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522.scope - libcontainer container f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522.
Jul 7 06:11:37.456279 containerd[1626]: time="2025-07-07T06:11:37.456243903Z" level=info msg="connecting to shim d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d" address="unix:///run/containerd/s/a03247944cff35b58ba94ad0c5b3a0b10788663ce424e318c50d3bbcdb7e8011" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:37.475472 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:11:37.484159 systemd-networkd[1529]: vxlan.calico: Gained IPv6LL
Jul 7 06:11:37.491217 systemd[1]: Started cri-containerd-d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d.scope - libcontainer container d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d.
Jul 7 06:11:37.510472 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:11:37.538481 containerd[1626]: time="2025-07-07T06:11:37.538415288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xlb84,Uid:69e3cf54-901a-4c69-86b8-2454097d2c89,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522\""
Jul 7 06:11:37.542119 containerd[1626]: time="2025-07-07T06:11:37.542059288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2hl9,Uid:6b187562-15be-401e-89b0-8601141135c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d\""
Jul 7 06:11:37.932248 systemd-networkd[1529]: calia70a088fb93: Gained IPv6LL
Jul 7 06:11:38.124273 systemd-networkd[1529]: calic181700365f: Gained IPv6LL
Jul 7 06:11:38.167102 containerd[1626]: time="2025-07-07T06:11:38.166979878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cz5kf,Uid:ea85a59a-05a6-4889-9a35-7c48f6b90bf6,Namespace:kube-system,Attempt:0,}"
Jul 7 06:11:38.186339 containerd[1626]: time="2025-07-07T06:11:38.186034913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqb5c,Uid:3bb9c053-df04-4380-8529-a63aa06f6a2b,Namespace:kube-system,Attempt:0,}"
Jul 7 06:11:38.305612 systemd-networkd[1529]: cali7e27e827f2a: Link UP
Jul 7 06:11:38.306314 systemd-networkd[1529]: cali7e27e827f2a: Gained carrier
Jul 7 06:11:38.329086 containerd[1626]: 2025-07-07 06:11:38.199 [INFO][4612] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0 coredns-668d6bf9bc- kube-system ea85a59a-05a6-4889-9a35-7c48f6b90bf6 843 0 2025-07-07 06:11:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-cz5kf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e27e827f2a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-"
Jul 7 06:11:38.329086 containerd[1626]: 2025-07-07 06:11:38.199 [INFO][4612] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0"
Jul 7 06:11:38.329086 containerd[1626]: 2025-07-07 06:11:38.233 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" HandleID="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Workload="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0"
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.233 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" HandleID="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Workload="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-cz5kf", "timestamp":"2025-07-07 06:11:38.233248953 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.233 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.233 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.233 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.240 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" host="localhost"
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.245 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.253 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.257 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.262 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:38.332861 containerd[1626]: 2025-07-07 06:11:38.262 [INFO][4631] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" host="localhost"
Jul 7 06:11:38.342040 containerd[1626]: 2025-07-07 06:11:38.263 [INFO][4631] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7
Jul 7 06:11:38.342040 containerd[1626]: 2025-07-07 06:11:38.282 [INFO][4631] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" host="localhost"
Jul 7 06:11:38.342040 containerd[1626]: 2025-07-07 06:11:38.299 [INFO][4631] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" host="localhost"
Jul 7 06:11:38.342040 containerd[1626]: 2025-07-07 06:11:38.299 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" host="localhost"
Jul 7 06:11:38.342040 containerd[1626]: 2025-07-07 06:11:38.299 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:11:38.342040 containerd[1626]: 2025-07-07 06:11:38.299 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" HandleID="k8s-pod-network.3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Workload="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0"
Jul 7 06:11:38.345790 containerd[1626]: 2025-07-07 06:11:38.302 [INFO][4612] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ea85a59a-05a6-4889-9a35-7c48f6b90bf6", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-cz5kf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e27e827f2a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:38.346123 containerd[1626]: 2025-07-07 06:11:38.302 [INFO][4612] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0"
Jul 7 06:11:38.346123 containerd[1626]: 2025-07-07 06:11:38.302 [INFO][4612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e27e827f2a ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0"
Jul 7 06:11:38.346123 containerd[1626]: 2025-07-07 06:11:38.306 [INFO][4612] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0"
Jul 7 06:11:38.349563 containerd[1626]: 2025-07-07 06:11:38.307 [INFO][4612] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ea85a59a-05a6-4889-9a35-7c48f6b90bf6", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7", Pod:"coredns-668d6bf9bc-cz5kf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e27e827f2a", MAC:"26:bc:53:2b:e7:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:38.349563 containerd[1626]: 2025-07-07 06:11:38.324 [INFO][4612] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" Namespace="kube-system" Pod="coredns-668d6bf9bc-cz5kf" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cz5kf-eth0"
Jul 7 06:11:38.383356 systemd-networkd[1529]: cali1c62eb9bbf0: Link UP
Jul 7 06:11:38.383974 systemd-networkd[1529]: cali1c62eb9bbf0: Gained carrier
Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.250 [INFO][4623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0 coredns-668d6bf9bc- kube-system 3bb9c053-df04-4380-8529-a63aa06f6a2b 840 0 2025-07-07 06:11:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wqb5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1c62eb9bbf0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-"
Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.250 [INFO][4623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0"
Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.275 [INFO][4645] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" HandleID="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Workload="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0"
Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.275 [INFO][4645] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" HandleID="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Workload="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wqb5c", "timestamp":"2025-07-07 06:11:38.275164585 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.275 [INFO][4645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.299 [INFO][4645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.299 [INFO][4645] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.341 [INFO][4645] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.348 [INFO][4645] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.352 [INFO][4645] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.354 [INFO][4645] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.356 [INFO][4645] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.357 [INFO][4645] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.358 [INFO][4645] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569 Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.366 [INFO][4645] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.374 [INFO][4645] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.374 [INFO][4645] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" host="localhost" Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.374 [INFO][4645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:11:38.428233 containerd[1626]: 2025-07-07 06:11:38.374 [INFO][4645] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" HandleID="k8s-pod-network.2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Workload="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" Jul 7 06:11:38.435260 containerd[1626]: 2025-07-07 06:11:38.378 [INFO][4623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bb9c053-df04-4380-8529-a63aa06f6a2b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wqb5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c62eb9bbf0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:38.435260 containerd[1626]: 2025-07-07 06:11:38.378 [INFO][4623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" Jul 7 06:11:38.435260 containerd[1626]: 2025-07-07 06:11:38.378 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c62eb9bbf0 ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" Jul 7 06:11:38.435260 containerd[1626]: 2025-07-07 06:11:38.385 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" Jul 7 06:11:38.435260 containerd[1626]: 2025-07-07 06:11:38.387 [INFO][4623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bb9c053-df04-4380-8529-a63aa06f6a2b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569", Pod:"coredns-668d6bf9bc-wqb5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c62eb9bbf0", MAC:"76:5d:ca:94:1e:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:38.435260 containerd[1626]: 2025-07-07 06:11:38.411 [INFO][4623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" Namespace="kube-system" Pod="coredns-668d6bf9bc-wqb5c" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wqb5c-eth0" Jul 7 06:11:38.508223 systemd-networkd[1529]: cali2d339758fa5: Gained IPv6LL Jul 7 06:11:38.540730 containerd[1626]: time="2025-07-07T06:11:38.540657728Z" level=info msg="connecting to shim 3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7" address="unix:///run/containerd/s/6e64c23e16ad806bc10dbf643f26731e78bb7b5998340272ed38c7a65da5a20a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:38.596394 containerd[1626]: time="2025-07-07T06:11:38.595927751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:38.597544 containerd[1626]: time="2025-07-07T06:11:38.597528965Z" level=info msg="connecting to shim 2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569" address="unix:///run/containerd/s/938402c24a5bfaea3269b6bd46113cc7abb8a072fe7150f50f772c7641957187" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:38.603408 containerd[1626]: time="2025-07-07T06:11:38.603388491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:11:38.610164 containerd[1626]: time="2025-07-07T06:11:38.609568485Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 
7 06:11:38.615484 containerd[1626]: time="2025-07-07T06:11:38.615465898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:38.616992 containerd[1626]: time="2025-07-07T06:11:38.616965331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.166170165s" Jul 7 06:11:38.617110 containerd[1626]: time="2025-07-07T06:11:38.617044363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:11:38.626223 containerd[1626]: time="2025-07-07T06:11:38.626024028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:11:38.626342 systemd[1]: Started cri-containerd-2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569.scope - libcontainer container 2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569. Jul 7 06:11:38.630572 systemd[1]: Started cri-containerd-3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7.scope - libcontainer container 3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7. 
Jul 7 06:11:38.641427 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:38.644539 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:38.668928 containerd[1626]: time="2025-07-07T06:11:38.668826901Z" level=info msg="CreateContainer within sandbox \"d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:11:38.677905 containerd[1626]: time="2025-07-07T06:11:38.677854352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cz5kf,Uid:ea85a59a-05a6-4889-9a35-7c48f6b90bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7\"" Jul 7 06:11:38.684609 containerd[1626]: time="2025-07-07T06:11:38.684314017Z" level=info msg="CreateContainer within sandbox \"3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:11:38.694944 containerd[1626]: time="2025-07-07T06:11:38.694925168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wqb5c,Uid:3bb9c053-df04-4380-8529-a63aa06f6a2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569\"" Jul 7 06:11:38.697492 containerd[1626]: time="2025-07-07T06:11:38.697444654Z" level=info msg="CreateContainer within sandbox \"2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:11:38.704442 containerd[1626]: time="2025-07-07T06:11:38.704416808Z" level=info msg="Container c57c66f9834f838eaec8a9768e96f365f29145d764a0cf5c7df555999d0469e3: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:38.728998 containerd[1626]: time="2025-07-07T06:11:38.728973293Z" level=info 
msg="CreateContainer within sandbox \"d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c57c66f9834f838eaec8a9768e96f365f29145d764a0cf5c7df555999d0469e3\"" Jul 7 06:11:38.729355 containerd[1626]: time="2025-07-07T06:11:38.729339937Z" level=info msg="StartContainer for \"c57c66f9834f838eaec8a9768e96f365f29145d764a0cf5c7df555999d0469e3\"" Jul 7 06:11:38.730807 containerd[1626]: time="2025-07-07T06:11:38.730656774Z" level=info msg="connecting to shim c57c66f9834f838eaec8a9768e96f365f29145d764a0cf5c7df555999d0469e3" address="unix:///run/containerd/s/07d863d65586ff09e71e99fc3b340766f56a736ccd5bc384c4ef7c9f5dea0705" protocol=ttrpc version=3 Jul 7 06:11:38.745180 systemd[1]: Started cri-containerd-c57c66f9834f838eaec8a9768e96f365f29145d764a0cf5c7df555999d0469e3.scope - libcontainer container c57c66f9834f838eaec8a9768e96f365f29145d764a0cf5c7df555999d0469e3. Jul 7 06:11:38.795739 containerd[1626]: time="2025-07-07T06:11:38.795669263Z" level=info msg="StartContainer for \"c57c66f9834f838eaec8a9768e96f365f29145d764a0cf5c7df555999d0469e3\" returns successfully" Jul 7 06:11:38.872172 containerd[1626]: time="2025-07-07T06:11:38.872139064Z" level=info msg="Container cfebab36693c076c96d9ac66cf3816e3955190e873153027c8a818aff524641f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:38.872678 containerd[1626]: time="2025-07-07T06:11:38.872657328Z" level=info msg="Container c38e5477d82678a0b7f880eeea14c8a4e057bd007b7e8dac55115d436091deb5: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:38.970108 containerd[1626]: time="2025-07-07T06:11:38.970059873Z" level=info msg="CreateContainer within sandbox \"3043104716ee1e4e3e116320abe549d9fcc3df6324dee7103dcf415bcbc98be7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c38e5477d82678a0b7f880eeea14c8a4e057bd007b7e8dac55115d436091deb5\"" Jul 7 06:11:38.970426 containerd[1626]: time="2025-07-07T06:11:38.970338604Z" 
level=info msg="CreateContainer within sandbox \"2961bb95b2b62a21a57f28f099145c9695d184102be344c3baf13ea609148569\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfebab36693c076c96d9ac66cf3816e3955190e873153027c8a818aff524641f\"" Jul 7 06:11:38.970788 containerd[1626]: time="2025-07-07T06:11:38.970757461Z" level=info msg="StartContainer for \"c38e5477d82678a0b7f880eeea14c8a4e057bd007b7e8dac55115d436091deb5\"" Jul 7 06:11:38.973426 containerd[1626]: time="2025-07-07T06:11:38.970990610Z" level=info msg="StartContainer for \"cfebab36693c076c96d9ac66cf3816e3955190e873153027c8a818aff524641f\"" Jul 7 06:11:38.973426 containerd[1626]: time="2025-07-07T06:11:38.971430853Z" level=info msg="connecting to shim c38e5477d82678a0b7f880eeea14c8a4e057bd007b7e8dac55115d436091deb5" address="unix:///run/containerd/s/6e64c23e16ad806bc10dbf643f26731e78bb7b5998340272ed38c7a65da5a20a" protocol=ttrpc version=3 Jul 7 06:11:38.973426 containerd[1626]: time="2025-07-07T06:11:38.972671490Z" level=info msg="connecting to shim cfebab36693c076c96d9ac66cf3816e3955190e873153027c8a818aff524641f" address="unix:///run/containerd/s/938402c24a5bfaea3269b6bd46113cc7abb8a072fe7150f50f772c7641957187" protocol=ttrpc version=3 Jul 7 06:11:38.998235 systemd[1]: Started cri-containerd-c38e5477d82678a0b7f880eeea14c8a4e057bd007b7e8dac55115d436091deb5.scope - libcontainer container c38e5477d82678a0b7f880eeea14c8a4e057bd007b7e8dac55115d436091deb5. Jul 7 06:11:39.000225 systemd[1]: Started cri-containerd-cfebab36693c076c96d9ac66cf3816e3955190e873153027c8a818aff524641f.scope - libcontainer container cfebab36693c076c96d9ac66cf3816e3955190e873153027c8a818aff524641f. 
Jul 7 06:11:39.062839 containerd[1626]: time="2025-07-07T06:11:39.062731456Z" level=info msg="StartContainer for \"c38e5477d82678a0b7f880eeea14c8a4e057bd007b7e8dac55115d436091deb5\" returns successfully" Jul 7 06:11:39.064992 containerd[1626]: time="2025-07-07T06:11:39.064875958Z" level=info msg="StartContainer for \"cfebab36693c076c96d9ac66cf3816e3955190e873153027c8a818aff524641f\" returns successfully" Jul 7 06:11:39.148177 systemd-networkd[1529]: cali5f39e4d8518: Gained IPv6LL Jul 7 06:11:39.157396 containerd[1626]: time="2025-07-07T06:11:39.157312582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-42wqf,Uid:0c028a4a-b59d-4d3f-950b-f5683fe34de3,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:11:39.157585 containerd[1626]: time="2025-07-07T06:11:39.157566265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79765fb9-tngnt,Uid:7c232664-a291-43ff-ace0-996e82732b0e,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:11:39.319743 systemd-networkd[1529]: cali416fadd0890: Link UP Jul 7 06:11:39.321118 systemd-networkd[1529]: cali416fadd0890: Gained carrier Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.244 [INFO][4857] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0 calico-apiserver-b79765fb9- calico-apiserver 7c232664-a291-43ff-ace0-996e82732b0e 839 0 2025-07-07 06:11:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b79765fb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b79765fb9-tngnt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali416fadd0890 [] [] }} ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" 
Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.244 [INFO][4857] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.280 [INFO][4890] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" HandleID="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Workload="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.280 [INFO][4890] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" HandleID="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Workload="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b79765fb9-tngnt", "timestamp":"2025-07-07 06:11:39.279996588 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.280 [INFO][4890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.280 [INFO][4890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.280 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.288 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.293 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.297 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.299 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.302 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.302 [INFO][4890] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.303 [INFO][4890] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74 Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.306 [INFO][4890] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.313 [INFO][4890] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.313 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" host="localhost" Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.314 [INFO][4890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:11:39.336287 containerd[1626]: 2025-07-07 06:11:39.314 [INFO][4890] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" HandleID="k8s-pod-network.99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Workload="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" Jul 7 06:11:39.350063 containerd[1626]: 2025-07-07 06:11:39.316 [INFO][4857] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0", GenerateName:"calico-apiserver-b79765fb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c232664-a291-43ff-ace0-996e82732b0e", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79765fb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b79765fb9-tngnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali416fadd0890", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:39.350063 containerd[1626]: 2025-07-07 06:11:39.316 [INFO][4857] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" Jul 7 06:11:39.350063 containerd[1626]: 2025-07-07 06:11:39.316 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali416fadd0890 ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" Jul 7 06:11:39.350063 containerd[1626]: 2025-07-07 06:11:39.320 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" Jul 7 06:11:39.350063 containerd[1626]: 2025-07-07 06:11:39.320 [INFO][4857] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0", GenerateName:"calico-apiserver-b79765fb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c232664-a291-43ff-ace0-996e82732b0e", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79765fb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74", Pod:"calico-apiserver-b79765fb9-tngnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali416fadd0890", MAC:"36:65:19:1d:6a:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:39.350063 containerd[1626]: 2025-07-07 06:11:39.333 [INFO][4857] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-tngnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--tngnt-eth0" Jul 7 06:11:39.382303 containerd[1626]: time="2025-07-07T06:11:39.382243050Z" level=info msg="connecting to shim 99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74" address="unix:///run/containerd/s/7333a838300962c508dc1eae3d89787517da8c0cf8244766d07b0f01ea0c782c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:39.413327 systemd[1]: Started cri-containerd-99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74.scope - libcontainer container 99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74. Jul 7 06:11:39.439791 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:39.451394 systemd-networkd[1529]: caliecb0f39c7e2: Link UP Jul 7 06:11:39.451567 systemd-networkd[1529]: caliecb0f39c7e2: Gained carrier Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.237 [INFO][4855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0 calico-apiserver-945f9d44- calico-apiserver 0c028a4a-b59d-4d3f-950b-f5683fe34de3 844 0 2025-07-07 06:11:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:945f9d44 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-945f9d44-42wqf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliecb0f39c7e2 [] [] }} ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.237 [INFO][4855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.286 [INFO][4887] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.287 [INFO][4887] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-945f9d44-42wqf", "timestamp":"2025-07-07 06:11:39.285995489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.287 [INFO][4887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.314 [INFO][4887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.314 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.394 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.401 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.421 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.425 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.430 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.430 [INFO][4887] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.435 [INFO][4887] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7 Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.439 [INFO][4887] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.445 [INFO][4887] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.445 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" host="localhost" Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.445 [INFO][4887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:11:39.464559 containerd[1626]: 2025-07-07 06:11:39.445 [INFO][4887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:11:39.465305 containerd[1626]: 2025-07-07 06:11:39.448 [INFO][4855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0", GenerateName:"calico-apiserver-945f9d44-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c028a4a-b59d-4d3f-950b-f5683fe34de3", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"945f9d44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-945f9d44-42wqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecb0f39c7e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:39.465305 containerd[1626]: 2025-07-07 06:11:39.448 [INFO][4855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:11:39.465305 containerd[1626]: 2025-07-07 06:11:39.448 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecb0f39c7e2 ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:11:39.465305 containerd[1626]: 2025-07-07 06:11:39.450 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:11:39.465305 containerd[1626]: 2025-07-07 06:11:39.450 [INFO][4855] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0", GenerateName:"calico-apiserver-945f9d44-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c028a4a-b59d-4d3f-950b-f5683fe34de3", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"945f9d44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7", Pod:"calico-apiserver-945f9d44-42wqf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecb0f39c7e2", MAC:"12:47:57:e2:5e:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:39.465305 containerd[1626]: 2025-07-07 06:11:39.461 [INFO][4855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" 
Namespace="calico-apiserver" Pod="calico-apiserver-945f9d44-42wqf" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:11:39.523773 containerd[1626]: time="2025-07-07T06:11:39.523685511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79765fb9-tngnt,Uid:7c232664-a291-43ff-ace0-996e82732b0e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74\"" Jul 7 06:11:39.532004 containerd[1626]: time="2025-07-07T06:11:39.531976027Z" level=info msg="connecting to shim 5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" address="unix:///run/containerd/s/fdd2ea843a386eb5a7d292ab5269442bec32066204f9c64122940a75a926e362" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:39.562127 systemd[1]: Started cri-containerd-5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7.scope - libcontainer container 5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7. 
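The ipam_plugin.go entries above walk the full assignment cycle (request count, host-wide lock, block affinity, claimed IPs). A minimal sketch of scraping the final `assigned addresses` summary back out of such a line, assuming the exact field layout shown in the excerpt (this is log scraping, not the Calico IPAM API):

```python
import re

# Matches the "ipam_plugin.go 283" summary entry seen in the log above.
# The field layout is copied from the excerpt; it is an assumption, not a stable API.
ASSIGN_RE = re.compile(r"Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\]")

def assigned_ipv4(line: str) -> list:
    """Return the IPv4 CIDRs announced by a Calico CNI IPAM 'assigned addresses' entry."""
    m = ASSIGN_RE.search(line)
    return m.group(1).split() if m else []

sample = ('[INFO][4887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses '
          'IPv4=[192.168.88.137/26] IPv6=[] ContainerID="5cd27859e758..."')
print(assigned_ipv4(sample))  # → ['192.168.88.137/26']
```

With both apiserver pods above, the same pattern would yield 192.168.88.136/32 and 192.168.88.137/26 respectively from their own summary entries.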
Jul 7 06:11:39.572896 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:39.602206 containerd[1626]: time="2025-07-07T06:11:39.602168575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-945f9d44-42wqf,Uid:0c028a4a-b59d-4d3f-950b-f5683fe34de3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\"" Jul 7 06:11:39.676214 kubelet[2913]: I0707 06:11:39.672028 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cz5kf" podStartSLOduration=39.670255268 podStartE2EDuration="39.670255268s" podCreationTimestamp="2025-07-07 06:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:39.669770618 +0000 UTC m=+45.620599993" watchObservedRunningTime="2025-07-07 06:11:39.670255268 +0000 UTC m=+45.621084642" Jul 7 06:11:39.684052 kubelet[2913]: I0707 06:11:39.684009 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wqb5c" podStartSLOduration=39.683997367 podStartE2EDuration="39.683997367s" podCreationTimestamp="2025-07-07 06:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:39.683664477 +0000 UTC m=+45.634493857" watchObservedRunningTime="2025-07-07 06:11:39.683997367 +0000 UTC m=+45.634826743" Jul 7 06:11:39.725181 systemd-networkd[1529]: cali1c62eb9bbf0: Gained IPv6LL Jul 7 06:11:40.236237 systemd-networkd[1529]: cali7e27e827f2a: Gained IPv6LL Jul 7 06:11:40.749022 systemd-networkd[1529]: caliecb0f39c7e2: Gained IPv6LL Jul 7 06:11:41.068179 systemd-networkd[1529]: cali416fadd0890: Gained IPv6LL Jul 7 06:11:41.712860 containerd[1626]: time="2025-07-07T06:11:41.712821568Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:41.713772 containerd[1626]: time="2025-07-07T06:11:41.713505716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 06:11:41.714179 containerd[1626]: time="2025-07-07T06:11:41.714153429Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:41.715528 containerd[1626]: time="2025-07-07T06:11:41.715500003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:41.715966 containerd[1626]: time="2025-07-07T06:11:41.715870886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.089812044s" Jul 7 06:11:41.715966 containerd[1626]: time="2025-07-07T06:11:41.715889666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 06:11:41.717721 containerd[1626]: time="2025-07-07T06:11:41.717699997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:11:41.737700 containerd[1626]: time="2025-07-07T06:11:41.737608037Z" level=info msg="CreateContainer within sandbox \"83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:11:41.747084 containerd[1626]: time="2025-07-07T06:11:41.746161532Z" level=info msg="Container 624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:41.777832 containerd[1626]: time="2025-07-07T06:11:41.777745774Z" level=info msg="CreateContainer within sandbox \"83e8fdc8285e18a17c5ce1bcd7c74381ea382ef6dbcccc76a2934419bf88b6fc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\"" Jul 7 06:11:41.779036 containerd[1626]: time="2025-07-07T06:11:41.779016824Z" level=info msg="StartContainer for \"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\"" Jul 7 06:11:41.782082 containerd[1626]: time="2025-07-07T06:11:41.781860935Z" level=info msg="connecting to shim 624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6" address="unix:///run/containerd/s/fbbd2575f54c97b2a14adef6deb6c661a1011d0e71189ed5a5eb54dc2a96a04e" protocol=ttrpc version=3 Jul 7 06:11:41.817264 systemd[1]: Started cri-containerd-624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6.scope - libcontainer container 624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6. 
Jul 7 06:11:41.860886 containerd[1626]: time="2025-07-07T06:11:41.860821031Z" level=info msg="StartContainer for \"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\" returns successfully" Jul 7 06:11:43.636609 kubelet[2913]: I0707 06:11:43.636276 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:11:45.154573 containerd[1626]: time="2025-07-07T06:11:45.154520879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:45.165239 containerd[1626]: time="2025-07-07T06:11:45.165192623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:11:45.182373 containerd[1626]: time="2025-07-07T06:11:45.182317548Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:45.191292 containerd[1626]: time="2025-07-07T06:11:45.191241942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:45.191915 containerd[1626]: time="2025-07-07T06:11:45.191659116Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.473935907s" Jul 7 06:11:45.191915 containerd[1626]: time="2025-07-07T06:11:45.191697852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:11:45.192603 containerd[1626]: time="2025-07-07T06:11:45.192591474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:11:45.195349 containerd[1626]: time="2025-07-07T06:11:45.195156642Z" level=info msg="CreateContainer within sandbox \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:11:45.217766 containerd[1626]: time="2025-07-07T06:11:45.217202268Z" level=info msg="Container f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:45.233432 containerd[1626]: time="2025-07-07T06:11:45.233394125Z" level=info msg="CreateContainer within sandbox \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\"" Jul 7 06:11:45.234132 containerd[1626]: time="2025-07-07T06:11:45.234115360Z" level=info msg="StartContainer for \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\"" Jul 7 06:11:45.235151 containerd[1626]: time="2025-07-07T06:11:45.235127347Z" level=info msg="connecting to shim f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b" address="unix:///run/containerd/s/6dcd299412ec6c543f78bf4f3ea19d93ec8010929c933dfc9cf79e170c419a50" protocol=ttrpc version=3 Jul 7 06:11:45.261253 systemd[1]: Started cri-containerd-f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b.scope - libcontainer container f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b. 
Jul 7 06:11:45.327056 containerd[1626]: time="2025-07-07T06:11:45.327029279Z" level=info msg="StartContainer for \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" returns successfully" Jul 7 06:11:45.625892 kubelet[2913]: I0707 06:11:45.625838 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79bf997d64-4kw92" podStartSLOduration=30.378853069 podStartE2EDuration="35.625820492s" podCreationTimestamp="2025-07-07 06:11:10 +0000 UTC" firstStartedPulling="2025-07-07 06:11:36.469552103 +0000 UTC m=+42.420381480" lastFinishedPulling="2025-07-07 06:11:41.716519533 +0000 UTC m=+47.667348903" observedRunningTime="2025-07-07 06:11:42.606247053 +0000 UTC m=+48.557076425" watchObservedRunningTime="2025-07-07 06:11:45.625820492 +0000 UTC m=+51.576649866" Jul 7 06:11:46.651260 kubelet[2913]: I0707 06:11:46.651195 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-945f9d44-bq9wq" podStartSLOduration=30.087679822 podStartE2EDuration="38.651175971s" podCreationTimestamp="2025-07-07 06:11:08 +0000 UTC" firstStartedPulling="2025-07-07 06:11:36.628896992 +0000 UTC m=+42.579726366" lastFinishedPulling="2025-07-07 06:11:45.192393144 +0000 UTC m=+51.143222515" observedRunningTime="2025-07-07 06:11:45.626199932 +0000 UTC m=+51.577029307" watchObservedRunningTime="2025-07-07 06:11:46.651175971 +0000 UTC m=+52.602005348" Jul 7 06:11:49.114821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296768863.mount: Deactivated successfully. 
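The kubelet `pod_startup_latency_tracker` entries above expose startup SLO numbers in a fixed key=value shape; a small sketch of extracting them, with the regex written against the excerpt above (an assumption about the message format, not a kubelet interface):

```python
import re

# Pulls the pod name and podStartSLOduration out of a kubelet
# "Observed pod startup duration" entry, per the format in the log above.
POD_RE = re.compile(r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[\d.]+)')

def startup_slo(line: str):
    """Return (pod, SLO duration in seconds), or None if the entry does not match."""
    m = POD_RE.search(line)
    return (m.group('pod'), float(m.group('slo'))) if m else None

entry = ('"Observed pod startup duration" pod="calico-apiserver/calico-apiserver-945f9d44-bq9wq" '
         'podStartSLOduration=30.087679822 podStartE2EDuration="38.651175971s"')
print(startup_slo(entry))
```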
Jul 7 06:11:49.848366 containerd[1626]: time="2025-07-07T06:11:49.848265554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:49.869880 containerd[1626]: time="2025-07-07T06:11:49.869837248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:11:49.893541 containerd[1626]: time="2025-07-07T06:11:49.893449173Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:49.906225 containerd[1626]: time="2025-07-07T06:11:49.906172903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:49.906945 containerd[1626]: time="2025-07-07T06:11:49.906650592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.713870999s" Jul 7 06:11:49.906945 containerd[1626]: time="2025-07-07T06:11:49.906672336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:11:50.071236 containerd[1626]: time="2025-07-07T06:11:50.071207493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:11:50.108385 containerd[1626]: time="2025-07-07T06:11:50.108154598Z" level=info msg="CreateContainer within sandbox \"f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522\" for 
container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:11:50.116687 containerd[1626]: time="2025-07-07T06:11:50.116089154Z" level=info msg="Container 220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:50.123104 containerd[1626]: time="2025-07-07T06:11:50.123057276Z" level=info msg="CreateContainer within sandbox \"f6795a896cbcbd43b6d1ec80a75b0cde1eb74a52c47f2346309018d31e97f522\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\"" Jul 7 06:11:50.123477 containerd[1626]: time="2025-07-07T06:11:50.123460471Z" level=info msg="StartContainer for \"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\"" Jul 7 06:11:50.126098 containerd[1626]: time="2025-07-07T06:11:50.124138298Z" level=info msg="connecting to shim 220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e" address="unix:///run/containerd/s/580b62b8fc85a40187778ebbf8caf5996b5b243d02c81136430ad4025c2fd6a3" protocol=ttrpc version=3 Jul 7 06:11:50.147226 systemd[1]: Started cri-containerd-220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e.scope - libcontainer container 220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e. 
Jul 7 06:11:50.194892 containerd[1626]: time="2025-07-07T06:11:50.194859174Z" level=info msg="StartContainer for \"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" returns successfully" Jul 7 06:11:51.349027 containerd[1626]: time="2025-07-07T06:11:51.348853685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" id:\"2e745c4ac0289b9018555bb55744580ea932be96d2847aff20ffc43251864ff7\" pid:5193 exit_status:1 exited_at:{seconds:1751868711 nanos:263577619}" Jul 7 06:11:51.993009 containerd[1626]: time="2025-07-07T06:11:51.992827270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:51.997994 containerd[1626]: time="2025-07-07T06:11:51.997957978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:11:52.010315 containerd[1626]: time="2025-07-07T06:11:52.010282316Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:52.019729 containerd[1626]: time="2025-07-07T06:11:52.019692704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:52.020166 containerd[1626]: time="2025-07-07T06:11:52.020035901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.948596478s" Jul 7 06:11:52.020166 containerd[1626]: 
time="2025-07-07T06:11:52.020060778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:11:52.021007 containerd[1626]: time="2025-07-07T06:11:52.020982972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:11:52.122628 containerd[1626]: time="2025-07-07T06:11:52.122214338Z" level=info msg="CreateContainer within sandbox \"d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:11:52.146245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722215659.mount: Deactivated successfully. Jul 7 06:11:52.147587 containerd[1626]: time="2025-07-07T06:11:52.147557766Z" level=info msg="Container ce663ff9fb23255fb1ae0e79c16d625050a34d84a94ebd885d07291aee0d61f0: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:52.168261 containerd[1626]: time="2025-07-07T06:11:52.168091929Z" level=info msg="CreateContainer within sandbox \"d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ce663ff9fb23255fb1ae0e79c16d625050a34d84a94ebd885d07291aee0d61f0\"" Jul 7 06:11:52.169397 containerd[1626]: time="2025-07-07T06:11:52.169383832Z" level=info msg="StartContainer for \"ce663ff9fb23255fb1ae0e79c16d625050a34d84a94ebd885d07291aee0d61f0\"" Jul 7 06:11:52.171621 containerd[1626]: time="2025-07-07T06:11:52.171607384Z" level=info msg="connecting to shim ce663ff9fb23255fb1ae0e79c16d625050a34d84a94ebd885d07291aee0d61f0" address="unix:///run/containerd/s/a03247944cff35b58ba94ad0c5b3a0b10788663ce424e318c50d3bbcdb7e8011" protocol=ttrpc version=3 Jul 7 06:11:52.197852 systemd[1]: Started cri-containerd-ce663ff9fb23255fb1ae0e79c16d625050a34d84a94ebd885d07291aee0d61f0.scope - libcontainer container ce663ff9fb23255fb1ae0e79c16d625050a34d84a94ebd885d07291aee0d61f0. 
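The `TaskExit` entries above carry the container id and exit status inside an escaped message string; a sketch of recovering both, assuming the `\"` quote escaping visible in the journal excerpt (the regex is mine, not a containerd interface):

```python
import re

# Extracts container_id and exit_status from a containerd "TaskExit event" entry.
# Tolerates the \" quote escaping seen in the journal excerpt above.
EXIT_RE = re.compile(r'container_id:\\?"(?P<cid>[0-9a-f]+)\\?".*exit_status:(?P<st>\d+)')

def task_exit(line: str):
    """Return (container id, exit status), or None for non-TaskExit lines."""
    m = EXIT_RE.search(line)
    return (m.group('cid'), int(m.group('st'))) if m else None

raw = (r'msg="TaskExit event in podsandbox handler '
       r'container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" '
       r'pid:5193 exit_status:1 exited_at:{seconds:1751868711 nanos:263577619}"')
print(task_exit(raw))
```

Note that exit_status:1 on these goldmane TaskExit entries refers to an exec'd probe task inside the sandbox, not the pod's main container terminating.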
Jul 7 06:11:52.243505 containerd[1626]: time="2025-07-07T06:11:52.243391814Z" level=info msg="StartContainer for \"ce663ff9fb23255fb1ae0e79c16d625050a34d84a94ebd885d07291aee0d61f0\" returns successfully" Jul 7 06:11:52.279619 containerd[1626]: time="2025-07-07T06:11:52.279592471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" id:\"ae9509cc7d41240ae8959bb8d95c97266611fdcf70e0fd69e09ebe59bac62329\" pid:5233 exit_status:1 exited_at:{seconds:1751868712 nanos:279317396}" Jul 7 06:11:53.151814 kubelet[2913]: I0707 06:11:53.151680 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:11:53.180431 containerd[1626]: time="2025-07-07T06:11:53.180410544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" id:\"7eaa059c560feff9cb3e73a8536adc17726a428eeee53318165e0b97cd756962\" pid:5291 exit_status:1 exited_at:{seconds:1751868713 nanos:178761589}" Jul 7 06:11:53.217906 containerd[1626]: time="2025-07-07T06:11:53.217881243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\" id:\"d86ac7aacbb7190f0c363cb3400314dabaf669cba72602fc7039c0f0ef6db602\" pid:5274 exited_at:{seconds:1751868713 nanos:217565368}" Jul 7 06:11:53.252197 kubelet[2913]: I0707 06:11:53.239444 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-xlb84" podStartSLOduration=31.729720056 podStartE2EDuration="44.23941977s" podCreationTimestamp="2025-07-07 06:11:09 +0000 UTC" firstStartedPulling="2025-07-07 06:11:37.53917511 +0000 UTC m=+43.490004480" lastFinishedPulling="2025-07-07 06:11:50.048874821 +0000 UTC m=+55.999704194" observedRunningTime="2025-07-07 06:11:51.169961498 +0000 UTC m=+57.120790878" watchObservedRunningTime="2025-07-07 06:11:53.23941977 +0000 UTC 
m=+59.190249151" Jul 7 06:11:53.281930 containerd[1626]: time="2025-07-07T06:11:53.281897103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\" id:\"91e3b4f028a1ce817e785262998340655e544b0da787405edcf671a467f3fb7f\" pid:5315 exited_at:{seconds:1751868713 nanos:281521034}" Jul 7 06:11:55.923109 containerd[1626]: time="2025-07-07T06:11:55.923082598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" id:\"6fb77115e1de11e4a9b215182aa3fd57a7a819dbf55576e0b4dd32ef361470bf\" pid:5341 exited_at:{seconds:1751868715 nanos:922898007}" Jul 7 06:11:56.785706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850412109.mount: Deactivated successfully. Jul 7 06:11:56.880300 containerd[1626]: time="2025-07-07T06:11:56.879815223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:56.890038 containerd[1626]: time="2025-07-07T06:11:56.890010661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:11:56.897872 containerd[1626]: time="2025-07-07T06:11:56.897837164Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:56.905379 containerd[1626]: time="2025-07-07T06:11:56.905352421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:56.916063 containerd[1626]: time="2025-07-07T06:11:56.905645075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id 
\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.884639562s" Jul 7 06:11:56.916063 containerd[1626]: time="2025-07-07T06:11:56.905661945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:11:56.916063 containerd[1626]: time="2025-07-07T06:11:56.906390837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:11:56.921593 containerd[1626]: time="2025-07-07T06:11:56.916958789Z" level=info msg="CreateContainer within sandbox \"d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:11:56.963442 containerd[1626]: time="2025-07-07T06:11:56.963180948Z" level=info msg="Container ef4d33024d1ee438d5ee4035f38bd90c7335ae009f08bdaa9af930a1eaacac59: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:56.972785 containerd[1626]: time="2025-07-07T06:11:56.972698917Z" level=info msg="CreateContainer within sandbox \"d5a41af89d0f0445df050f6efaee36931c025616bf4ac3a9f9b1c7a2b0c9fcc7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ef4d33024d1ee438d5ee4035f38bd90c7335ae009f08bdaa9af930a1eaacac59\"" Jul 7 06:11:56.973054 containerd[1626]: time="2025-07-07T06:11:56.973022182Z" level=info msg="StartContainer for \"ef4d33024d1ee438d5ee4035f38bd90c7335ae009f08bdaa9af930a1eaacac59\"" Jul 7 06:11:56.974451 containerd[1626]: time="2025-07-07T06:11:56.974427495Z" level=info msg="connecting to shim ef4d33024d1ee438d5ee4035f38bd90c7335ae009f08bdaa9af930a1eaacac59" address="unix:///run/containerd/s/07d863d65586ff09e71e99fc3b340766f56a736ccd5bc384c4ef7c9f5dea0705" protocol=ttrpc 
version=3 Jul 7 06:11:56.998198 systemd[1]: Started cri-containerd-ef4d33024d1ee438d5ee4035f38bd90c7335ae009f08bdaa9af930a1eaacac59.scope - libcontainer container ef4d33024d1ee438d5ee4035f38bd90c7335ae009f08bdaa9af930a1eaacac59. Jul 7 06:11:57.044695 containerd[1626]: time="2025-07-07T06:11:57.044614112Z" level=info msg="StartContainer for \"ef4d33024d1ee438d5ee4035f38bd90c7335ae009f08bdaa9af930a1eaacac59\" returns successfully" Jul 7 06:11:57.205656 kubelet[2913]: I0707 06:11:57.205501 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-74496479b4-kmsvx" podStartSLOduration=2.733676785 podStartE2EDuration="23.189233473s" podCreationTimestamp="2025-07-07 06:11:34 +0000 UTC" firstStartedPulling="2025-07-07 06:11:36.450548655 +0000 UTC m=+42.401378026" lastFinishedPulling="2025-07-07 06:11:56.906105341 +0000 UTC m=+62.856934714" observedRunningTime="2025-07-07 06:11:57.181162301 +0000 UTC m=+63.131991682" watchObservedRunningTime="2025-07-07 06:11:57.189233473 +0000 UTC m=+63.140062847" Jul 7 06:11:57.357143 containerd[1626]: time="2025-07-07T06:11:57.356682701Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:57.363608 containerd[1626]: time="2025-07-07T06:11:57.363580289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:11:57.364761 containerd[1626]: time="2025-07-07T06:11:57.364706484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 458.29985ms" Jul 7 06:11:57.364761 containerd[1626]: time="2025-07-07T06:11:57.364724009Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:11:57.365432 containerd[1626]: time="2025-07-07T06:11:57.365243679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:11:57.367880 containerd[1626]: time="2025-07-07T06:11:57.367863615Z" level=info msg="CreateContainer within sandbox \"99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:11:57.376736 containerd[1626]: time="2025-07-07T06:11:57.376388322Z" level=info msg="Container 862a250909c3dc65b3cab493a6987e6b8765cfe0e0c0a6c7948feb358d965c10: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:57.383178 containerd[1626]: time="2025-07-07T06:11:57.383139699Z" level=info msg="CreateContainer within sandbox \"99d549898a7cc0566de0db4ba4b89bbbee54258809cb88f5bfa836923087eb74\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"862a250909c3dc65b3cab493a6987e6b8765cfe0e0c0a6c7948feb358d965c10\"" Jul 7 06:11:57.383547 containerd[1626]: time="2025-07-07T06:11:57.383531473Z" level=info msg="StartContainer for \"862a250909c3dc65b3cab493a6987e6b8765cfe0e0c0a6c7948feb358d965c10\"" Jul 7 06:11:57.385364 containerd[1626]: time="2025-07-07T06:11:57.384144461Z" level=info msg="connecting to shim 862a250909c3dc65b3cab493a6987e6b8765cfe0e0c0a6c7948feb358d965c10" address="unix:///run/containerd/s/7333a838300962c508dc1eae3d89787517da8c0cf8244766d07b0f01ea0c782c" protocol=ttrpc version=3 Jul 7 06:11:57.408210 systemd[1]: Started cri-containerd-862a250909c3dc65b3cab493a6987e6b8765cfe0e0c0a6c7948feb358d965c10.scope - libcontainer container 862a250909c3dc65b3cab493a6987e6b8765cfe0e0c0a6c7948feb358d965c10. 
Jul 7 06:11:57.457161 containerd[1626]: time="2025-07-07T06:11:57.457136383Z" level=info msg="StartContainer for \"862a250909c3dc65b3cab493a6987e6b8765cfe0e0c0a6c7948feb358d965c10\" returns successfully" Jul 7 06:11:57.767899 containerd[1626]: time="2025-07-07T06:11:57.767870354Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:57.769226 containerd[1626]: time="2025-07-07T06:11:57.769201943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:11:57.771004 containerd[1626]: time="2025-07-07T06:11:57.770984409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 405.725244ms" Jul 7 06:11:57.771004 containerd[1626]: time="2025-07-07T06:11:57.771005377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:11:57.772490 containerd[1626]: time="2025-07-07T06:11:57.772477381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:11:57.775895 containerd[1626]: time="2025-07-07T06:11:57.775873432Z" level=info msg="CreateContainer within sandbox \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:11:57.784726 containerd[1626]: time="2025-07-07T06:11:57.784694966Z" level=info msg="Container 9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:57.796301 containerd[1626]: 
time="2025-07-07T06:11:57.796277993Z" level=info msg="CreateContainer within sandbox \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\"" Jul 7 06:11:57.796859 containerd[1626]: time="2025-07-07T06:11:57.796847821Z" level=info msg="StartContainer for \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\"" Jul 7 06:11:57.797986 containerd[1626]: time="2025-07-07T06:11:57.797953902Z" level=info msg="connecting to shim 9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d" address="unix:///run/containerd/s/fdd2ea843a386eb5a7d292ab5269442bec32066204f9c64122940a75a926e362" protocol=ttrpc version=3 Jul 7 06:11:57.819208 systemd[1]: Started cri-containerd-9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d.scope - libcontainer container 9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d. 
Jul 7 06:11:57.890438 containerd[1626]: time="2025-07-07T06:11:57.890385591Z" level=info msg="StartContainer for \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" returns successfully" Jul 7 06:11:58.221899 kubelet[2913]: I0707 06:11:58.219554 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b79765fb9-tngnt" podStartSLOduration=32.375828889 podStartE2EDuration="50.215559099s" podCreationTimestamp="2025-07-07 06:11:08 +0000 UTC" firstStartedPulling="2025-07-07 06:11:39.525454734 +0000 UTC m=+45.476284108" lastFinishedPulling="2025-07-07 06:11:57.365184944 +0000 UTC m=+63.316014318" observedRunningTime="2025-07-07 06:11:58.188193874 +0000 UTC m=+64.139023255" watchObservedRunningTime="2025-07-07 06:11:58.215559099 +0000 UTC m=+64.166388479" Jul 7 06:11:58.724101 kubelet[2913]: I0707 06:11:58.723531 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-945f9d44-42wqf" podStartSLOduration=32.554437966 podStartE2EDuration="50.723518705s" podCreationTimestamp="2025-07-07 06:11:08 +0000 UTC" firstStartedPulling="2025-07-07 06:11:39.603272732 +0000 UTC m=+45.554102106" lastFinishedPulling="2025-07-07 06:11:57.772353472 +0000 UTC m=+63.723182845" observedRunningTime="2025-07-07 06:11:58.222213753 +0000 UTC m=+64.173043126" watchObservedRunningTime="2025-07-07 06:11:58.723518705 +0000 UTC m=+64.674348080" Jul 7 06:11:59.179137 kubelet[2913]: I0707 06:11:59.178928 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:12:00.348741 containerd[1626]: time="2025-07-07T06:12:00.348707680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:00.367094 containerd[1626]: time="2025-07-07T06:12:00.367051449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: 
active requests=0, bytes read=14703784" Jul 7 06:12:00.377647 containerd[1626]: time="2025-07-07T06:12:00.377593429Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:00.391333 containerd[1626]: time="2025-07-07T06:12:00.389253270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:00.391333 containerd[1626]: time="2025-07-07T06:12:00.389538648Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.617046222s" Jul 7 06:12:00.391333 containerd[1626]: time="2025-07-07T06:12:00.389557534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:12:00.859347 containerd[1626]: time="2025-07-07T06:12:00.859318960Z" level=info msg="CreateContainer within sandbox \"d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:12:00.924698 containerd[1626]: time="2025-07-07T06:12:00.923968764Z" level=info msg="Container 3dc789770cea0526b6dc67662f8a7f3b882be821225b05f43989f3860ef64a18: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:00.937420 containerd[1626]: time="2025-07-07T06:12:00.937363700Z" level=info msg="CreateContainer within sandbox 
\"d79b3e93d76225f9b378afd6abb4aa5314e18abd2a971ddc6522f208c6fda16d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3dc789770cea0526b6dc67662f8a7f3b882be821225b05f43989f3860ef64a18\"" Jul 7 06:12:00.938091 containerd[1626]: time="2025-07-07T06:12:00.937989261Z" level=info msg="StartContainer for \"3dc789770cea0526b6dc67662f8a7f3b882be821225b05f43989f3860ef64a18\"" Jul 7 06:12:00.939017 containerd[1626]: time="2025-07-07T06:12:00.939002221Z" level=info msg="connecting to shim 3dc789770cea0526b6dc67662f8a7f3b882be821225b05f43989f3860ef64a18" address="unix:///run/containerd/s/a03247944cff35b58ba94ad0c5b3a0b10788663ce424e318c50d3bbcdb7e8011" protocol=ttrpc version=3 Jul 7 06:12:00.971200 systemd[1]: Started cri-containerd-3dc789770cea0526b6dc67662f8a7f3b882be821225b05f43989f3860ef64a18.scope - libcontainer container 3dc789770cea0526b6dc67662f8a7f3b882be821225b05f43989f3860ef64a18. Jul 7 06:12:01.107760 containerd[1626]: time="2025-07-07T06:12:01.107732852Z" level=info msg="StartContainer for \"3dc789770cea0526b6dc67662f8a7f3b882be821225b05f43989f3860ef64a18\" returns successfully" Jul 7 06:12:03.174032 kubelet[2913]: I0707 06:12:03.171361 2913 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:12:03.180226 kubelet[2913]: I0707 06:12:03.179209 2913 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:12:05.080948 containerd[1626]: time="2025-07-07T06:12:05.080888898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\" id:\"a9252f868aeba3cde9f2124ca219b462672adacb59bd46c2b21bdcda351ae258\" pid:5539 exited_at:{seconds:1751868725 nanos:50247617}" Jul 7 06:12:05.236548 kubelet[2913]: I0707 06:12:05.199886 2913 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s2hl9" podStartSLOduration=31.874675596 podStartE2EDuration="55.186726876s" podCreationTimestamp="2025-07-07 06:11:10 +0000 UTC" firstStartedPulling="2025-07-07 06:11:37.542729505 +0000 UTC m=+43.493558878" lastFinishedPulling="2025-07-07 06:12:00.854780784 +0000 UTC m=+66.805610158" observedRunningTime="2025-07-07 06:12:01.731163211 +0000 UTC m=+67.681992592" watchObservedRunningTime="2025-07-07 06:12:05.186726876 +0000 UTC m=+71.137556258" Jul 7 06:12:18.106921 kubelet[2913]: I0707 06:12:18.106894 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:12:18.442688 containerd[1626]: time="2025-07-07T06:12:18.442657907Z" level=info msg="StopContainer for \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" with timeout 30 (s)" Jul 7 06:12:18.528334 containerd[1626]: time="2025-07-07T06:12:18.528307242Z" level=info msg="Stop container \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" with signal terminated" Jul 7 06:12:18.712975 kubelet[2913]: I0707 06:12:18.712884 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6803cdd2-31bc-450a-aa53-01c51901595b-calico-apiserver-certs\") pod \"calico-apiserver-b79765fb9-6b4sb\" (UID: \"6803cdd2-31bc-450a-aa53-01c51901595b\") " pod="calico-apiserver/calico-apiserver-b79765fb9-6b4sb" Jul 7 06:12:18.713147 kubelet[2913]: I0707 06:12:18.712975 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqh6n\" (UniqueName: \"kubernetes.io/projected/6803cdd2-31bc-450a-aa53-01c51901595b-kube-api-access-sqh6n\") pod \"calico-apiserver-b79765fb9-6b4sb\" (UID: \"6803cdd2-31bc-450a-aa53-01c51901595b\") " pod="calico-apiserver/calico-apiserver-b79765fb9-6b4sb" Jul 7 06:12:18.800192 systemd[1]: 
cri-containerd-9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d.scope: Deactivated successfully. Jul 7 06:12:18.814509 systemd[1]: cri-containerd-9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d.scope: Consumed 479ms CPU time, 48.7M memory peak, 6.1M read from disk. Jul 7 06:12:18.927958 containerd[1626]: time="2025-07-07T06:12:18.824286915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" id:\"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" pid:5446 exit_status:1 exited_at:{seconds:1751868738 nanos:823522723}" Jul 7 06:12:18.927958 containerd[1626]: time="2025-07-07T06:12:18.829700168Z" level=info msg="received exit event container_id:\"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" id:\"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" pid:5446 exit_status:1 exited_at:{seconds:1751868738 nanos:823522723}" Jul 7 06:12:18.939512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d-rootfs.mount: Deactivated successfully. Jul 7 06:12:19.090558 systemd[1]: Created slice kubepods-besteffort-pod6803cdd2_31bc_450a_aa53_01c51901595b.slice - libcontainer container kubepods-besteffort-pod6803cdd2_31bc_450a_aa53_01c51901595b.slice. 
Jul 7 06:12:19.208177 containerd[1626]: time="2025-07-07T06:12:19.208143042Z" level=info msg="StopContainer for \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" returns successfully" Jul 7 06:12:19.232466 containerd[1626]: time="2025-07-07T06:12:19.232422354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79765fb9-6b4sb,Uid:6803cdd2-31bc-450a-aa53-01c51901595b,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:12:19.243875 containerd[1626]: time="2025-07-07T06:12:19.243668681Z" level=info msg="StopPodSandbox for \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\"" Jul 7 06:12:19.248567 containerd[1626]: time="2025-07-07T06:12:19.248552280Z" level=info msg="Container to stop \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:12:19.389885 systemd[1]: cri-containerd-5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7.scope: Deactivated successfully. Jul 7 06:12:19.399246 containerd[1626]: time="2025-07-07T06:12:19.390812109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" id:\"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" pid:4998 exit_status:137 exited_at:{seconds:1751868739 nanos:390507151}" Jul 7 06:12:19.411951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7-rootfs.mount: Deactivated successfully. 
Jul 7 06:12:19.433121 containerd[1626]: time="2025-07-07T06:12:19.433077500Z" level=info msg="shim disconnected" id=5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7 namespace=k8s.io Jul 7 06:12:19.433121 containerd[1626]: time="2025-07-07T06:12:19.433100140Z" level=warning msg="cleaning up after shim disconnected" id=5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7 namespace=k8s.io Jul 7 06:12:19.433121 containerd[1626]: time="2025-07-07T06:12:19.433105175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:12:19.618899 containerd[1626]: time="2025-07-07T06:12:19.618864016Z" level=info msg="received exit event sandbox_id:\"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" exit_status:137 exited_at:{seconds:1751868739 nanos:390507151}" Jul 7 06:12:19.932839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7-shm.mount: Deactivated successfully. Jul 7 06:12:20.197960 systemd[1]: Started sshd@7-139.178.70.102:22-139.178.68.195:60372.service - OpenSSH per-connection server daemon (139.178.68.195:60372). Jul 7 06:12:20.403280 systemd-networkd[1529]: caliecb0f39c7e2: Link DOWN Jul 7 06:12:20.403286 systemd-networkd[1529]: caliecb0f39c7e2: Lost carrier Jul 7 06:12:20.415039 sshd[5668]: Accepted publickey for core from 139.178.68.195 port 60372 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:20.420887 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:20.435458 systemd-logind[1592]: New session 10 of user core. Jul 7 06:12:20.438354 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 06:12:20.672266 kubelet[2913]: I0707 06:12:20.648532 2913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.392 [INFO][5661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.394 [INFO][5661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" iface="eth0" netns="/var/run/netns/cni-0afa82f1-9712-1255-223b-1aee9cb269c8" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.395 [INFO][5661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" iface="eth0" netns="/var/run/netns/cni-0afa82f1-9712-1255-223b-1aee9cb269c8" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.406 [INFO][5661] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" after=11.359841ms iface="eth0" netns="/var/run/netns/cni-0afa82f1-9712-1255-223b-1aee9cb269c8" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.407 [INFO][5661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.407 [INFO][5661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.863 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.866 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.866 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.954 [INFO][5679] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.955 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0" Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.962 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:12:20.978230 containerd[1626]: 2025-07-07 06:12:20.974 [INFO][5661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Jul 7 06:12:20.992984 containerd[1626]: time="2025-07-07T06:12:20.989725564Z" level=info msg="TearDown network for sandbox \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" successfully" Jul 7 06:12:20.992984 containerd[1626]: time="2025-07-07T06:12:20.989755618Z" level=info msg="StopPodSandbox for \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" returns successfully" Jul 7 06:12:20.987162 systemd[1]: run-netns-cni\x2d0afa82f1\x2d9712\x2d1255\x2d223b\x2d1aee9cb269c8.mount: Deactivated successfully. 
Jul 7 06:12:21.075265 containerd[1626]: time="2025-07-07T06:12:21.073260441Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1751868739 nanos:390507151}" Jul 7 06:12:21.073598 systemd-networkd[1529]: cali15287a52b04: Link UP Jul 7 06:12:21.073752 systemd-networkd[1529]: cali15287a52b04: Gained carrier Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.385 [INFO][5600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0 calico-apiserver-b79765fb9- calico-apiserver 6803cdd2-31bc-450a-aa53-01c51901595b 1180 0 2025-07-07 06:12:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b79765fb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b79765fb9-6b4sb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali15287a52b04 [] [] }} ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.389 [INFO][5600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.864 [INFO][5677] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" 
HandleID="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Workload="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.866 [INFO][5677] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" HandleID="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Workload="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003924f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b79765fb9-6b4sb", "timestamp":"2025-07-07 06:12:20.863879493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.866 [INFO][5677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.962 [INFO][5677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.962 [INFO][5677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:20.996 [INFO][5677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.012 [INFO][5677] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.021 [INFO][5677] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.028 [INFO][5677] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.034 [INFO][5677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.034 [INFO][5677] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.037 [INFO][5677] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1 Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.042 [INFO][5677] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.049 [INFO][5677] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 
handle="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.049 [INFO][5677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" host="localhost" Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.049 [INFO][5677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:12:21.104266 containerd[1626]: 2025-07-07 06:12:21.049 [INFO][5677] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" HandleID="k8s-pod-network.afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Workload="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" Jul 7 06:12:21.144009 containerd[1626]: 2025-07-07 06:12:21.057 [INFO][5600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0", GenerateName:"calico-apiserver-b79765fb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6803cdd2-31bc-450a-aa53-01c51901595b", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79765fb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b79765fb9-6b4sb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15287a52b04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:21.144009 containerd[1626]: 2025-07-07 06:12:21.057 [INFO][5600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" Jul 7 06:12:21.144009 containerd[1626]: 2025-07-07 06:12:21.057 [INFO][5600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15287a52b04 ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" Jul 7 06:12:21.144009 containerd[1626]: 2025-07-07 06:12:21.074 [INFO][5600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" Jul 7 06:12:21.144009 containerd[1626]: 2025-07-07 06:12:21.075 [INFO][5600] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0", GenerateName:"calico-apiserver-b79765fb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6803cdd2-31bc-450a-aa53-01c51901595b", ResourceVersion:"1180", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b79765fb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1", Pod:"calico-apiserver-b79765fb9-6b4sb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali15287a52b04", MAC:"ba:7f:f7:75:73:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:21.144009 containerd[1626]: 2025-07-07 06:12:21.100 [INFO][5600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" Namespace="calico-apiserver" Pod="calico-apiserver-b79765fb9-6b4sb" WorkloadEndpoint="localhost-k8s-calico--apiserver--b79765fb9--6b4sb-eth0" Jul 7 06:12:21.632908 containerd[1626]: time="2025-07-07T06:12:21.632869247Z" level=info msg="connecting to shim afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1" address="unix:///run/containerd/s/2fef7ad2421671abb3a42eac630f3c8868ec97f28f8456bf5ce1f7f7657d2be0" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:21.710581 sshd[5682]: Connection closed by 139.178.68.195 port 60372 Jul 7 06:12:21.710869 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:21.724998 systemd[1]: sshd@7-139.178.70.102:22-139.178.68.195:60372.service: Deactivated successfully. Jul 7 06:12:21.727989 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:12:21.731824 systemd-logind[1592]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:12:21.737191 systemd-logind[1592]: Removed session 10. Jul 7 06:12:21.818225 systemd[1]: Started cri-containerd-afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1.scope - libcontainer container afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1. 
Jul 7 06:12:21.850183 systemd-resolved[1485]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:12:22.082953 containerd[1626]: time="2025-07-07T06:12:22.082919868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b79765fb9-6b4sb,Uid:6803cdd2-31bc-450a-aa53-01c51901595b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1\"" Jul 7 06:12:22.307870 containerd[1626]: time="2025-07-07T06:12:22.307783598Z" level=info msg="CreateContainer within sandbox \"afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:12:22.370118 containerd[1626]: time="2025-07-07T06:12:22.369293082Z" level=info msg="Container 125a4ce5afedca18c00b999a27586afcb9d22b4aa91e889670cece0089253323: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:22.374785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287212756.mount: Deactivated successfully. 
Jul 7 06:12:22.400501 containerd[1626]: time="2025-07-07T06:12:22.400465079Z" level=info msg="CreateContainer within sandbox \"afcfa068b8cc56032b6cc3cc057182ef32195f66372540c6f8b9508899f64de1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"125a4ce5afedca18c00b999a27586afcb9d22b4aa91e889670cece0089253323\"" Jul 7 06:12:22.412239 systemd-networkd[1529]: cali15287a52b04: Gained IPv6LL Jul 7 06:12:22.418176 containerd[1626]: time="2025-07-07T06:12:22.418140656Z" level=info msg="StartContainer for \"125a4ce5afedca18c00b999a27586afcb9d22b4aa91e889670cece0089253323\"" Jul 7 06:12:22.426654 containerd[1626]: time="2025-07-07T06:12:22.425975167Z" level=info msg="connecting to shim 125a4ce5afedca18c00b999a27586afcb9d22b4aa91e889670cece0089253323" address="unix:///run/containerd/s/2fef7ad2421671abb3a42eac630f3c8868ec97f28f8456bf5ce1f7f7657d2be0" protocol=ttrpc version=3 Jul 7 06:12:22.437855 kubelet[2913]: I0707 06:12:22.437529 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfmlz\" (UniqueName: \"kubernetes.io/projected/0c028a4a-b59d-4d3f-950b-f5683fe34de3-kube-api-access-wfmlz\") pod \"0c028a4a-b59d-4d3f-950b-f5683fe34de3\" (UID: \"0c028a4a-b59d-4d3f-950b-f5683fe34de3\") " Jul 7 06:12:22.437855 kubelet[2913]: I0707 06:12:22.437593 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0c028a4a-b59d-4d3f-950b-f5683fe34de3-calico-apiserver-certs\") pod \"0c028a4a-b59d-4d3f-950b-f5683fe34de3\" (UID: \"0c028a4a-b59d-4d3f-950b-f5683fe34de3\") " Jul 7 06:12:22.452449 systemd[1]: Started cri-containerd-125a4ce5afedca18c00b999a27586afcb9d22b4aa91e889670cece0089253323.scope - libcontainer container 125a4ce5afedca18c00b999a27586afcb9d22b4aa91e889670cece0089253323. 
Jul 7 06:12:22.503996 systemd[1]: var-lib-kubelet-pods-0c028a4a\x2db59d\x2d4d3f\x2d950b\x2df5683fe34de3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfmlz.mount: Deactivated successfully. Jul 7 06:12:22.506143 kubelet[2913]: I0707 06:12:22.504587 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c028a4a-b59d-4d3f-950b-f5683fe34de3-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "0c028a4a-b59d-4d3f-950b-f5683fe34de3" (UID: "0c028a4a-b59d-4d3f-950b-f5683fe34de3"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:12:22.507888 kubelet[2913]: I0707 06:12:22.507854 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c028a4a-b59d-4d3f-950b-f5683fe34de3-kube-api-access-wfmlz" (OuterVolumeSpecName: "kube-api-access-wfmlz") pod "0c028a4a-b59d-4d3f-950b-f5683fe34de3" (UID: "0c028a4a-b59d-4d3f-950b-f5683fe34de3"). InnerVolumeSpecName "kube-api-access-wfmlz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:12:22.508083 systemd[1]: var-lib-kubelet-pods-0c028a4a\x2db59d\x2d4d3f\x2d950b\x2df5683fe34de3-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jul 7 06:12:22.530325 containerd[1626]: time="2025-07-07T06:12:22.530289699Z" level=info msg="StartContainer for \"125a4ce5afedca18c00b999a27586afcb9d22b4aa91e889670cece0089253323\" returns successfully" Jul 7 06:12:22.542514 kubelet[2913]: I0707 06:12:22.542493 2913 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wfmlz\" (UniqueName: \"kubernetes.io/projected/0c028a4a-b59d-4d3f-950b-f5683fe34de3-kube-api-access-wfmlz\") on node \"localhost\" DevicePath \"\"" Jul 7 06:12:22.542679 kubelet[2913]: I0707 06:12:22.542576 2913 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0c028a4a-b59d-4d3f-950b-f5683fe34de3-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 7 06:12:23.171510 kubelet[2913]: I0707 06:12:23.063425 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b79765fb9-6b4sb" podStartSLOduration=5.059054953 podStartE2EDuration="5.059054953s" podCreationTimestamp="2025-07-07 06:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:23.05797831 +0000 UTC m=+89.008807684" watchObservedRunningTime="2025-07-07 06:12:23.059054953 +0000 UTC m=+89.009884328" Jul 7 06:12:23.177675 systemd[1]: Removed slice kubepods-besteffort-pod0c028a4a_b59d_4d3f_950b_f5683fe34de3.slice - libcontainer container kubepods-besteffort-pod0c028a4a_b59d_4d3f_950b_f5683fe34de3.slice. Jul 7 06:12:23.177745 systemd[1]: kubepods-besteffort-pod0c028a4a_b59d_4d3f_950b_f5683fe34de3.slice: Consumed 501ms CPU time, 49.3M memory peak, 6.1M read from disk. 
Jul 7 06:12:23.459080 containerd[1626]: time="2025-07-07T06:12:23.459035000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\" id:\"636a1423e0ba55c871d3852953d59ee6bec367a1a53735455faac8c285f75825\" pid:5833 exited_at:{seconds:1751868743 nanos:452197803}" Jul 7 06:12:24.037090 containerd[1626]: time="2025-07-07T06:12:24.037030776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" id:\"34df7d32045a56b6195cf169e6f3b1fd8547c87eb23d0c57dac0758d49ccc499\" pid:5835 exited_at:{seconds:1751868744 nanos:36830873}" Jul 7 06:12:24.384379 kubelet[2913]: I0707 06:12:24.384146 2913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c028a4a-b59d-4d3f-950b-f5683fe34de3" path="/var/lib/kubelet/pods/0c028a4a-b59d-4d3f-950b-f5683fe34de3/volumes" Jul 7 06:12:24.411890 containerd[1626]: time="2025-07-07T06:12:24.411861153Z" level=info msg="StopContainer for \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" with timeout 30 (s)" Jul 7 06:12:24.438778 containerd[1626]: time="2025-07-07T06:12:24.438747011Z" level=info msg="Stop container \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" with signal terminated" Jul 7 06:12:24.450532 systemd[1]: cri-containerd-f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b.scope: Deactivated successfully. Jul 7 06:12:24.451141 systemd[1]: cri-containerd-f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b.scope: Consumed 640ms CPU time, 62.4M memory peak, 5.4M read from disk. 
Jul 7 06:12:24.453612 containerd[1626]: time="2025-07-07T06:12:24.453515767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" id:\"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" pid:5101 exit_status:1 exited_at:{seconds:1751868744 nanos:452725820}" Jul 7 06:12:24.480003 containerd[1626]: time="2025-07-07T06:12:24.479911132Z" level=info msg="received exit event container_id:\"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" id:\"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" pid:5101 exit_status:1 exited_at:{seconds:1751868744 nanos:452725820}" Jul 7 06:12:24.512277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b-rootfs.mount: Deactivated successfully. Jul 7 06:12:24.524110 containerd[1626]: time="2025-07-07T06:12:24.524048048Z" level=info msg="StopContainer for \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" returns successfully" Jul 7 06:12:24.527942 containerd[1626]: time="2025-07-07T06:12:24.527860096Z" level=info msg="StopPodSandbox for \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\"" Jul 7 06:12:24.531095 containerd[1626]: time="2025-07-07T06:12:24.531077634Z" level=info msg="Container to stop \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:12:24.536761 systemd[1]: cri-containerd-6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466.scope: Deactivated successfully. 
Jul 7 06:12:24.538654 containerd[1626]: time="2025-07-07T06:12:24.537979587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" id:\"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" pid:4448 exit_status:137 exited_at:{seconds:1751868744 nanos:537748298}" Jul 7 06:12:24.556470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466-rootfs.mount: Deactivated successfully. Jul 7 06:12:24.558554 containerd[1626]: time="2025-07-07T06:12:24.556936153Z" level=info msg="received exit event sandbox_id:\"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" exit_status:137 exited_at:{seconds:1751868744 nanos:537748298}" Jul 7 06:12:24.560217 containerd[1626]: time="2025-07-07T06:12:24.558905016Z" level=info msg="shim disconnected" id=6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466 namespace=k8s.io Jul 7 06:12:24.560217 containerd[1626]: time="2025-07-07T06:12:24.558916105Z" level=warning msg="cleaning up after shim disconnected" id=6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466 namespace=k8s.io Jul 7 06:12:24.559196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466-shm.mount: Deactivated successfully. 
Jul 7 06:12:24.569730 containerd[1626]: time="2025-07-07T06:12:24.558921493Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:12:25.088954 kubelet[2913]: I0707 06:12:25.088927 2913 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Jul 7 06:12:25.344803 systemd-networkd[1529]: cali9a06932c89e: Link DOWN Jul 7 06:12:25.344809 systemd-networkd[1529]: cali9a06932c89e: Lost carrier Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.336 [INFO][5928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.342 [INFO][5928] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" iface="eth0" netns="/var/run/netns/cni-4e27b25e-c048-c446-0917-c421151cbbf1" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.342 [INFO][5928] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" iface="eth0" netns="/var/run/netns/cni-4e27b25e-c048-c446-0917-c421151cbbf1" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.357 [INFO][5928] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" after=15.260503ms iface="eth0" netns="/var/run/netns/cni-4e27b25e-c048-c446-0917-c421151cbbf1" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.358 [INFO][5928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.358 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.733 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.735 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.738 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.825 [INFO][5938] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.826 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0" Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.827 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:12:25.834723 containerd[1626]: 2025-07-07 06:12:25.831 [INFO][5928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Jul 7 06:12:25.839134 containerd[1626]: time="2025-07-07T06:12:25.836928150Z" level=info msg="TearDown network for sandbox \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" successfully" Jul 7 06:12:25.839134 containerd[1626]: time="2025-07-07T06:12:25.836947567Z" level=info msg="StopPodSandbox for \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" returns successfully" Jul 7 06:12:25.837795 systemd[1]: run-netns-cni\x2d4e27b25e\x2dc048\x2dc446\x2d0917\x2dc421151cbbf1.mount: Deactivated successfully. 
Jul 7 06:12:26.100211 kubelet[2913]: I0707 06:12:26.099595 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c39e2e58-014e-46c2-a9f8-678760dfbaab-calico-apiserver-certs\") pod \"c39e2e58-014e-46c2-a9f8-678760dfbaab\" (UID: \"c39e2e58-014e-46c2-a9f8-678760dfbaab\") " Jul 7 06:12:26.100211 kubelet[2913]: I0707 06:12:26.099657 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqmf4\" (UniqueName: \"kubernetes.io/projected/c39e2e58-014e-46c2-a9f8-678760dfbaab-kube-api-access-lqmf4\") pod \"c39e2e58-014e-46c2-a9f8-678760dfbaab\" (UID: \"c39e2e58-014e-46c2-a9f8-678760dfbaab\") " Jul 7 06:12:26.136697 systemd[1]: var-lib-kubelet-pods-c39e2e58\x2d014e\x2d46c2\x2da9f8\x2d678760dfbaab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlqmf4.mount: Deactivated successfully. Jul 7 06:12:26.140136 systemd[1]: var-lib-kubelet-pods-c39e2e58\x2d014e\x2d46c2\x2da9f8\x2d678760dfbaab-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 7 06:12:26.141244 kubelet[2913]: I0707 06:12:26.139221 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39e2e58-014e-46c2-a9f8-678760dfbaab-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c39e2e58-014e-46c2-a9f8-678760dfbaab" (UID: "c39e2e58-014e-46c2-a9f8-678760dfbaab"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:12:26.141903 kubelet[2913]: I0707 06:12:26.137482 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39e2e58-014e-46c2-a9f8-678760dfbaab-kube-api-access-lqmf4" (OuterVolumeSpecName: "kube-api-access-lqmf4") pod "c39e2e58-014e-46c2-a9f8-678760dfbaab" (UID: "c39e2e58-014e-46c2-a9f8-678760dfbaab"). InnerVolumeSpecName "kube-api-access-lqmf4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:12:26.194615 systemd[1]: Removed slice kubepods-besteffort-podc39e2e58_014e_46c2_a9f8_678760dfbaab.slice - libcontainer container kubepods-besteffort-podc39e2e58_014e_46c2_a9f8_678760dfbaab.slice. Jul 7 06:12:26.195042 systemd[1]: kubepods-besteffort-podc39e2e58_014e_46c2_a9f8_678760dfbaab.slice: Consumed 663ms CPU time, 63M memory peak, 5.4M read from disk. Jul 7 06:12:26.213641 kubelet[2913]: I0707 06:12:26.212999 2913 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c39e2e58-014e-46c2-a9f8-678760dfbaab-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 7 06:12:26.213641 kubelet[2913]: I0707 06:12:26.213037 2913 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lqmf4\" (UniqueName: \"kubernetes.io/projected/c39e2e58-014e-46c2-a9f8-678760dfbaab-kube-api-access-lqmf4\") on node \"localhost\" DevicePath \"\"" Jul 7 06:12:26.731988 systemd[1]: Started sshd@8-139.178.70.102:22-139.178.68.195:60376.service - OpenSSH per-connection server daemon (139.178.68.195:60376). Jul 7 06:12:27.408432 sshd[5958]: Accepted publickey for core from 139.178.68.195 port 60376 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:27.436331 sshd-session[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:27.460493 systemd-logind[1592]: New session 11 of user core. Jul 7 06:12:27.465230 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 06:12:28.179647 kubelet[2913]: I0707 06:12:28.179248 2913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39e2e58-014e-46c2-a9f8-678760dfbaab" path="/var/lib/kubelet/pods/c39e2e58-014e-46c2-a9f8-678760dfbaab/volumes" Jul 7 06:12:28.597923 sshd[5962]: Connection closed by 139.178.68.195 port 60376 Jul 7 06:12:28.601984 systemd[1]: sshd@8-139.178.70.102:22-139.178.68.195:60376.service: Deactivated successfully. Jul 7 06:12:28.598850 sshd-session[5958]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:28.603308 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:12:28.604633 systemd-logind[1592]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:12:28.606407 systemd-logind[1592]: Removed session 11. Jul 7 06:12:33.685896 systemd[1]: Started sshd@9-139.178.70.102:22-139.178.68.195:44300.service - OpenSSH per-connection server daemon (139.178.68.195:44300). Jul 7 06:12:34.629667 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 44300 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:34.662968 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:34.683639 systemd-logind[1592]: New session 12 of user core. Jul 7 06:12:34.688230 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:12:35.962824 containerd[1626]: time="2025-07-07T06:12:35.962303837Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\" id:\"afb508704cac263e8796074c832b5fa30542e7f601c16281b8bac8f321cdbe32\" pid:5997 exit_status:1 exited_at:{seconds:1751868755 nanos:813773458}" Jul 7 06:12:36.657113 sshd[6003]: Connection closed by 139.178.68.195 port 44300 Jul 7 06:12:36.667016 systemd[1]: Started sshd@10-139.178.70.102:22-139.178.68.195:44310.service - OpenSSH per-connection server daemon (139.178.68.195:44310). 
Jul 7 06:12:36.794509 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:36.859193 systemd[1]: sshd@9-139.178.70.102:22-139.178.68.195:44300.service: Deactivated successfully. Jul 7 06:12:36.860708 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:12:36.862477 systemd-logind[1592]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:12:36.863688 systemd-logind[1592]: Removed session 12. Jul 7 06:12:37.193833 sshd[6026]: Accepted publickey for core from 139.178.68.195 port 44310 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:37.194730 sshd-session[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:37.203585 systemd-logind[1592]: New session 13 of user core. Jul 7 06:12:37.210244 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:12:37.994576 sshd[6031]: Connection closed by 139.178.68.195 port 44310 Jul 7 06:12:38.006770 systemd[1]: Started sshd@11-139.178.70.102:22-139.178.68.195:44318.service - OpenSSH per-connection server daemon (139.178.68.195:44318). Jul 7 06:12:38.022496 sshd-session[6026]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:38.049828 systemd-logind[1592]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:12:38.050325 systemd[1]: sshd@10-139.178.70.102:22-139.178.68.195:44310.service: Deactivated successfully. Jul 7 06:12:38.052299 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:12:38.055774 systemd-logind[1592]: Removed session 13. Jul 7 06:12:38.120226 sshd[6039]: Accepted publickey for core from 139.178.68.195 port 44318 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:38.121332 sshd-session[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:38.126460 systemd-logind[1592]: New session 14 of user core. 
Jul 7 06:12:38.132788 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:12:38.554457 sshd[6044]: Connection closed by 139.178.68.195 port 44318 Jul 7 06:12:38.570176 sshd-session[6039]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:38.600690 systemd[1]: sshd@11-139.178.70.102:22-139.178.68.195:44318.service: Deactivated successfully. Jul 7 06:12:38.600880 systemd-logind[1592]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:12:38.602869 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:12:38.604233 systemd-logind[1592]: Removed session 14. Jul 7 06:12:40.092175 containerd[1626]: time="2025-07-07T06:12:40.092111573Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\" id:\"97a648ffd2773a48715d8dd2ece02fc0f33a943013e6abf8302f86469393eb85\" pid:6068 exited_at:{seconds:1751868760 nanos:91446095}" Jul 7 06:12:43.563796 systemd[1]: Started sshd@12-139.178.70.102:22-139.178.68.195:39080.service - OpenSSH per-connection server daemon (139.178.68.195:39080). Jul 7 06:12:44.418646 sshd[6078]: Accepted publickey for core from 139.178.68.195 port 39080 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:44.438796 sshd-session[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:44.449062 systemd-logind[1592]: New session 15 of user core. Jul 7 06:12:44.456237 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:12:46.015591 sshd[6080]: Connection closed by 139.178.68.195 port 39080 Jul 7 06:12:46.024280 systemd[1]: Started sshd@13-139.178.70.102:22-139.178.68.195:39094.service - OpenSSH per-connection server daemon (139.178.68.195:39094). 
Jul 7 06:12:46.016984 sshd-session[6078]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:46.030690 systemd[1]: sshd@12-139.178.70.102:22-139.178.68.195:39080.service: Deactivated successfully. Jul 7 06:12:46.032361 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:12:46.033129 systemd-logind[1592]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:12:46.038192 systemd-logind[1592]: Removed session 15. Jul 7 06:12:46.248578 sshd[6112]: Accepted publickey for core from 139.178.68.195 port 39094 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:46.249574 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:46.253881 systemd-logind[1592]: New session 16 of user core. Jul 7 06:12:46.264212 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:12:47.593401 sshd[6117]: Connection closed by 139.178.68.195 port 39094 Jul 7 06:12:47.594691 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:47.603252 systemd[1]: sshd@13-139.178.70.102:22-139.178.68.195:39094.service: Deactivated successfully. Jul 7 06:12:47.604400 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:12:47.605422 systemd-logind[1592]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:12:47.607048 systemd[1]: Started sshd@14-139.178.70.102:22-139.178.68.195:39098.service - OpenSSH per-connection server daemon (139.178.68.195:39098). Jul 7 06:12:47.607946 systemd-logind[1592]: Removed session 16. Jul 7 06:12:47.673431 sshd[6127]: Accepted publickey for core from 139.178.68.195 port 39098 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY Jul 7 06:12:47.674353 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:47.677438 systemd-logind[1592]: New session 17 of user core. 
Jul 7 06:12:47.689201 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:12:49.113199 sshd[6129]: Connection closed by 139.178.68.195 port 39098
Jul 7 06:12:49.122602 systemd[1]: Started sshd@15-139.178.70.102:22-139.178.68.195:53086.service - OpenSSH per-connection server daemon (139.178.68.195:53086).
Jul 7 06:12:49.127940 sshd-session[6127]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:49.137391 systemd[1]: sshd@14-139.178.70.102:22-139.178.68.195:39098.service: Deactivated successfully.
Jul 7 06:12:49.140055 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:12:49.142337 systemd-logind[1592]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:12:49.143521 systemd-logind[1592]: Removed session 17.
Jul 7 06:12:49.248015 sshd[6140]: Accepted publickey for core from 139.178.68.195 port 53086 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY
Jul 7 06:12:49.248879 sshd-session[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:49.252432 systemd-logind[1592]: New session 18 of user core.
Jul 7 06:12:49.258318 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:12:50.038147 sshd[6149]: Connection closed by 139.178.68.195 port 53086
Jul 7 06:12:50.050364 sshd-session[6140]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:50.053735 systemd[1]: Started sshd@16-139.178.70.102:22-139.178.68.195:53096.service - OpenSSH per-connection server daemon (139.178.68.195:53096).
Jul 7 06:12:50.060814 systemd[1]: sshd@15-139.178.70.102:22-139.178.68.195:53086.service: Deactivated successfully.
Jul 7 06:12:50.062328 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:12:50.066269 systemd-logind[1592]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:12:50.069317 systemd-logind[1592]: Removed session 18.
Jul 7 06:12:50.142335 sshd[6156]: Accepted publickey for core from 139.178.68.195 port 53096 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY
Jul 7 06:12:50.144094 sshd-session[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:50.148001 systemd-logind[1592]: New session 19 of user core.
Jul 7 06:12:50.155159 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:12:50.327108 sshd[6161]: Connection closed by 139.178.68.195 port 53096
Jul 7 06:12:50.327468 sshd-session[6156]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:50.330670 systemd[1]: sshd@16-139.178.70.102:22-139.178.68.195:53096.service: Deactivated successfully.
Jul 7 06:12:50.332474 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:12:50.333461 systemd-logind[1592]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:12:50.334689 systemd-logind[1592]: Removed session 19.
Jul 7 06:12:53.437433 containerd[1626]: time="2025-07-07T06:12:53.437390981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"624ae46721977988752979094a716dcad8f293bd084fd1c966c75b4333ac85a6\" id:\"1cea787f66c8c17ae4940cfa7698eb1a9759cf481a42888e6482bc3f749fa28f\" pid:6196 exited_at:{seconds:1751868773 nanos:398119892}"
Jul 7 06:12:53.786450 containerd[1626]: time="2025-07-07T06:12:53.786418153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" id:\"f8cac65785936b5d24307b11666cc22a5afb23d004d8613dc999077303e9fa28\" pid:6201 exited_at:{seconds:1751868773 nanos:786147971}"
Jul 7 06:12:54.869896 kubelet[2913]: I0707 06:12:54.869870 2913 scope.go:117] "RemoveContainer" containerID="f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b"
Jul 7 06:12:55.024166 containerd[1626]: time="2025-07-07T06:12:55.024081863Z" level=info msg="RemoveContainer for \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\""
Jul 7 06:12:55.135881 containerd[1626]: time="2025-07-07T06:12:55.135798398Z" level=info msg="RemoveContainer for \"f703af2588cb724990757fe340a405bfdd9e6bb6a48230227a69c53e7a75479b\" returns successfully"
Jul 7 06:12:55.148420 kubelet[2913]: I0707 06:12:55.148394 2913 scope.go:117] "RemoveContainer" containerID="9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d"
Jul 7 06:12:55.154405 containerd[1626]: time="2025-07-07T06:12:55.154381139Z" level=info msg="RemoveContainer for \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\""
Jul 7 06:12:55.166492 containerd[1626]: time="2025-07-07T06:12:55.166461696Z" level=info msg="RemoveContainer for \"9c7bba36bf845d15762ba11695cb99af9cc388c2ab347cbda00428e12c41146d\" returns successfully"
Jul 7 06:12:55.187541 containerd[1626]: time="2025-07-07T06:12:55.187370496Z" level=info msg="StopPodSandbox for \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\""
Jul 7 06:12:55.371771 systemd[1]: Started sshd@17-139.178.70.102:22-139.178.68.195:53104.service - OpenSSH per-connection server daemon (139.178.68.195:53104).
Jul 7 06:12:55.521761 sshd[6234]: Accepted publickey for core from 139.178.68.195 port 53104 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY
Jul 7 06:12:55.526321 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:55.530863 systemd-logind[1592]: New session 20 of user core.
Jul 7 06:12:55.536245 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:55.551 [WARNING][6229] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:55.555 [INFO][6229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:55.555 [INFO][6229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" iface="eth0" netns=""
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:55.555 [INFO][6229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:55.555 [INFO][6229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:56.007 [INFO][6240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:56.012 [INFO][6240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:56.014 [INFO][6240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:56.043 [WARNING][6240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:56.043 [INFO][6240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:56.044 [INFO][6240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:12:56.083111 containerd[1626]: 2025-07-07 06:12:56.054 [INFO][6229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.087863 containerd[1626]: time="2025-07-07T06:12:56.087825904Z" level=info msg="TearDown network for sandbox \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" successfully"
Jul 7 06:12:56.087987 containerd[1626]: time="2025-07-07T06:12:56.087978287Z" level=info msg="StopPodSandbox for \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" returns successfully"
Jul 7 06:12:56.189332 containerd[1626]: time="2025-07-07T06:12:56.189293533Z" level=info msg="RemovePodSandbox for \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\""
Jul 7 06:12:56.190345 containerd[1626]: time="2025-07-07T06:12:56.190206160Z" level=info msg="Forcibly stopping sandbox \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\""
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.388 [WARNING][6287] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.388 [INFO][6287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.388 [INFO][6287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" iface="eth0" netns=""
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.388 [INFO][6287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.388 [INFO][6287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.449 [INFO][6294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.451 [INFO][6294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.451 [INFO][6294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.473 [WARNING][6294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.473 [INFO][6294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" HandleID="k8s-pod-network.6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466" Workload="localhost-k8s-calico--apiserver--945f9d44--bq9wq-eth0"
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.477 [INFO][6294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:12:56.484171 containerd[1626]: 2025-07-07 06:12:56.481 [INFO][6287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466"
Jul 7 06:12:56.487936 containerd[1626]: time="2025-07-07T06:12:56.484306638Z" level=info msg="TearDown network for sandbox \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" successfully"
Jul 7 06:12:56.501090 containerd[1626]: time="2025-07-07T06:12:56.500992440Z" level=info msg="Ensure that sandbox 6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466 in task-service has been cleanup successfully"
Jul 7 06:12:56.511243 containerd[1626]: time="2025-07-07T06:12:56.511187674Z" level=info msg="RemovePodSandbox \"6a4fc9d8da72deac6f61e83850948bc0b50edba520c670b4cb370c799ea4f466\" returns successfully"
Jul 7 06:12:56.512298 containerd[1626]: time="2025-07-07T06:12:56.512279223Z" level=info msg="StopPodSandbox for \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\""
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.580 [WARNING][6308] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.580 [INFO][6308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.580 [INFO][6308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" iface="eth0" netns=""
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.580 [INFO][6308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.580 [INFO][6308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.634 [INFO][6315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.641 [INFO][6315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.641 [INFO][6315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.663 [WARNING][6315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.663 [INFO][6315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.665 [INFO][6315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:12:56.670322 containerd[1626]: 2025-07-07 06:12:56.667 [INFO][6308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.671435 containerd[1626]: time="2025-07-07T06:12:56.670627131Z" level=info msg="TearDown network for sandbox \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" successfully"
Jul 7 06:12:56.671435 containerd[1626]: time="2025-07-07T06:12:56.670644284Z" level=info msg="StopPodSandbox for \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" returns successfully"
Jul 7 06:12:56.671435 containerd[1626]: time="2025-07-07T06:12:56.670929833Z" level=info msg="RemovePodSandbox for \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\""
Jul 7 06:12:56.671435 containerd[1626]: time="2025-07-07T06:12:56.670945819Z" level=info msg="Forcibly stopping sandbox \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\""
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.717 [WARNING][6329] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" WorkloadEndpoint="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.717 [INFO][6329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.717 [INFO][6329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" iface="eth0" netns=""
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.717 [INFO][6329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.717 [INFO][6329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.751 [INFO][6336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.751 [INFO][6336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.751 [INFO][6336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.756 [WARNING][6336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.756 [INFO][6336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" HandleID="k8s-pod-network.5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7" Workload="localhost-k8s-calico--apiserver--945f9d44--42wqf-eth0"
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.757 [INFO][6336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:12:56.764609 containerd[1626]: 2025-07-07 06:12:56.761 [INFO][6329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7"
Jul 7 06:12:56.764609 containerd[1626]: time="2025-07-07T06:12:56.764506090Z" level=info msg="TearDown network for sandbox \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" successfully"
Jul 7 06:12:56.769601 containerd[1626]: time="2025-07-07T06:12:56.769551756Z" level=info msg="Ensure that sandbox 5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7 in task-service has been cleanup successfully"
Jul 7 06:12:56.775569 containerd[1626]: time="2025-07-07T06:12:56.775509226Z" level=info msg="RemovePodSandbox \"5cd27859e7584df2e880bc69236ace7cdcd416494bf624f9e3a3d31a4da1bde7\" returns successfully"
Jul 7 06:12:56.896621 containerd[1626]: time="2025-07-07T06:12:56.896514049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"220b565e39f808de4aa5c51cae6029853ce51c95b52b27c7f7a9bb1a4da3b19e\" id:\"a9f4f219f0a59868a835dd1bda1387ea30451120410bc2adb7b4ce2da318489d\" pid:6265 exited_at:{seconds:1751868776 nanos:895772847}"
Jul 7 06:12:57.325106 sshd[6238]: Connection closed by 139.178.68.195 port 53104
Jul 7 06:12:57.339574 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:57.342665 systemd[1]: sshd@17-139.178.70.102:22-139.178.68.195:53104.service: Deactivated successfully.
Jul 7 06:12:57.345599 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:12:57.347042 systemd-logind[1592]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:12:57.349004 systemd-logind[1592]: Removed session 20.
Jul 7 06:13:02.384807 systemd[1]: Started sshd@18-139.178.70.102:22-139.178.68.195:46680.service - OpenSSH per-connection server daemon (139.178.68.195:46680).
Jul 7 06:13:02.781567 sshd[6370]: Accepted publickey for core from 139.178.68.195 port 46680 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY
Jul 7 06:13:02.801304 sshd-session[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:13:02.821259 systemd-logind[1592]: New session 21 of user core.
Jul 7 06:13:02.828501 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:13:03.462930 sshd[6372]: Connection closed by 139.178.68.195 port 46680
Jul 7 06:13:03.463196 sshd-session[6370]: pam_unix(sshd:session): session closed for user core
Jul 7 06:13:03.465474 systemd-logind[1592]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:13:03.465932 systemd[1]: sshd@18-139.178.70.102:22-139.178.68.195:46680.service: Deactivated successfully.
Jul 7 06:13:03.467395 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:13:03.468425 systemd-logind[1592]: Removed session 21.
Jul 7 06:13:05.107570 containerd[1626]: time="2025-07-07T06:13:05.107531626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c05b9e293047d80d625a0ae7fbdef27a4e452d5ff264e1dcdb0cf72c4ea8af9d\" id:\"8be0e08425f24050d2734a626f062e099d1d113f6d934672a5dcf26073bbef3d\" pid:6394 exited_at:{seconds:1751868785 nanos:74294315}"
Jul 7 06:13:08.475087 systemd[1]: Started sshd@19-139.178.70.102:22-139.178.68.195:36794.service - OpenSSH per-connection server daemon (139.178.68.195:36794).
Jul 7 06:13:08.779360 sshd[6417]: Accepted publickey for core from 139.178.68.195 port 36794 ssh2: RSA SHA256:7X4SiK+KmZA2miCaJT9Isk7VmwTZNhYs8oAuZv2KODY
Jul 7 06:13:08.810532 sshd-session[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:13:08.822316 systemd-logind[1592]: New session 22 of user core.
Jul 7 06:13:08.830206 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:13:09.923759 sshd[6420]: Connection closed by 139.178.68.195 port 36794
Jul 7 06:13:09.926362 sshd-session[6417]: pam_unix(sshd:session): session closed for user core
Jul 7 06:13:09.933394 systemd[1]: sshd@19-139.178.70.102:22-139.178.68.195:36794.service: Deactivated successfully.
Jul 7 06:13:09.934752 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:13:09.935758 systemd-logind[1592]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:13:09.937247 systemd-logind[1592]: Removed session 22.