Oct 13 05:26:56.614846 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025
Oct 13 05:26:56.614865 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:26:56.614871 kernel: Disabled fast string operations
Oct 13 05:26:56.614876 kernel: BIOS-provided physical RAM map:
Oct 13 05:26:56.614880 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Oct 13 05:26:56.614886 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Oct 13 05:26:56.614891 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Oct 13 05:26:56.614896 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Oct 13 05:26:56.614901 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Oct 13 05:26:56.614905 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Oct 13 05:26:56.614910 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Oct 13 05:26:56.614915 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Oct 13 05:26:56.614919 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Oct 13 05:26:56.614925 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Oct 13 05:26:56.614930 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Oct 13 05:26:56.614936 kernel: NX (Execute Disable) protection: active
Oct 13 05:26:56.614941 kernel: APIC: Static calls initialized
Oct 13 05:26:56.614947 kernel: SMBIOS 2.7 present.
Oct 13 05:26:56.614952 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Oct 13 05:26:56.614958 kernel: DMI: Memory slots populated: 1/128
Oct 13 05:26:56.614963 kernel: vmware: hypercall mode: 0x00
Oct 13 05:26:56.614968 kernel: Hypervisor detected: VMware
Oct 13 05:26:56.614974 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Oct 13 05:26:56.614979 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Oct 13 05:26:56.614984 kernel: vmware: using clock offset of 3495668376 ns
Oct 13 05:26:56.614989 kernel: tsc: Detected 3408.000 MHz processor
Oct 13 05:26:56.614995 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:26:56.615002 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:26:56.615008 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Oct 13 05:26:56.615013 kernel: total RAM covered: 3072M
Oct 13 05:26:56.615018 kernel: Found optimal setting for mtrr clean up
Oct 13 05:26:56.615024 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Oct 13 05:26:56.615030 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Oct 13 05:26:56.615035 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:26:56.615041 kernel: Using GB pages for direct mapping
Oct 13 05:26:56.615048 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:26:56.615053 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Oct 13 05:26:56.615059 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Oct 13 05:26:56.615065 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Oct 13 05:26:56.615070 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Oct 13 05:26:56.615078 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 13 05:26:56.615084 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Oct 13 05:26:56.615090 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Oct 13 05:26:56.615096 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Oct 13 05:26:56.615101 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Oct 13 05:26:56.615107 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Oct 13 05:26:56.615114 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Oct 13 05:26:56.615120 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Oct 13 05:26:56.615125 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Oct 13 05:26:56.615131 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Oct 13 05:26:56.615137 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 13 05:26:56.615143 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Oct 13 05:26:56.615148 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Oct 13 05:26:56.615154 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Oct 13 05:26:56.615160 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Oct 13 05:26:56.615166 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Oct 13 05:26:56.615172 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Oct 13 05:26:56.615177 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Oct 13 05:26:56.615182 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 13 05:26:56.615188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 13 05:26:56.615194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Oct 13 05:26:56.615201 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff]
Oct 13 05:26:56.615207 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff]
Oct 13 05:26:56.615213 kernel: Zone ranges:
Oct 13 05:26:56.615219 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:26:56.615225 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Oct 13 05:26:56.615230 kernel: Normal empty
Oct 13 05:26:56.615236 kernel: Device empty
Oct 13 05:26:56.615242 kernel: Movable zone start for each node
Oct 13 05:26:56.615249 kernel: Early memory node ranges
Oct 13 05:26:56.615254 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Oct 13 05:26:56.615260 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Oct 13 05:26:56.615265 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Oct 13 05:26:56.615271 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Oct 13 05:26:56.615276 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:26:56.615282 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Oct 13 05:26:56.615289 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Oct 13 05:26:56.615295 kernel: ACPI: PM-Timer IO Port: 0x1008
Oct 13 05:26:56.615301 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Oct 13 05:26:56.615306 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Oct 13 05:26:56.615312 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Oct 13 05:26:56.615318 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Oct 13 05:26:56.615323 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Oct 13 05:26:56.615329 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Oct 13 05:26:56.615335 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Oct 13 05:26:56.615341 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Oct 13 05:26:56.615346 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Oct 13 05:26:56.615352 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Oct 13 05:26:56.615358 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Oct 13 05:26:56.615363 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Oct 13 05:26:56.615368 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Oct 13 05:26:56.615374 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Oct 13 05:26:56.615381 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Oct 13 05:26:56.615387 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Oct 13 05:26:56.615392 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Oct 13 05:26:56.615397 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Oct 13 05:26:56.615403 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Oct 13 05:26:56.615408 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Oct 13 05:26:56.615414 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Oct 13 05:26:56.615419 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Oct 13 05:26:56.615426 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Oct 13 05:26:56.615432 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Oct 13 05:26:56.615437 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Oct 13 05:26:56.615443 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Oct 13 05:26:56.615448 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Oct 13 05:26:56.615454 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Oct 13 05:26:56.615460 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Oct 13 05:26:56.615465 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Oct 13 05:26:56.615472 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Oct 13 05:26:56.615477 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Oct 13 05:26:56.615483 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Oct 13 05:26:56.615488 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Oct 13 05:26:56.615493 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Oct 13 05:26:56.615499 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Oct 13 05:26:56.615504 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Oct 13 05:26:56.615510 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Oct 13 05:26:56.615517 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Oct 13 05:26:56.615522 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Oct 13 05:26:56.615532 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Oct 13 05:26:56.615539 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Oct 13 05:26:56.615545 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Oct 13 05:26:56.615551 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Oct 13 05:26:56.615556 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Oct 13 05:26:56.615563 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Oct 13 05:26:56.615569 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Oct 13 05:26:56.615575 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Oct 13 05:26:56.615581 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Oct 13 05:26:56.615587 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Oct 13 05:26:56.615593 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Oct 13 05:26:56.615599 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Oct 13 05:26:56.615605 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Oct 13 05:26:56.615611 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Oct 13 05:26:56.615617 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Oct 13 05:26:56.615623 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Oct 13 05:26:56.615628 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Oct 13 05:26:56.615634 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Oct 13 05:26:56.615640 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Oct 13 05:26:56.615646 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Oct 13 05:26:56.615652 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Oct 13 05:26:56.615659 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Oct 13 05:26:56.615664 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Oct 13 05:26:56.615670 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Oct 13 05:26:56.615676 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Oct 13 05:26:56.615682 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Oct 13 05:26:56.615688 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Oct 13 05:26:56.615693 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Oct 13 05:26:56.615699 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Oct 13 05:26:56.615705 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Oct 13 05:26:56.615712 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Oct 13 05:26:56.615718 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Oct 13 05:26:56.615724 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Oct 13 05:26:56.615744 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Oct 13 05:26:56.615752 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Oct 13 05:26:56.615758 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Oct 13 05:26:56.615764 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Oct 13 05:26:56.615772 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Oct 13 05:26:56.615777 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Oct 13 05:26:56.615783 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Oct 13 05:26:56.615789 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Oct 13 05:26:56.615795 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Oct 13 05:26:56.615801 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Oct 13 05:26:56.615807 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Oct 13 05:26:56.615812 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Oct 13 05:26:56.615820 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Oct 13 05:26:56.615826 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Oct 13 05:26:56.615832 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Oct 13 05:26:56.615838 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Oct 13 05:26:56.615843 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Oct 13 05:26:56.615849 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Oct 13 05:26:56.615855 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Oct 13 05:26:56.615861 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Oct 13 05:26:56.615866 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Oct 13 05:26:56.615879 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Oct 13 05:26:56.615885 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Oct 13 05:26:56.615891 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Oct 13 05:26:56.615897 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Oct 13 05:26:56.615902 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Oct 13 05:26:56.615908 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Oct 13 05:26:56.615914 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Oct 13 05:26:56.615920 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Oct 13 05:26:56.615927 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Oct 13 05:26:56.615933 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Oct 13 05:26:56.615938 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Oct 13 05:26:56.615944 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Oct 13 05:26:56.615950 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Oct 13 05:26:56.615957 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Oct 13 05:26:56.615962 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Oct 13 05:26:56.615968 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Oct 13 05:26:56.615975 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Oct 13 05:26:56.615981 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Oct 13 05:26:56.615987 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Oct 13 05:26:56.615992 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Oct 13 05:26:56.615998 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Oct 13 05:26:56.616004 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Oct 13 05:26:56.616010 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Oct 13 05:26:56.616016 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Oct 13 05:26:56.616023 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Oct 13 05:26:56.616028 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Oct 13 05:26:56.616034 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Oct 13 05:26:56.616040 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Oct 13 05:26:56.616045 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Oct 13 05:26:56.616052 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Oct 13 05:26:56.616057 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Oct 13 05:26:56.616063 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Oct 13 05:26:56.616070 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Oct 13 05:26:56.616076 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Oct 13 05:26:56.616082 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:26:56.616088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Oct 13 05:26:56.616094 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:26:56.616100 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Oct 13 05:26:56.616106 kernel: TSC deadline timer available
Oct 13 05:26:56.616112 kernel: CPU topo: Max. logical packages: 128
Oct 13 05:26:56.616119 kernel: CPU topo: Max. logical dies: 128
Oct 13 05:26:56.616125 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:26:56.616131 kernel: CPU topo: Max. threads per core: 1
Oct 13 05:26:56.616137 kernel: CPU topo: Num. cores per package: 1
Oct 13 05:26:56.616143 kernel: CPU topo: Num. threads per package: 1
Oct 13 05:26:56.616148 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs
Oct 13 05:26:56.616154 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Oct 13 05:26:56.616161 kernel: Booting paravirtualized kernel on VMware hypervisor
Oct 13 05:26:56.616168 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:26:56.616174 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Oct 13 05:26:56.616180 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Oct 13 05:26:56.616186 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Oct 13 05:26:56.616193 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Oct 13 05:26:56.616198 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Oct 13 05:26:56.616204 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Oct 13 05:26:56.616211 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Oct 13 05:26:56.616217 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Oct 13 05:26:56.616223 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Oct 13 05:26:56.616229 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Oct 13 05:26:56.616235 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Oct 13 05:26:56.616241 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Oct 13 05:26:56.616247 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Oct 13 05:26:56.616253 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Oct 13 05:26:56.616259 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Oct 13 05:26:56.616265 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Oct 13 05:26:56.616271 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Oct 13 05:26:56.616277 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Oct 13 05:26:56.616283 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Oct 13 05:26:56.616290 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:26:56.616298 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:26:56.616304 kernel: random: crng init done
Oct 13 05:26:56.616310 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Oct 13 05:26:56.616316 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Oct 13 05:26:56.616322 kernel: printk: log_buf_len min size: 262144 bytes
Oct 13 05:26:56.616328 kernel: printk: log_buf_len: 1048576 bytes
Oct 13 05:26:56.616335 kernel: printk: early log buf free: 245576(93%)
Oct 13 05:26:56.616341 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:26:56.616347 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 13 05:26:56.616353 kernel: Fallback order for Node 0: 0
Oct 13 05:26:56.616359 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524157
Oct 13 05:26:56.616365 kernel: Policy zone: DMA32
Oct 13 05:26:56.616371 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:26:56.616377 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Oct 13 05:26:56.616385 kernel: ftrace: allocating 40210 entries in 158 pages
Oct 13 05:26:56.616390 kernel: ftrace: allocated 158 pages with 5 groups
Oct 13 05:26:56.616397 kernel: Dynamic Preempt: voluntary
Oct 13 05:26:56.616403 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:26:56.616409 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:26:56.616415 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Oct 13 05:26:56.616421 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:26:56.616428 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:26:56.616434 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:26:56.616440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:26:56.616446 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Oct 13 05:26:56.616452 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 13 05:26:56.616458 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 13 05:26:56.616465 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Oct 13 05:26:56.616471 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Oct 13 05:26:56.616478 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Oct 13 05:26:56.616484 kernel: Console: colour VGA+ 80x25
Oct 13 05:26:56.616490 kernel: printk: legacy console [tty0] enabled
Oct 13 05:26:56.616496 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:26:56.616502 kernel: ACPI: Core revision 20240827
Oct 13 05:26:56.616508 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Oct 13 05:26:56.616514 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:26:56.616521 kernel: x2apic enabled
Oct 13 05:26:56.616527 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:26:56.616533 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:26:56.616540 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Oct 13 05:26:56.616546 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Oct 13 05:26:56.616552 kernel: Disabled fast string operations
Oct 13 05:26:56.616558 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Oct 13 05:26:56.616565 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Oct 13 05:26:56.616571 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:26:56.616577 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit
Oct 13 05:26:56.616583 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Oct 13 05:26:56.616589 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Oct 13 05:26:56.616595 kernel: RETBleed: Mitigation: Enhanced IBRS
Oct 13 05:26:56.616601 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:26:56.616608 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:26:56.616614 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 13 05:26:56.616620 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 13 05:26:56.616626 kernel: GDS: Unknown: Dependent on hypervisor status
Oct 13 05:26:56.616632 kernel: active return thunk: its_return_thunk
Oct 13 05:26:56.616638 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 13 05:26:56.616644 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:26:56.616651 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:26:56.616657 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:26:56.616665 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:26:56.616673 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 05:26:56.616679 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:26:56.616685 kernel: pid_max: default: 131072 minimum: 1024
Oct 13 05:26:56.616691 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:26:56.616698 kernel: landlock: Up and running.
Oct 13 05:26:56.616705 kernel: SELinux: Initializing.
Oct 13 05:26:56.616711 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 13 05:26:56.616717 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 13 05:26:56.616723 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Oct 13 05:26:56.616738 kernel: Performance Events: Skylake events, core PMU driver.
Oct 13 05:26:56.616747 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Oct 13 05:26:56.616755 kernel: core: CPUID marked event: 'instructions' unavailable
Oct 13 05:26:56.616761 kernel: core: CPUID marked event: 'bus cycles' unavailable
Oct 13 05:26:56.616767 kernel: core: CPUID marked event: 'cache references' unavailable
Oct 13 05:26:56.616773 kernel: core: CPUID marked event: 'cache misses' unavailable
Oct 13 05:26:56.616779 kernel: core: CPUID marked event: 'branch instructions' unavailable
Oct 13 05:26:56.616785 kernel: core: CPUID marked event: 'branch misses' unavailable
Oct 13 05:26:56.616791 kernel: ... version: 1
Oct 13 05:26:56.616798 kernel: ... bit width: 48
Oct 13 05:26:56.616804 kernel: ... generic registers: 4
Oct 13 05:26:56.616810 kernel: ... value mask: 0000ffffffffffff
Oct 13 05:26:56.616816 kernel: ... max period: 000000007fffffff
Oct 13 05:26:56.616822 kernel: ... fixed-purpose events: 0
Oct 13 05:26:56.616828 kernel: ... event mask: 000000000000000f
Oct 13 05:26:56.616834 kernel: signal: max sigframe size: 1776
Oct 13 05:26:56.616841 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:26:56.616847 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:26:56.616853 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level
Oct 13 05:26:56.616860 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 13 05:26:56.616866 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:26:56.616872 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:26:56.616878 kernel: .... node #0, CPUs: #1
Oct 13 05:26:56.616883 kernel: Disabled fast string operations
Oct 13 05:26:56.616891 kernel: smp: Brought up 1 node, 2 CPUs
Oct 13 05:26:56.616897 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Oct 13 05:26:56.616903 kernel: Memory: 1954948K/2096628K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 130292K reserved, 0K cma-reserved)
Oct 13 05:26:56.616909 kernel: devtmpfs: initialized
Oct 13 05:26:56.616915 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:26:56.616921 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Oct 13 05:26:56.616928 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:26:56.616935 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Oct 13 05:26:56.616941 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:26:56.616947 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:26:56.616953 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:26:56.616959 kernel: audit: type=2000 audit(1760333214.323:1): state=initialized audit_enabled=0 res=1
Oct 13 05:26:56.616965 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:26:56.616971 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:26:56.616978 kernel: cpuidle: using governor menu
Oct 13 05:26:56.616984 kernel: Simple Boot Flag at 0x36 set to 0x80
Oct 13 05:26:56.616990 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:26:56.616996 kernel: dca service started, version 1.12.1
Oct 13 05:26:56.617009 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f]
Oct 13 05:26:56.617017 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:26:56.617023 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:26:56.617032 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:26:56.617038 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:26:56.617045 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:26:56.617051 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:26:56.617057 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:26:56.617063 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:26:56.617070 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:26:56.617077 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:26:56.617083 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Oct 13 05:26:56.617090 kernel: ACPI: Interpreter enabled
Oct 13 05:26:56.617096 kernel: ACPI: PM: (supports S0 S1 S5)
Oct 13 05:26:56.617103 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:26:56.617109 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:26:56.617115 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:26:56.617123 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Oct 13 05:26:56.617129 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Oct 13 05:26:56.617242 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:26:56.617314 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Oct 13 05:26:56.617380 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Oct 13 05:26:56.617390 kernel: PCI host bridge to bus 0000:00
Oct 13 05:26:56.617457 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:26:56.617518 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Oct 13 05:26:56.617576 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 13 05:26:56.617635 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 13 05:26:56.617693 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Oct 13 05:26:56.617774 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Oct 13 05:26:56.617858 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:26:56.617930 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge
Oct 13 05:26:56.617998 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Oct 13 05:26:56.618072 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:26:56.618145 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint
Oct 13 05:26:56.618216 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f]
Oct 13 05:26:56.618283 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Oct 13 05:26:56.618360 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Oct 13 05:26:56.618432 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Oct 13 05:26:56.618498 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk
Oct 13 05:26:56.618568 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 13 05:26:56.618634 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Oct 13 05:26:56.618699 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Oct 13 05:26:56.620531 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint
Oct 13 05:26:56.620617 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf]
Oct 13 05:26:56.620686 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit]
Oct 13 05:26:56.620775 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:26:56.620848 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f]
Oct 13 05:26:56.620921 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref]
Oct 13 05:26:56.620990 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff]
Oct 13 05:26:56.621055 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref]
Oct 13 05:26:56.621121 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:26:56.621192 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge
Oct 13 05:26:56.621259 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Oct 13 05:26:56.621328 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Oct 13 05:26:56.621396 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Oct 13 05:26:56.621462 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Oct 13 05:26:56.621531 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 13 05:26:56.621599 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Oct 13 05:26:56.621664 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Oct 13 05:26:56.621741 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Oct 13 05:26:56.621814 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Oct 13 05:26:56.621889 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 13 05:26:56.621959 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Oct 13 05:26:56.622025 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Oct 13 05:26:56.622090 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Oct 13 05:26:56.622157 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Oct 13 05:26:56.622225 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Oct 13 05:26:56.622297 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port
Oct 13 05:26:56.622364 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Oct 13 05:26:56.622432 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Oct 13 05:26:56.622500 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Oct 13 05:26:56.622566 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 13 05:26:56.622632 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.622708 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.624003 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 13 05:26:56.624083 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 13 05:26:56.624157 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 13 05:26:56.624225 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.624298 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.624365 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 13 05:26:56.624431 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 13 05:26:56.624497 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 13 05:26:56.624566 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.624664 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.624764 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 13 05:26:56.624833 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 13 05:26:56.624900 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 13 05:26:56.624965 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.625041 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.625108 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 13 05:26:56.625173 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 13 05:26:56.625246 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 13 
05:26:56.625313 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.625383 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.625452 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 13 05:26:56.625517 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 13 05:26:56.625582 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 13 05:26:56.625647 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.625723 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.626800 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 13 05:26:56.626878 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 13 05:26:56.626947 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 13 05:26:56.627014 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.627088 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.627155 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 13 05:26:56.627221 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 13 05:26:56.627290 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 13 05:26:56.627365 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 13 05:26:56.627430 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.627502 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.627569 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 13 05:26:56.627635 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 13 05:26:56.627703 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 13 05:26:56.627783 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 13 05:26:56.627850 kernel: pci 0000:00:16.2: PME# supported 
from D0 D3hot D3cold Oct 13 05:26:56.627925 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.627991 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 13 05:26:56.630086 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 13 05:26:56.630197 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 13 05:26:56.630269 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.630347 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.630414 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 13 05:26:56.630481 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 13 05:26:56.630553 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 13 05:26:56.630619 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.630691 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.630771 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 13 05:26:56.630839 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 13 05:26:56.630908 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 13 05:26:56.630979 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.631048 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.631115 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 13 05:26:56.631181 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 13 05:26:56.631247 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 13 05:26:56.631312 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.631385 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.631452 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 13 
05:26:56.631517 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 13 05:26:56.631582 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 13 05:26:56.631647 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.631716 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.631794 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 13 05:26:56.631861 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 13 05:26:56.631956 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 13 05:26:56.632026 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 13 05:26:56.632093 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.632175 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.632246 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 13 05:26:56.632311 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 13 05:26:56.632376 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 13 05:26:56.632444 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 13 05:26:56.632510 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.632580 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.632645 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 13 05:26:56.632709 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 13 05:26:56.633738 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 13 05:26:56.633817 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 13 05:26:56.633886 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.633964 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.634031 kernel: pci 
0000:00:17.3: PCI bridge to [bus 16] Oct 13 05:26:56.634100 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 13 05:26:56.634166 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 13 05:26:56.634234 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.634306 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.634372 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 13 05:26:56.634438 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 13 05:26:56.634505 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 13 05:26:56.634570 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.634645 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.634713 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 13 05:26:56.634794 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 13 05:26:56.634861 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 13 05:26:56.634946 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.635019 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.635090 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 13 05:26:56.635156 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 13 05:26:56.635222 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Oct 13 05:26:56.635287 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.635358 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.635424 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 13 05:26:56.635491 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 13 05:26:56.635556 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Oct 13 05:26:56.635620 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.635690 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.635881 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 13 05:26:56.635953 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 13 05:26:56.636023 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 13 05:26:56.636089 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 13 05:26:56.636155 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.636228 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.636296 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 13 05:26:56.636834 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 13 05:26:56.636918 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 13 05:26:56.636985 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 13 05:26:56.637052 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.637124 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.637336 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 13 05:26:56.637411 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 13 05:26:56.637481 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 13 05:26:56.637547 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.637617 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.637684 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 13 05:26:56.637767 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 13 05:26:56.637847 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 
13 05:26:56.637913 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.637983 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.638988 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 13 05:26:56.639074 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 13 05:26:56.639144 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 13 05:26:56.639216 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.639293 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.639360 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 13 05:26:56.639425 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 13 05:26:56.639490 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 13 05:26:56.639556 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.639627 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.639696 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 13 05:26:56.639783 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 13 05:26:56.639850 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 13 05:26:56.639926 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.640000 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Oct 13 05:26:56.640071 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 13 05:26:56.640145 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 13 05:26:56.640215 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 13 05:26:56.640281 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.640349 kernel: pci_bus 0000:01: extended config space not accessible Oct 13 05:26:56.640420 kernel: pci 
0000:00:01.0: PCI bridge to [bus 01] Oct 13 05:26:56.640492 kernel: pci_bus 0000:02: extended config space not accessible Oct 13 05:26:56.640503 kernel: acpiphp: Slot [32] registered Oct 13 05:26:56.640510 kernel: acpiphp: Slot [33] registered Oct 13 05:26:56.640516 kernel: acpiphp: Slot [34] registered Oct 13 05:26:56.640523 kernel: acpiphp: Slot [35] registered Oct 13 05:26:56.640530 kernel: acpiphp: Slot [36] registered Oct 13 05:26:56.640537 kernel: acpiphp: Slot [37] registered Oct 13 05:26:56.640545 kernel: acpiphp: Slot [38] registered Oct 13 05:26:56.640551 kernel: acpiphp: Slot [39] registered Oct 13 05:26:56.640558 kernel: acpiphp: Slot [40] registered Oct 13 05:26:56.640564 kernel: acpiphp: Slot [41] registered Oct 13 05:26:56.640571 kernel: acpiphp: Slot [42] registered Oct 13 05:26:56.640577 kernel: acpiphp: Slot [43] registered Oct 13 05:26:56.640584 kernel: acpiphp: Slot [44] registered Oct 13 05:26:56.640590 kernel: acpiphp: Slot [45] registered Oct 13 05:26:56.640598 kernel: acpiphp: Slot [46] registered Oct 13 05:26:56.640604 kernel: acpiphp: Slot [47] registered Oct 13 05:26:56.640611 kernel: acpiphp: Slot [48] registered Oct 13 05:26:56.640617 kernel: acpiphp: Slot [49] registered Oct 13 05:26:56.640624 kernel: acpiphp: Slot [50] registered Oct 13 05:26:56.640630 kernel: acpiphp: Slot [51] registered Oct 13 05:26:56.640637 kernel: acpiphp: Slot [52] registered Oct 13 05:26:56.640645 kernel: acpiphp: Slot [53] registered Oct 13 05:26:56.640657 kernel: acpiphp: Slot [54] registered Oct 13 05:26:56.640663 kernel: acpiphp: Slot [55] registered Oct 13 05:26:56.640669 kernel: acpiphp: Slot [56] registered Oct 13 05:26:56.640676 kernel: acpiphp: Slot [57] registered Oct 13 05:26:56.640682 kernel: acpiphp: Slot [58] registered Oct 13 05:26:56.640689 kernel: acpiphp: Slot [59] registered Oct 13 05:26:56.640695 kernel: acpiphp: Slot [60] registered Oct 13 05:26:56.640703 kernel: acpiphp: Slot [61] registered Oct 13 05:26:56.640710 kernel: acpiphp: Slot 
[62] registered Oct 13 05:26:56.640716 kernel: acpiphp: Slot [63] registered Oct 13 05:26:56.641929 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Oct 13 05:26:56.642007 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Oct 13 05:26:56.642075 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Oct 13 05:26:56.642144 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Oct 13 05:26:56.642209 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Oct 13 05:26:56.642274 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Oct 13 05:26:56.642348 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Oct 13 05:26:56.642417 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Oct 13 05:26:56.642484 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Oct 13 05:26:56.642556 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 13 05:26:56.642628 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Oct 13 05:26:56.642697 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 13 05:26:56.642780 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 13 05:26:56.642852 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 13 05:26:56.642929 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 13 05:26:56.642997 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 13 05:26:56.643069 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 13 05:26:56.643137 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 13 05:26:56.643204 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 13 05:26:56.643283 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 13 05:26:56.643366 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Oct 13 05:26:56.643437 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Oct 13 05:26:56.643503 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Oct 13 05:26:56.643570 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Oct 13 05:26:56.643636 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Oct 13 05:26:56.643703 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Oct 13 05:26:56.645843 kernel: pci 0000:0b:00.0: supports D1 D2 Oct 13 05:26:56.645951 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Oct 13 05:26:56.646024 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Oct 13 05:26:56.646094 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 13 05:26:56.646166 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 13 05:26:56.646237 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 13 05:26:56.646307 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 13 05:26:56.646380 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 13 05:26:56.646448 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 13 05:26:56.646516 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 13 05:26:56.646585 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 13 05:26:56.646651 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 13 05:26:56.646720 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 13 05:26:56.646800 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 13 05:26:56.646873 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 13 05:26:56.646941 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 13 05:26:56.647008 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 13 05:26:56.647076 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 13 05:26:56.647144 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 13 05:26:56.647214 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 13 05:26:56.647281 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 13 05:26:56.647350 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 13 05:26:56.647418 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 13 05:26:56.647486 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 13 05:26:56.647555 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 13 05:26:56.647623 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 13 05:26:56.647694 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 13 05:26:56.647704 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Oct 13 05:26:56.647711 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Oct 13 05:26:56.647718 kernel: ACPI: PCI: Interrupt link LNKB 
disabled Oct 13 05:26:56.647725 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 05:26:56.650646 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Oct 13 05:26:56.650664 kernel: iommu: Default domain type: Translated Oct 13 05:26:56.650671 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 05:26:56.650678 kernel: PCI: Using ACPI for IRQ routing Oct 13 05:26:56.650685 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 05:26:56.650692 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Oct 13 05:26:56.650698 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Oct 13 05:26:56.650814 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Oct 13 05:26:56.650889 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Oct 13 05:26:56.650957 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 05:26:56.650967 kernel: vgaarb: loaded Oct 13 05:26:56.650974 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Oct 13 05:26:56.650981 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Oct 13 05:26:56.650988 kernel: clocksource: Switched to clocksource tsc-early Oct 13 05:26:56.650995 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 05:26:56.651003 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 05:26:56.651010 kernel: pnp: PnP ACPI init Oct 13 05:26:56.651083 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Oct 13 05:26:56.651147 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Oct 13 05:26:56.651208 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Oct 13 05:26:56.651274 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Oct 13 05:26:56.651343 kernel: pnp 00:06: [dma 2] Oct 13 05:26:56.651409 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Oct 13 05:26:56.651470 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Oct 13 
05:26:56.651530 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Oct 13 05:26:56.651539 kernel: pnp: PnP ACPI: found 8 devices Oct 13 05:26:56.651546 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 05:26:56.651555 kernel: NET: Registered PF_INET protocol family Oct 13 05:26:56.651561 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 05:26:56.651568 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 13 05:26:56.651575 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 05:26:56.651582 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 13 05:26:56.651588 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 13 05:26:56.651595 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 13 05:26:56.651603 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:26:56.651609 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 05:26:56.651616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 05:26:56.651623 kernel: NET: Registered PF_XDP protocol family Oct 13 05:26:56.651691 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Oct 13 05:26:56.651771 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 13 05:26:56.651841 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 13 05:26:56.651925 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 13 05:26:56.651997 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 13 05:26:56.652066 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Oct 13 05:26:56.652133 
kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Oct 13 05:26:56.652200 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Oct 13 05:26:56.652268 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Oct 13 05:26:56.652339 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Oct 13 05:26:56.652407 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Oct 13 05:26:56.652474 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Oct 13 05:26:56.652541 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Oct 13 05:26:56.652609 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Oct 13 05:26:56.652676 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Oct 13 05:26:56.656128 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Oct 13 05:26:56.656229 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Oct 13 05:26:56.656302 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Oct 13 05:26:56.656371 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Oct 13 05:26:56.656441 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Oct 13 05:26:56.656509 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Oct 13 05:26:56.656583 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Oct 13 05:26:56.656652 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Oct 13 05:26:56.656721 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Oct 13 05:26:56.656798 kernel: pci 
0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Oct 13 05:26:56.656867 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.656933 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.657001 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.657070 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.657139 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.657208 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.657276 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.657342 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.657410 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.657480 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.657548 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.657838 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.657911 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.657984 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.658053 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.658124 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.658192 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.658259 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.658328 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space 
Oct 13 05:26:56.658395 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.658463 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.658543 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.658635 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.658716 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.658811 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.658898 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.658991 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.659071 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.659147 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.659224 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.659297 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.659364 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.659443 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.659512 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.659580 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.659650 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.659716 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.659790 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.659859 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: 
can't assign; no space Oct 13 05:26:56.659924 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.659991 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660059 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.660125 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660189 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.660255 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660321 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.660386 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660452 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.660520 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660586 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.660651 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660716 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.660791 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660857 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.660926 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.660995 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.661065 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.661131 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.661197 kernel: pci 0000:00:17.5: bridge 
window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.661263 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.661329 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.661396 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.661464 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.661533 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.661599 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.661664 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.661737 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.661806 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.661882 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.661956 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.662026 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.662099 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.662187 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.662263 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.662332 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.662401 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.662466 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.662531 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.662597 kernel: pci 
0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.662662 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.662752 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.662819 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.662889 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Oct 13 05:26:56.662955 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Oct 13 05:26:56.663021 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Oct 13 05:26:56.663087 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Oct 13 05:26:56.663153 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Oct 13 05:26:56.663216 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Oct 13 05:26:56.663282 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 13 05:26:56.663357 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Oct 13 05:26:56.663424 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Oct 13 05:26:56.663490 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Oct 13 05:26:56.663556 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Oct 13 05:26:56.663634 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Oct 13 05:26:56.663708 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Oct 13 05:26:56.663784 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Oct 13 05:26:56.663852 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Oct 13 05:26:56.663917 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Oct 13 05:26:56.663988 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Oct 13 05:26:56.664053 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Oct 13 05:26:56.664119 kernel: pci 0000:00:15.2: bridge window 
[mem 0xfcd00000-0xfcdfffff] Oct 13 05:26:56.664183 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Oct 13 05:26:56.664248 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Oct 13 05:26:56.664312 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Oct 13 05:26:56.664379 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Oct 13 05:26:56.664445 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Oct 13 05:26:56.664509 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Oct 13 05:26:56.664574 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 13 05:26:56.664645 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Oct 13 05:26:56.664710 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Oct 13 05:26:56.664940 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Oct 13 05:26:56.665008 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Oct 13 05:26:56.665080 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Oct 13 05:26:56.665145 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Oct 13 05:26:56.665632 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Oct 13 05:26:56.665716 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Oct 13 05:26:56.665812 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Oct 13 05:26:56.665895 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Oct 13 05:26:56.665964 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Oct 13 05:26:56.666039 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Oct 13 05:26:56.666115 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Oct 13 05:26:56.666181 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Oct 13 05:26:56.666273 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Oct 13 05:26:56.666340 
kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Oct 13 05:26:56.666406 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Oct 13 05:26:56.666473 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Oct 13 05:26:56.666540 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Oct 13 05:26:56.666605 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Oct 13 05:26:56.666673 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Oct 13 05:26:56.666796 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Oct 13 05:26:56.666871 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Oct 13 05:26:56.666939 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Oct 13 05:26:56.667007 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 13 05:26:56.667076 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Oct 13 05:26:56.667141 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Oct 13 05:26:56.667214 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 13 05:26:56.667290 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Oct 13 05:26:56.667355 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Oct 13 05:26:56.667421 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Oct 13 05:26:56.667488 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Oct 13 05:26:56.667554 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Oct 13 05:26:56.667619 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Oct 13 05:26:56.667690 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Oct 13 05:26:56.667775 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Oct 13 05:26:56.667843 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 13 05:26:56.667910 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Oct 
13 05:26:56.667975 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Oct 13 05:26:56.668040 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Oct 13 05:26:56.668110 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 13 05:26:56.668177 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Oct 13 05:26:56.668242 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Oct 13 05:26:56.668307 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Oct 13 05:26:56.668372 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Oct 13 05:26:56.668439 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Oct 13 05:26:56.668504 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Oct 13 05:26:56.668569 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Oct 13 05:26:56.668638 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Oct 13 05:26:56.668705 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Oct 13 05:26:56.668784 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Oct 13 05:26:56.668850 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 13 05:26:56.668927 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Oct 13 05:26:56.668994 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Oct 13 05:26:56.669059 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 13 05:26:56.669130 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Oct 13 05:26:56.669194 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Oct 13 05:26:56.669259 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Oct 13 05:26:56.669326 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Oct 13 05:26:56.669392 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Oct 13 05:26:56.669457 kernel: pci 0000:00:17.6: bridge window [mem 
0xe6200000-0xe62fffff 64bit pref] Oct 13 05:26:56.669527 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Oct 13 05:26:56.669593 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Oct 13 05:26:56.669659 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 13 05:26:56.669727 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Oct 13 05:26:56.669802 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Oct 13 05:26:56.669867 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Oct 13 05:26:56.669934 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Oct 13 05:26:56.670005 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Oct 13 05:26:56.670071 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Oct 13 05:26:56.670136 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Oct 13 05:26:56.670199 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Oct 13 05:26:56.670265 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Oct 13 05:26:56.670329 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Oct 13 05:26:56.670396 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Oct 13 05:26:56.670463 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Oct 13 05:26:56.670528 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Oct 13 05:26:56.670592 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 13 05:26:56.670659 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Oct 13 05:26:56.670724 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Oct 13 05:26:56.670798 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Oct 13 05:26:56.670866 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Oct 13 05:26:56.670933 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Oct 13 05:26:56.670997 kernel: pci 0000:00:18.5: 
bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Oct 13 05:26:56.671064 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Oct 13 05:26:56.671129 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Oct 13 05:26:56.671196 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Oct 13 05:26:56.671264 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Oct 13 05:26:56.671329 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Oct 13 05:26:56.671393 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 13 05:26:56.671457 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Oct 13 05:26:56.671516 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 13 05:26:56.671576 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 13 05:26:56.671633 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Oct 13 05:26:56.671691 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Oct 13 05:26:56.671760 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Oct 13 05:26:56.671820 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Oct 13 05:26:56.671883 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Oct 13 05:26:56.671946 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Oct 13 05:26:56.672005 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Oct 13 05:26:56.672065 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Oct 13 05:26:56.672124 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Oct 13 05:26:56.672183 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Oct 13 05:26:56.672251 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Oct 13 05:26:56.672317 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Oct 13 05:26:56.672377 kernel: pci_bus 0000:03: resource 2 [mem 
0xc0000000-0xc01fffff 64bit pref] Oct 13 05:26:56.672445 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Oct 13 05:26:56.672507 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Oct 13 05:26:56.672567 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Oct 13 05:26:56.672630 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Oct 13 05:26:56.672693 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Oct 13 05:26:56.672767 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Oct 13 05:26:56.672835 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Oct 13 05:26:56.672896 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Oct 13 05:26:56.672967 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Oct 13 05:26:56.673031 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Oct 13 05:26:56.673096 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Oct 13 05:26:56.673156 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Oct 13 05:26:56.673221 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Oct 13 05:26:56.673281 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Oct 13 05:26:56.673348 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Oct 13 05:26:56.673409 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Oct 13 05:26:56.673473 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Oct 13 05:26:56.673533 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Oct 13 05:26:56.673593 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Oct 13 05:26:56.673660 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Oct 13 05:26:56.673720 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Oct 13 05:26:56.673797 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 
64bit pref] Oct 13 05:26:56.673865 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Oct 13 05:26:56.673925 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Oct 13 05:26:56.673984 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Oct 13 05:26:56.674050 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Oct 13 05:26:56.674111 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Oct 13 05:26:56.674175 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Oct 13 05:26:56.674235 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Oct 13 05:26:56.674298 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Oct 13 05:26:56.674361 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Oct 13 05:26:56.674426 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Oct 13 05:26:56.674486 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Oct 13 05:26:56.674550 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Oct 13 05:26:56.674610 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Oct 13 05:26:56.674677 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Oct 13 05:26:56.674743 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Oct 13 05:26:56.674804 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Oct 13 05:26:56.674869 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Oct 13 05:26:56.674935 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Oct 13 05:26:56.674996 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Oct 13 05:26:56.675062 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Oct 13 05:26:56.675122 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Oct 13 05:26:56.675181 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Oct 13 
05:26:56.675246 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Oct 13 05:26:56.675306 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Oct 13 05:26:56.675373 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Oct 13 05:26:56.675433 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Oct 13 05:26:56.675509 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Oct 13 05:26:56.675569 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Oct 13 05:26:56.675633 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Oct 13 05:26:56.675693 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Oct 13 05:26:56.675771 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Oct 13 05:26:56.675832 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Oct 13 05:26:56.675902 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Oct 13 05:26:56.675967 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Oct 13 05:26:56.676027 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Oct 13 05:26:56.676102 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Oct 13 05:26:56.676162 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Oct 13 05:26:56.676222 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Oct 13 05:26:56.676285 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Oct 13 05:26:56.676346 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Oct 13 05:26:56.676411 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Oct 13 05:26:56.676474 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Oct 13 05:26:56.676678 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Oct 13 05:26:56.676776 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] 
Oct 13 05:26:56.676844 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Oct 13 05:26:56.676987 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Oct 13 05:26:56.677268 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Oct 13 05:26:56.677333 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Oct 13 05:26:56.677414 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Oct 13 05:26:56.677476 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Oct 13 05:26:56.677563 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 13 05:26:56.677576 kernel: PCI: CLS 32 bytes, default 64 Oct 13 05:26:56.677584 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 13 05:26:56.677591 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Oct 13 05:26:56.677598 kernel: clocksource: Switched to clocksource tsc Oct 13 05:26:56.677605 kernel: Initialise system trusted keyrings Oct 13 05:26:56.677612 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 13 05:26:56.677618 kernel: Key type asymmetric registered Oct 13 05:26:56.677626 kernel: Asymmetric key parser 'x509' registered Oct 13 05:26:56.677632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 05:26:56.677639 kernel: io scheduler mq-deadline registered Oct 13 05:26:56.677646 kernel: io scheduler kyber registered Oct 13 05:26:56.677653 kernel: io scheduler bfq registered Oct 13 05:26:56.677721 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Oct 13 05:26:56.677831 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.677903 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Oct 13 05:26:56.681223 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.681312 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Oct 13 05:26:56.681382 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.681453 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Oct 13 05:26:56.681525 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.681594 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Oct 13 05:26:56.681661 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.681737 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Oct 13 05:26:56.681808 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.681876 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Oct 13 05:26:56.681943 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.682019 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Oct 13 05:26:56.682086 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.682153 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Oct 13 05:26:56.682217 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.682284 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Oct 13 05:26:56.682352 kernel: pcieport 
0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.682419 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Oct 13 05:26:56.682484 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.682552 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Oct 13 05:26:56.682629 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.682698 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Oct 13 05:26:56.686479 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.686564 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Oct 13 05:26:56.686634 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.686704 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Oct 13 05:26:56.686800 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.686872 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Oct 13 05:26:56.686941 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.687017 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Oct 13 05:26:56.687084 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Oct 13 05:26:56.687153 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Oct 13 
Oct 13 05:26:56.687219 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.687288 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Oct 13 05:26:56.687354 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.687424 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Oct 13 05:26:56.687491 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.687559 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Oct 13 05:26:56.687625 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.687692 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Oct 13 05:26:56.690385 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.690478 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Oct 13 05:26:56.690551 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.690622 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Oct 13 05:26:56.690691 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.691784 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Oct 13 05:26:56.691868 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.691945 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Oct 13 05:26:56.692014 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.692083 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Oct 13 05:26:56.692150 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.692218 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Oct 13 05:26:56.692284 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.692355 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Oct 13 05:26:56.692422 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.692490 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Oct 13 05:26:56.692557 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.692624 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Oct 13 05:26:56.692691 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.692769 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Oct 13 05:26:56.692837 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Oct 13 05:26:56.692850 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:26:56.692857 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:26:56.692864 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:26:56.692871 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Oct 13 05:26:56.692879 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:26:56.692886 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:26:56.692956 kernel: rtc_cmos 00:01: registered as rtc0
Oct 13 05:26:56.693020 kernel: rtc_cmos 00:01: setting system clock to 2025-10-13T05:26:55 UTC (1760333215)
Oct 13 05:26:56.693030 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:26:56.693090 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Oct 13 05:26:56.693101 kernel: intel_pstate: CPU model not supported
Oct 13 05:26:56.693108 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:26:56.693115 kernel: Segment Routing with IPv6
Oct 13 05:26:56.693122 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:26:56.693129 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:26:56.693137 kernel: Key type dns_resolver registered
Oct 13 05:26:56.693144 kernel: IPI shorthand broadcast: enabled
Oct 13 05:26:56.693151 kernel: sched_clock: Marking stable (1593085482, 177977480)->(1784945223, -13882261)
Oct 13 05:26:56.693159 kernel: registered taskstats version 1
Oct 13 05:26:56.693166 kernel: Loading compiled-in X.509 certificates
Oct 13 05:26:56.693173 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7'
Oct 13 05:26:56.693180 kernel: Demotion targets for Node 0: null
Oct 13 05:26:56.693187 kernel: Key type .fscrypt registered
Oct 13 05:26:56.693194 kernel: Key type fscrypt-provisioning registered
Oct 13 05:26:56.693202 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:26:56.693209 kernel: ima: Allocated hash algorithm: sha1
Oct 13 05:26:56.693216 kernel: ima: No architecture policies found
Oct 13 05:26:56.693223 kernel: clk: Disabling unused clocks
Oct 13 05:26:56.693230 kernel: Freeing unused kernel image (initmem) memory: 24532K
Oct 13 05:26:56.693237 kernel: Write protecting the kernel read-only data: 24576k
Oct 13 05:26:56.693243 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K
Oct 13 05:26:56.693250 kernel: Run /init as init process
Oct 13 05:26:56.693258 kernel: with arguments:
Oct 13 05:26:56.693265 kernel: /init
Oct 13 05:26:56.693272 kernel: with environment:
Oct 13 05:26:56.693280 kernel: HOME=/
Oct 13 05:26:56.693287 kernel: TERM=linux
Oct 13 05:26:56.693293 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 05:26:56.693300 kernel: SCSI subsystem initialized
Oct 13 05:26:56.693308 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Oct 13 05:26:56.693315 kernel: vmw_pvscsi: using 64bit dma
Oct 13 05:26:56.693322 kernel: vmw_pvscsi: max_id: 16
Oct 13 05:26:56.693329 kernel: vmw_pvscsi: setting ring_pages to 8
Oct 13 05:26:56.693336 kernel: vmw_pvscsi: enabling reqCallThreshold
Oct 13 05:26:56.693343 kernel: vmw_pvscsi: driver-based request coalescing enabled
Oct 13 05:26:56.693349 kernel: vmw_pvscsi: using MSI-X
Oct 13 05:26:56.693428 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Oct 13 05:26:56.693503 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Oct 13 05:26:56.693584 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Oct 13 05:26:56.693657 kernel: sd 0:0:0:0: [sda] 25804800 512-byte logical blocks: (13.2 GB/12.3 GiB)
Oct 13 05:26:56.693755 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 13 05:26:56.693831 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Oct 13 05:26:56.693916 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Oct 13 05:26:56.693995 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Oct 13 05:26:56.694005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 13 05:26:56.694075 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 13 05:26:56.694085 kernel: libata version 3.00 loaded.
Oct 13 05:26:56.694153 kernel: ata_piix 0000:00:07.1: version 2.13
Oct 13 05:26:56.694228 kernel: scsi host1: ata_piix
Oct 13 05:26:56.694302 kernel: scsi host2: ata_piix
Oct 13 05:26:56.694312 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0
Oct 13 05:26:56.694319 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0
Oct 13 05:26:56.694326 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Oct 13 05:26:56.694402 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Oct 13 05:26:56.694476 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Oct 13 05:26:56.694486 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 05:26:56.694493 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 05:26:56.694500 kernel: device-mapper: uevent: version 1.0.3
Oct 13 05:26:56.694507 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 05:26:56.694575 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 13 05:26:56.694585 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 13 05:26:56.694595 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:56.694602 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:56.694609 kernel: raid6: avx2x4 gen() 43105 MB/s
Oct 13 05:26:56.694615 kernel: raid6: avx2x2 gen() 48218 MB/s
Oct 13 05:26:56.694622 kernel: raid6: avx2x1 gen() 41809 MB/s
Oct 13 05:26:56.694629 kernel: raid6: using algorithm avx2x2 gen() 48218 MB/s
Oct 13 05:26:56.694635 kernel: raid6: .... xor() 30514 MB/s, rmw enabled
Oct 13 05:26:56.694643 kernel: raid6: using avx2x2 recovery algorithm
Oct 13 05:26:56.694650 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:56.694657 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:56.694663 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:56.694669 kernel: xor: automatically using best checksumming function avx
Oct 13 05:26:56.694677 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:56.694684 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 05:26:56.694691 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (210)
Oct 13 05:26:56.694700 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3
Oct 13 05:26:56.694707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:26:56.694714 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 13 05:26:56.694721 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 05:26:56.694727 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 05:26:56.694797 kernel: Invalid ELF header magic: != \u007fELF
Oct 13 05:26:56.694804 kernel: loop: module loaded
Oct 13 05:26:56.694813 kernel: loop0: detected capacity change from 0 to 100048
Oct 13 05:26:56.694820 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 13 05:26:56.694829 systemd[1]: Successfully made /usr/ read-only.
Oct 13 05:26:56.694838 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:26:56.694846 systemd[1]: Detected virtualization vmware.
Oct 13 05:26:56.694853 systemd[1]: Detected architecture x86-64.
Oct 13 05:26:56.694861 systemd[1]: Running in initrd.
Oct 13 05:26:56.694868 systemd[1]: No hostname configured, using default hostname.
Oct 13 05:26:56.694875 systemd[1]: Hostname set to .
Oct 13 05:26:56.694882 systemd[1]: Initializing machine ID from random generator.
Oct 13 05:26:56.694890 systemd[1]: Queued start job for default target initrd.target.
Oct 13 05:26:56.694897 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 05:26:56.694905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:26:56.694912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:26:56.694920 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 05:26:56.694927 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:26:56.694934 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 05:26:56.694941 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 05:26:56.694950 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:26:56.694957 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:26:56.694964 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:26:56.694971 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:26:56.694978 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:26:56.694985 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:26:56.694992 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:26:56.695001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:26:56.695008 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:26:56.695015 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 05:26:56.695023 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 05:26:56.695030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:26:56.695037 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:26:56.695044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:26:56.695052 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:26:56.695059 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Oct 13 05:26:56.695067 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 05:26:56.695074 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:26:56.695081 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 05:26:56.695088 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 05:26:56.695096 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 05:26:56.695104 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:26:56.695111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:26:56.695117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:26:56.695144 systemd-journald[346]: Collecting audit messages is disabled.
Oct 13 05:26:56.695163 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 05:26:56.695171 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:26:56.695179 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 05:26:56.695186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:26:56.695194 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 05:26:56.695201 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Oct 13 05:26:56.695208 kernel: Bridge firewalling registered
Oct 13 05:26:56.695216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:26:56.695224 systemd-journald[346]: Journal started
Oct 13 05:26:56.695240 systemd-journald[346]: Runtime Journal (/run/log/journal/3fa6d17520f741e1a92ec922adace078) is 4.8M, max 38.8M, 34M free.
Oct 13 05:26:56.686794 systemd-modules-load[353]: Inserted module 'br_netfilter'
Oct 13 05:26:56.700492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:26:56.700743 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:26:56.701032 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:26:56.702982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 05:26:56.703799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:26:56.705906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:26:56.715917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:26:56.721501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:26:56.723234 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:26:56.724499 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:26:56.725421 systemd-tmpfiles[374]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 05:26:56.729048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:26:56.733173 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:26:56.734830 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 05:26:56.749485 dracut-cmdline[393]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 ip=139.178.70.110::139.178.70.97:28::ens192:off:1.1.1.1:1.0.0.1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d
Oct 13 05:26:56.761753 systemd-resolved[382]: Positive Trust Anchors:
Oct 13 05:26:56.761765 systemd-resolved[382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:26:56.761767 systemd-resolved[382]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 13 05:26:56.761789 systemd-resolved[382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:26:56.776262 systemd-resolved[382]: Defaulting to hostname 'linux'.
Oct 13 05:26:56.777282 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:26:56.777430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:26:56.824756 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 05:26:56.836749 kernel: iscsi: registered transport (tcp)
Oct 13 05:26:56.866986 kernel: iscsi: registered transport (qla4xxx)
Oct 13 05:26:56.867041 kernel: QLogic iSCSI HBA Driver
Oct 13 05:26:56.883338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:26:56.902450 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:26:56.903526 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:26:56.927215 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:26:56.928146 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 05:26:56.928802 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 05:26:56.949084 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:26:56.950241 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:26:56.967589 systemd-udevd[634]: Using default interface naming scheme 'v257'.
Oct 13 05:26:56.974248 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:26:56.975806 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 05:26:56.993883 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:26:56.995890 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:26:56.996285 dracut-pre-trigger[705]: rd.md=0: removing MD RAID activation
Oct 13 05:26:57.015778 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:26:57.017814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:26:57.028797 systemd-networkd[750]: lo: Link UP
Oct 13 05:26:57.028803 systemd-networkd[750]: lo: Gained carrier
Oct 13 05:26:57.029307 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:26:57.029449 systemd[1]: Reached target network.target - Network.
Oct 13 05:26:57.099586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:26:57.100574 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 05:26:57.224457 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Oct 13 05:26:57.231829 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI
Oct 13 05:26:57.231886 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Oct 13 05:26:57.233133 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Oct 13 05:26:57.240924 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Oct 13 05:26:57.247310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Oct 13 05:26:57.251609 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 05:26:57.255779 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Oct 13 05:26:57.278751 kernel: cryptd: max_cpu_qlen set to 1000
Oct 13 05:26:57.283207 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Oct 13 05:26:57.285463 systemd-networkd[750]: eth0: Interface name change detected, renamed to ens192.
Oct 13 05:26:57.298195 kernel: AES CTR mode by8 optimization enabled
Oct 13 05:26:57.297404 (udev-worker)[787]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Oct 13 05:26:57.318024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:26:57.318134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:26:57.318291 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:26:57.329030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:26:57.361768 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Oct 13 05:26:57.363764 systemd-networkd[750]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Oct 13 05:26:57.365803 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Oct 13 05:26:57.365954 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Oct 13 05:26:57.367051 systemd-networkd[750]: ens192: Link UP
Oct 13 05:26:57.367056 systemd-networkd[750]: ens192: Gained carrier
Oct 13 05:26:57.379526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:26:57.409136 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:26:57.409864 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:26:57.410128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:26:57.410361 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:26:57.411193 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 05:26:57.427182 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:26:57.638787 systemd-resolved[382]: Detected conflict on linux IN A 139.178.70.110
Oct 13 05:26:57.638798 systemd-resolved[382]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Oct 13 05:26:58.348158 disk-uuid[815]: Warning: The kernel is still using the old partition table.
Oct 13 05:26:58.348158 disk-uuid[815]: The new table will be used at the next reboot or after you
Oct 13 05:26:58.348158 disk-uuid[815]: run partprobe(8) or kpartx(8)
Oct 13 05:26:58.348158 disk-uuid[815]: The operation has completed successfully.
Oct 13 05:26:58.355190 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 05:26:58.355287 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 05:26:58.356353 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 05:26:58.384748 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (906)
Oct 13 05:26:58.387145 kernel: BTRFS info (device sda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:26:58.387172 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:26:58.394788 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 13 05:26:58.394830 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 05:26:58.399851 kernel: BTRFS info (device sda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:26:58.400222 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 05:26:58.401358 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 05:26:58.525185 systemd-networkd[750]: ens192: Gained IPv6LL
Oct 13 05:26:58.538217 ignition[925]: Ignition 2.22.0
Oct 13 05:26:58.538232 ignition[925]: Stage: fetch-offline
Oct 13 05:26:58.538264 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:26:58.538271 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 13 05:26:58.538323 ignition[925]: parsed url from cmdline: ""
Oct 13 05:26:58.538325 ignition[925]: no config URL provided
Oct 13 05:26:58.538328 ignition[925]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 05:26:58.538333 ignition[925]: no config at "/usr/lib/ignition/user.ign"
Oct 13 05:26:58.538925 ignition[925]: config successfully fetched
Oct 13 05:26:58.538944 ignition[925]: parsing config with SHA512: e93ac046968eb6edbafec7b5628716d24d7f523f7daff4ab1330a0269a4c64e0908969587f2ee60ce7944321d138b5ee6e5f704ea17c0e86aac021bd09a735f4
Oct 13 05:26:58.543953 unknown[925]: fetched base config from "system"
Oct 13 05:26:58.543959 unknown[925]: fetched user config from "vmware"
Oct 13 05:26:58.544188 ignition[925]: fetch-offline: fetch-offline passed
Oct 13 05:26:58.544231 ignition[925]: Ignition finished successfully
Oct 13 05:26:58.545959 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:26:58.546210 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 13 05:26:58.546784 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 05:26:58.564592 ignition[932]: Ignition 2.22.0
Oct 13 05:26:58.564603 ignition[932]: Stage: kargs
Oct 13 05:26:58.564747 ignition[932]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:26:58.564756 ignition[932]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 13 05:26:58.565450 ignition[932]: kargs: kargs passed
Oct 13 05:26:58.565488 ignition[932]: Ignition finished successfully
Oct 13 05:26:58.567158 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 05:26:58.568056 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 05:26:58.587671 ignition[939]: Ignition 2.22.0
Oct 13 05:26:58.587682 ignition[939]: Stage: disks
Oct 13 05:26:58.587830 ignition[939]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:26:58.587836 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 13 05:26:58.588444 ignition[939]: disks: disks passed
Oct 13 05:26:58.588481 ignition[939]: Ignition finished successfully
Oct 13 05:26:58.589495 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 05:26:58.589745 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 05:26:58.589870 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 05:26:58.590064 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:26:58.590257 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:26:58.590427 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:26:58.591215 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 05:26:58.632933 systemd-fsck[947]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Oct 13 05:26:58.634968 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 05:26:58.635791 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 05:26:59.378439 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 05:26:59.378839 kernel: EXT4-fs (sda9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none.
Oct 13 05:26:59.378803 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:26:59.400507 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:26:59.403649 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 05:26:59.404013 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 13 05:26:59.404049 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 05:26:59.404072 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:26:59.412021 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 05:26:59.413001 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 05:26:59.524757 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (956)
Oct 13 05:26:59.539442 kernel: BTRFS info (device sda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:26:59.539487 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:26:59.604757 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 13 05:26:59.604807 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 05:26:59.605555 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:26:59.802568 initrd-setup-root[980]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 05:26:59.806868 initrd-setup-root[987]: cut: /sysroot/etc/group: No such file or directory
Oct 13 05:26:59.809769 initrd-setup-root[994]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 05:26:59.813337 initrd-setup-root[1001]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 05:26:59.897672 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 05:26:59.898689 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 05:26:59.900840 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 05:26:59.912053 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 05:26:59.912989 kernel: BTRFS info (device sda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:26:59.934857 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 05:26:59.940964 ignition[1070]: INFO : Ignition 2.22.0
Oct 13 05:26:59.941332 ignition[1070]: INFO : Stage: mount
Oct 13 05:26:59.942397 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:26:59.942397 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 13 05:26:59.942397 ignition[1070]: INFO : mount: mount passed
Oct 13 05:26:59.942397 ignition[1070]: INFO : Ignition finished successfully
Oct 13 05:26:59.943950 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 05:26:59.945815 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 05:27:00.379669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:27:00.400751 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1083)
Oct 13 05:27:00.402768 kernel: BTRFS info (device sda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8
Oct 13 05:27:00.402809 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:27:00.411004 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 13 05:27:00.411059 kernel: BTRFS info (device sda6): enabling free space tree
Oct 13 05:27:00.412551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:27:00.430609 ignition[1100]: INFO : Ignition 2.22.0
Oct 13 05:27:00.430609 ignition[1100]: INFO : Stage: files
Oct 13 05:27:00.430994 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:27:00.430994 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Oct 13 05:27:00.431343 ignition[1100]: DEBUG : files: compiled without relabeling support, skipping
Oct 13 05:27:00.435322 ignition[1100]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 13 05:27:00.435322 ignition[1100]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 13 05:27:00.437977 ignition[1100]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 13 05:27:00.438157 ignition[1100]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 13 05:27:00.438289 ignition[1100]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 13 05:27:00.438244 unknown[1100]: wrote ssh authorized keys file for user: core
Oct 13 05:27:00.439920 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:27:00.440145 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 13 05:27:00.489067 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 13 05:27:00.554077 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 13 05:27:00.554077 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 13 05:27:00.554077 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 13 05:27:00.554077 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:27:00.555505 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 13 05:27:00.555505 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:27:00.555505 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 13 05:27:00.555505 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:27:00.555505 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 13 05:27:00.557530 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:27:00.557767 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 13 05:27:00.557767 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:27:00.560066 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:27:00.560315 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:27:00.560315 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 13 05:27:00.995274 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 13 05:27:01.483840 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 13 05:27:01.483840 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 13 05:27:01.486942 ignition[1100]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Oct 13 05:27:01.486942 ignition[1100]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Oct 13 05:27:01.490858 ignition[1100]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:27:01.491726 ignition[1100]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 13 05:27:01.491726 ignition[1100]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Oct 13 05:27:01.491726 ignition[1100]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Oct 13 05:27:01.491726 ignition[1100]:
INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:27:01.492522 ignition[1100]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:27:01.492522 ignition[1100]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 13 05:27:01.492522 ignition[1100]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 05:27:01.701865 ignition[1100]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:27:01.704398 ignition[1100]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:27:01.704608 ignition[1100]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 05:27:01.704608 ignition[1100]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:27:01.704608 ignition[1100]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:27:01.704608 ignition[1100]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:27:01.706050 ignition[1100]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:27:01.706050 ignition[1100]: INFO : files: files passed Oct 13 05:27:01.706050 ignition[1100]: INFO : Ignition finished successfully Oct 13 05:27:01.705548 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:27:01.707430 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:27:01.708848 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Oct 13 05:27:01.724842 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:27:01.725079 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 05:27:01.727153 initrd-setup-root-after-ignition[1132]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:27:01.727549 initrd-setup-root-after-ignition[1132]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:27:01.727700 initrd-setup-root-after-ignition[1136]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:27:01.728284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:27:01.728631 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:27:01.729363 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:27:01.768371 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:27:01.768551 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:27:01.768868 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:27:01.768987 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:27:01.769463 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:27:01.770007 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:27:01.788003 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:27:01.788861 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:27:01.802725 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 05:27:01.803128 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Oct 13 05:27:01.803424 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:27:01.803723 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:27:01.803994 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:27:01.804176 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:27:01.804578 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:27:01.804852 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:27:01.805100 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:27:01.805352 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:27:01.805653 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:27:01.805937 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 05:27:01.806186 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:27:01.806472 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:27:01.806785 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:27:01.807033 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:27:01.807315 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:27:01.807534 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:27:01.807719 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:27:01.808136 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:27:01.808390 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:27:01.808671 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:27:01.808844 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 13 05:27:01.809124 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:27:01.809203 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:27:01.809665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:27:01.809846 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:27:01.810156 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:27:01.810397 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:27:01.813771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:27:01.813976 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:27:01.814111 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:27:01.814245 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:27:01.814306 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:27:01.814443 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:27:01.814498 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:27:01.814658 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:27:01.814752 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:27:01.814921 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:27:01.814992 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:27:01.816895 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:27:01.817118 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:27:01.817291 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:27:01.818069 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 13 05:27:01.818849 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:27:01.818940 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:27:01.819355 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:27:01.819426 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:27:01.820873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:27:01.821084 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:27:01.825024 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:27:01.826841 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 05:27:01.835701 ignition[1156]: INFO : Ignition 2.22.0 Oct 13 05:27:01.835701 ignition[1156]: INFO : Stage: umount Oct 13 05:27:01.836492 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:27:01.836492 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Oct 13 05:27:01.836492 ignition[1156]: INFO : umount: umount passed Oct 13 05:27:01.836492 ignition[1156]: INFO : Ignition finished successfully Oct 13 05:27:01.837601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:27:01.838990 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:27:01.839064 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:27:01.839667 systemd[1]: Stopped target network.target - Network. Oct 13 05:27:01.839909 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:27:01.840062 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:27:01.840292 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:27:01.840414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:27:01.840655 systemd[1]: ignition-setup.service: Deactivated successfully. 
Oct 13 05:27:01.840871 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:27:01.841112 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:27:01.841137 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:27:01.841529 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:27:01.841828 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:27:01.846240 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:27:01.846476 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:27:01.851921 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:27:01.852178 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:27:01.853403 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:27:01.853547 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:27:01.853572 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:27:01.854557 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:27:01.854941 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:27:01.855107 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:27:01.855395 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Oct 13 05:27:01.855548 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Oct 13 05:27:01.855974 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:27:01.856123 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:27:01.856353 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:27:01.856482 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Oct 13 05:27:01.856868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:27:01.870499 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:27:01.870591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:27:01.870973 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:27:01.871015 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:27:01.871269 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:27:01.871287 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:27:01.871455 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:27:01.871483 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:27:01.871784 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:27:01.871810 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:27:01.872090 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:27:01.872114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:27:01.873815 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:27:01.873934 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:27:01.873969 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:27:01.874172 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:27:01.874199 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:27:01.874344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:27:01.874370 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 13 05:27:01.887491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:27:01.887571 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:27:01.901943 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:27:01.902030 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:27:02.184210 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:27:02.184282 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:27:02.184756 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:27:02.184883 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:27:02.184917 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:27:02.185611 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:27:02.202667 systemd[1]: Switching root. Oct 13 05:27:02.238281 systemd-journald[346]: Journal stopped Oct 13 05:27:03.433598 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). Oct 13 05:27:03.433627 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:27:03.433640 kernel: SELinux: policy capability open_perms=1 Oct 13 05:27:03.433647 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:27:03.433654 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:27:03.433660 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:27:03.433672 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:27:03.433680 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:27:03.433686 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:27:03.433692 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:27:03.433705 systemd[1]: Successfully loaded SELinux policy in 59.195ms. 
Oct 13 05:27:03.433713 kernel: audit: type=1403 audit(1760333222.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:27:03.433720 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.701ms. Oct 13 05:27:03.439118 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:27:03.439145 systemd[1]: Detected virtualization vmware. Oct 13 05:27:03.439162 systemd[1]: Detected architecture x86-64. Oct 13 05:27:03.439170 systemd[1]: Detected first boot. Oct 13 05:27:03.439178 systemd[1]: Initializing machine ID from random generator. Oct 13 05:27:03.439185 zram_generator::config[1200]: No configuration found. Oct 13 05:27:03.439299 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Oct 13 05:27:03.439310 kernel: Guest personality initialized and is active Oct 13 05:27:03.439317 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:27:03.439324 kernel: Initialized host personality Oct 13 05:27:03.439331 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:27:03.439340 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:27:03.439349 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 13 05:27:03.439357 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Oct 13 05:27:03.439364 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:27:03.439371 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Oct 13 05:27:03.439378 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:27:03.439386 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:27:03.439394 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:27:03.439402 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:27:03.439409 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 05:27:03.439416 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:27:03.439424 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:27:03.439433 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:27:03.439440 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:27:03.439447 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:27:03.439455 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:27:03.439464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:27:03.439471 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:27:03.439479 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:27:03.439488 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:27:03.439495 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:27:03.439503 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:27:03.439511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Oct 13 05:27:03.439518 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:27:03.439526 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:27:03.439534 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:27:03.439541 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:27:03.439549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:27:03.439556 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:27:03.439564 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:27:03.439571 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:27:03.439578 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:27:03.439587 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:27:03.439595 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:27:03.439602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:27:03.439663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:27:03.439673 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:27:03.439681 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:27:03.439689 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:27:03.439699 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:27:03.442225 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:27:03.442238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:03.442249 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Oct 13 05:27:03.442257 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 05:27:03.442265 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:27:03.442273 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:27:03.442281 systemd[1]: Reached target machines.target - Containers. Oct 13 05:27:03.442288 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:27:03.442296 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Oct 13 05:27:03.442305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:27:03.442313 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:27:03.442320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:27:03.442328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:27:03.442336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:27:03.442343 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:27:03.442351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:27:03.442360 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:27:03.442368 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:27:03.442375 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:27:03.442383 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:27:03.442390 systemd[1]: Stopped systemd-fsck-usr.service. 
Oct 13 05:27:03.442399 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:27:03.442407 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:27:03.442416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:27:03.442424 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:27:03.442431 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:27:03.442439 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:27:03.442447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:27:03.442455 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:03.442463 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 05:27:03.442471 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:27:03.442478 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:27:03.442486 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:27:03.442494 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:27:03.442501 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:27:03.442510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:27:03.442518 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:27:03.442525 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Oct 13 05:27:03.442533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:27:03.442541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:27:03.442548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:27:03.442556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:27:03.442564 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:27:03.442572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:27:03.442580 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:27:03.442587 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:27:03.442595 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:27:03.442602 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:27:03.442610 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 05:27:03.442619 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:27:03.442626 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:27:03.442634 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:27:03.442641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:27:03.442649 kernel: fuse: init (API version 7.41) Oct 13 05:27:03.442660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:27:03.442668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 13 05:27:03.442676 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:27:03.442685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:27:03.442693 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:27:03.442701 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:27:03.442709 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:27:03.442718 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:27:03.442726 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:27:03.442742 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:27:03.442770 systemd-journald[1292]: Collecting audit messages is disabled. Oct 13 05:27:03.442789 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:27:03.442798 systemd-journald[1292]: Journal started Oct 13 05:27:03.442813 systemd-journald[1292]: Runtime Journal (/run/log/journal/50e75ce917414b31b00f6155c075b74f) is 4.8M, max 38.8M, 34M free. Oct 13 05:27:03.451755 kernel: ACPI: bus type drm_connector registered Oct 13 05:27:03.449339 ignition[1318]: Ignition 2.22.0 Oct 13 05:27:03.206066 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:27:03.449535 ignition[1318]: deleting config from guestinfo properties Oct 13 05:27:03.215705 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 13 05:27:03.216000 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:27:03.452220 jq[1270]: true Oct 13 05:27:03.452913 jq[1309]: true Oct 13 05:27:03.468750 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:27:03.468795 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 13 05:27:03.469588 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:27:03.469726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:27:03.479361 kernel: loop1: detected capacity change from 0 to 110984 Oct 13 05:27:03.477157 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:27:03.478286 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:27:03.479255 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:27:03.482337 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:27:03.485113 ignition[1318]: Successfully deleted config Oct 13 05:27:03.486835 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Oct 13 05:27:03.500172 systemd-journald[1292]: Time spent on flushing to /var/log/journal/50e75ce917414b31b00f6155c075b74f is 41.584ms for 1761 entries. Oct 13 05:27:03.500172 systemd-journald[1292]: System Journal (/var/log/journal/50e75ce917414b31b00f6155c075b74f) is 8M, max 588.1M, 580.1M free. Oct 13 05:27:03.548828 systemd-journald[1292]: Received client request to flush runtime journal. Oct 13 05:27:03.510162 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:27:03.552160 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:27:03.570796 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:27:03.615768 kernel: loop2: detected capacity change from 0 to 128048 Oct 13 05:27:03.624635 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:27:03.626815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:27:03.629621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Oct 13 05:27:03.647869 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:27:03.661304 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Oct 13 05:27:03.661317 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Oct 13 05:27:03.665686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:27:03.672768 kernel: loop3: detected capacity change from 0 to 219144 Oct 13 05:27:03.682184 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:27:03.720780 kernel: loop4: detected capacity change from 0 to 2960 Oct 13 05:27:03.729093 systemd-resolved[1363]: Positive Trust Anchors: Oct 13 05:27:03.729106 systemd-resolved[1363]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:27:03.729109 systemd-resolved[1363]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 05:27:03.729131 systemd-resolved[1363]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:27:03.733761 systemd-resolved[1363]: Defaulting to hostname 'linux'. Oct 13 05:27:03.734661 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:27:03.734858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Oct 13 05:27:03.763754 kernel: loop5: detected capacity change from 0 to 110984 Oct 13 05:27:03.780756 kernel: loop6: detected capacity change from 0 to 128048 Oct 13 05:27:03.803004 kernel: loop7: detected capacity change from 0 to 219144 Oct 13 05:27:03.818760 kernel: loop1: detected capacity change from 0 to 2960 Oct 13 05:27:03.826829 (sd-merge)[1377]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-vmware.raw'. Oct 13 05:27:03.830012 (sd-merge)[1377]: Merged extensions into '/usr'. Oct 13 05:27:03.832819 systemd[1]: Reload requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:27:03.832896 systemd[1]: Reloading... Oct 13 05:27:03.887755 zram_generator::config[1404]: No configuration found. Oct 13 05:27:03.976694 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 13 05:27:04.025967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:27:04.026128 systemd[1]: Reloading finished in 192 ms. Oct 13 05:27:04.059239 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:27:04.065772 systemd[1]: Starting ensure-sysext.service... Oct 13 05:27:04.067802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:27:04.102393 systemd[1]: Reload requested from client PID 1460 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:27:04.195459 zram_generator::config[1491]: No configuration found. Oct 13 05:27:04.102404 systemd[1]: Reloading... Oct 13 05:27:04.117374 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:27:04.117391 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Oct 13 05:27:04.117572 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:27:04.117747 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:27:04.118343 systemd-tmpfiles[1461]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:27:04.118500 systemd-tmpfiles[1461]: ACLs are not supported, ignoring. Oct 13 05:27:04.118533 systemd-tmpfiles[1461]: ACLs are not supported, ignoring. Oct 13 05:27:04.194351 systemd-tmpfiles[1461]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:27:04.194355 systemd-tmpfiles[1461]: Skipping /boot Oct 13 05:27:04.200482 systemd-tmpfiles[1461]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:27:04.200492 systemd-tmpfiles[1461]: Skipping /boot Oct 13 05:27:04.227590 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 13 05:27:04.290772 systemd[1]: Reloading finished in 188 ms. Oct 13 05:27:04.299409 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:27:04.302999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:27:04.308820 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:27:04.310566 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:27:04.312218 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:27:04.317665 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:27:04.319086 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Oct 13 05:27:04.320897 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:27:04.326527 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:27:04.328094 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:27:04.329393 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:27:04.330867 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:27:04.335357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:04.340598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:27:04.344780 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:27:04.351016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:27:04.351975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:27:04.352069 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:27:04.352135 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:04.357924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:04.358056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 13 05:27:04.358143 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:27:04.358231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:04.362533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:04.367068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:27:04.367366 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:27:04.367431 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:27:04.367492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:27:04.368407 systemd[1]: Finished ensure-sysext.service. Oct 13 05:27:04.368758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:27:04.371027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:27:04.371707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:27:04.382989 systemd-udevd[1556]: Using default interface naming scheme 'v257'. Oct 13 05:27:04.385137 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:27:04.393007 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 13 05:27:04.393136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:27:04.393689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:27:04.395386 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:27:04.395521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:27:04.395719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:27:04.395928 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:27:04.396041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:27:04.405292 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:27:04.415711 augenrules[1594]: No rules Oct 13 05:27:04.415696 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:27:04.415916 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:27:04.432869 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:27:04.435041 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:27:04.460262 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 05:27:04.460534 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:27:04.464700 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:27:04.465037 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:27:04.514240 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Oct 13 05:27:04.526574 systemd-networkd[1601]: lo: Link UP Oct 13 05:27:04.527277 systemd-networkd[1601]: lo: Gained carrier Oct 13 05:27:04.528430 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:27:04.528600 systemd[1]: Reached target network.target - Network. Oct 13 05:27:04.530002 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:27:04.531307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:27:04.553632 systemd-networkd[1601]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Oct 13 05:27:04.557245 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Oct 13 05:27:04.557406 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Oct 13 05:27:04.559760 systemd-networkd[1601]: ens192: Link UP Oct 13 05:27:04.560639 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:27:04.560845 systemd-networkd[1601]: ens192: Gained carrier Oct 13 05:27:04.565000 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 13 05:27:04.565039 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:27:04.566328 systemd-timesyncd[1575]: Network configuration changed, trying to establish connection. Oct 13 05:27:04.574753 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:27:04.588823 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Oct 13 05:27:04.691158 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Oct 13 05:27:04.692914 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:27:04.713918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Oct 13 05:27:04.784901 (udev-worker)[1616]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Oct 13 05:27:04.789639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:27:04.937976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:27:05.029254 ldconfig[1553]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:27:05.030890 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:27:05.031994 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:27:05.048486 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:27:05.048716 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:27:05.048906 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:27:05.049040 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:27:05.049164 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:27:05.049353 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:27:05.049511 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:27:05.049632 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:27:05.049754 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:27:05.049771 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:27:05.049867 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:27:05.050630 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Oct 13 05:27:05.051642 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:27:05.053013 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:27:05.053202 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:27:05.053329 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:27:05.058825 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:27:05.059130 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:27:05.059647 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:27:05.060154 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:27:05.060258 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:27:05.060385 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:27:05.060402 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:27:05.061081 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:27:05.063803 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:27:05.064841 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:27:05.066773 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:27:05.067593 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 05:27:05.067708 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:27:05.069090 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Oct 13 05:27:05.072820 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:27:05.074290 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:27:05.080274 jq[1670]: false Oct 13 05:27:05.080807 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:27:05.084898 extend-filesystems[1671]: Found /dev/sda6 Oct 13 05:27:05.085347 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing passwd entry cache Oct 13 05:27:05.085132 oslogin_cache_refresh[1672]: Refreshing passwd entry cache Oct 13 05:27:05.085913 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:27:05.090326 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:27:05.090450 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:27:05.090801 extend-filesystems[1671]: Found /dev/sda9 Oct 13 05:27:05.091383 extend-filesystems[1671]: Checking size of /dev/sda9 Oct 13 05:27:05.093039 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:27:05.094589 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:27:05.098610 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:27:05.098710 extend-filesystems[1671]: Resized partition /dev/sda9 Oct 13 05:27:05.100367 extend-filesystems[1695]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:27:05.102851 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting users, quitting Oct 13 05:27:05.102847 oslogin_cache_refresh[1672]: Failure getting users, quitting Oct 13 05:27:05.102910 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 13 05:27:05.102910 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Refreshing group entry cache Oct 13 05:27:05.102859 oslogin_cache_refresh[1672]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:27:05.102881 oslogin_cache_refresh[1672]: Refreshing group entry cache Oct 13 05:27:05.106400 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 1635323 blocks Oct 13 05:27:05.106437 kernel: EXT4-fs (sda9): resized filesystem to 1635323 Oct 13 05:27:05.104816 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Oct 13 05:27:05.107146 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:27:05.107821 oslogin_cache_refresh[1672]: Failure getting groups, quitting Oct 13 05:27:05.108124 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Failure getting groups, quitting Oct 13 05:27:05.108124 google_oslogin_nss_cache[1672]: oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:27:05.107413 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:27:05.107828 oslogin_cache_refresh[1672]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:27:05.107544 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 05:27:05.107711 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:27:05.108526 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:27:05.108861 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:27:05.108983 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Oct 13 05:27:05.109445 extend-filesystems[1695]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 13 05:27:05.109445 extend-filesystems[1695]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:27:05.109445 extend-filesystems[1695]: The filesystem on /dev/sda9 is now 1635323 (4k) blocks long. Oct 13 05:27:05.116114 extend-filesystems[1671]: Resized filesystem in /dev/sda9 Oct 13 05:27:05.110104 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:27:05.110220 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:27:05.116432 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:27:05.117890 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:27:05.121005 update_engine[1691]: I20251013 05:27:05.120959 1691 main.cc:92] Flatcar Update Engine starting Oct 13 05:27:05.132036 jq[1692]: true Oct 13 05:27:05.140216 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Oct 13 05:27:05.142952 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Oct 13 05:27:05.150146 (ntainerd)[1712]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:27:05.155248 jq[1716]: true Oct 13 05:27:05.156211 dbus-daemon[1668]: [system] SELinux support is enabled Oct 13 05:27:05.161352 update_engine[1691]: I20251013 05:27:05.161252 1691 update_check_scheduler.cc:74] Next update check in 8m38s Oct 13 05:27:05.161359 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:27:05.164644 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Oct 13 05:27:05.164673 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:27:05.165452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:27:05.165469 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:27:05.165978 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:27:05.168287 tar[1700]: linux-amd64/LICENSE Oct 13 05:27:05.168417 tar[1700]: linux-amd64/helm Oct 13 05:27:05.193255 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:27:05.194663 unknown[1719]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Oct 13 05:27:05.196197 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Oct 13 05:27:05.196823 unknown[1719]: Core dump limit set to -1 Oct 13 05:27:05.240117 systemd-logind[1687]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:27:05.241014 bash[1745]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:27:05.241112 systemd-logind[1687]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:27:05.242036 systemd-logind[1687]: New seat seat0. Oct 13 05:27:05.242672 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:27:05.245615 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:27:05.245995 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Oct 13 05:27:05.357022 locksmithd[1729]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:27:05.506752 containerd[1712]: time="2025-10-13T05:27:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:27:05.509069 containerd[1712]: time="2025-10-13T05:27:05.509050177Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:27:05.517979 containerd[1712]: time="2025-10-13T05:27:05.517943790Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.759µs" Oct 13 05:27:05.517979 containerd[1712]: time="2025-10-13T05:27:05.517972930Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:27:05.518059 containerd[1712]: time="2025-10-13T05:27:05.517989688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:27:05.518228 containerd[1712]: time="2025-10-13T05:27:05.518110675Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:27:05.518228 containerd[1712]: time="2025-10-13T05:27:05.518125299Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:27:05.518228 containerd[1712]: time="2025-10-13T05:27:05.518146745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518228 containerd[1712]: time="2025-10-13T05:27:05.518186041Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518228 containerd[1712]: time="2025-10-13T05:27:05.518196291Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518366 containerd[1712]: time="2025-10-13T05:27:05.518349847Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518366 containerd[1712]: time="2025-10-13T05:27:05.518364132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518405 containerd[1712]: time="2025-10-13T05:27:05.518376078Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518405 containerd[1712]: time="2025-10-13T05:27:05.518381798Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518433 containerd[1712]: time="2025-10-13T05:27:05.518426316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518953 containerd[1712]: time="2025-10-13T05:27:05.518548298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518953 containerd[1712]: time="2025-10-13T05:27:05.518572573Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:27:05.518953 containerd[1712]: time="2025-10-13T05:27:05.518582901Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:27:05.518953 containerd[1712]: time="2025-10-13T05:27:05.518599764Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:27:05.519750 containerd[1712]: time="2025-10-13T05:27:05.519197896Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:27:05.519750 containerd[1712]: time="2025-10-13T05:27:05.519269109Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:27:05.539832 containerd[1712]: time="2025-10-13T05:27:05.539793685Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.539928966Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.539947037Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.539961532Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.539983002Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.539993819Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540004499Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540015710Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540026564Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540036425Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540045964Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540067838Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540178320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540197753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 13 05:27:05.540742 containerd[1712]: time="2025-10-13T05:27:05.540219503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540230275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540239599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540248189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540257086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540265888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540276209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540287395Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540296556Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540346605Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540358505Z" level=info msg="Start snapshots syncer"
Oct 13 05:27:05.541062 containerd[1712]: time="2025-10-13T05:27:05.540381420Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Oct 13 05:27:05.541308 containerd[1712]: time="2025-10-13T05:27:05.540592492Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Oct 13 05:27:05.541308 containerd[1712]: time="2025-10-13T05:27:05.540639396Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Oct 13 05:27:05.541422 containerd[1712]: time="2025-10-13T05:27:05.540697785Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Oct 13 05:27:05.541670 containerd[1712]: time="2025-10-13T05:27:05.541596181Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Oct 13 05:27:05.541670 containerd[1712]: time="2025-10-13T05:27:05.541621150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Oct 13 05:27:05.541670 containerd[1712]: time="2025-10-13T05:27:05.541632548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Oct 13 05:27:05.541670 containerd[1712]: time="2025-10-13T05:27:05.541642056Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Oct 13 05:27:05.541670 containerd[1712]: time="2025-10-13T05:27:05.541652070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Oct 13 05:27:05.541900 containerd[1712]: time="2025-10-13T05:27:05.541828006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Oct 13 05:27:05.541900 containerd[1712]: time="2025-10-13T05:27:05.541845202Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Oct 13 05:27:05.541900 containerd[1712]: time="2025-10-13T05:27:05.541868803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Oct 13 05:27:05.541900 containerd[1712]: time="2025-10-13T05:27:05.541885049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Oct 13 05:27:05.542022 containerd[1712]: time="2025-10-13T05:27:05.542010356Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Oct 13 05:27:05.542153 containerd[1712]: time="2025-10-13T05:27:05.542095163Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 05:27:05.542209 containerd[1712]: time="2025-10-13T05:27:05.542197424Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Oct 13 05:27:05.542297 containerd[1712]: time="2025-10-13T05:27:05.542284817Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 05:27:05.542605 containerd[1712]: time="2025-10-13T05:27:05.542590818Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Oct 13 05:27:05.542717 containerd[1712]: time="2025-10-13T05:27:05.542647311Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Oct 13 05:27:05.542717 containerd[1712]: time="2025-10-13T05:27:05.542661235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Oct 13 05:27:05.542839 containerd[1712]: time="2025-10-13T05:27:05.542825160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Oct 13 05:27:05.542944 containerd[1712]: time="2025-10-13T05:27:05.542933187Z" level=info msg="runtime interface created"
Oct 13 05:27:05.543073 containerd[1712]: time="2025-10-13T05:27:05.542979667Z" level=info msg="created NRI interface"
Oct 13 05:27:05.543073 containerd[1712]: time="2025-10-13T05:27:05.542994392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Oct 13 05:27:05.543073 containerd[1712]: time="2025-10-13T05:27:05.543007134Z" level=info msg="Connect containerd service"
Oct 13 05:27:05.543257 containerd[1712]: time="2025-10-13T05:27:05.543193549Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 13 05:27:05.544520 containerd[1712]: time="2025-10-13T05:27:05.544491069Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 13 05:27:05.571770 sshd_keygen[1714]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 13 05:27:05.581383 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 13 05:27:05.585001 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 13 05:27:05.597533 tar[1700]: linux-amd64/README.md
Oct 13 05:27:05.603322 systemd[1]: issuegen.service: Deactivated successfully.
Oct 13 05:27:05.603571 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 13 05:27:05.605868 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 13 05:27:05.619403 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 13 05:27:05.631359 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 13 05:27:05.632963 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 13 05:27:05.636293 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 13 05:27:05.636485 systemd[1]: Reached target getty.target - Login Prompts.
Oct 13 05:27:05.768106 containerd[1712]: time="2025-10-13T05:27:05.768044084Z" level=info msg="Start subscribing containerd event"
Oct 13 05:27:05.768106 containerd[1712]: time="2025-10-13T05:27:05.768075409Z" level=info msg="Start recovering state"
Oct 13 05:27:05.768241 containerd[1712]: time="2025-10-13T05:27:05.768132123Z" level=info msg="Start event monitor"
Oct 13 05:27:05.768241 containerd[1712]: time="2025-10-13T05:27:05.768140023Z" level=info msg="Start cni network conf syncer for default"
Oct 13 05:27:05.768241 containerd[1712]: time="2025-10-13T05:27:05.768143700Z" level=info msg="Start streaming server"
Oct 13 05:27:05.768241 containerd[1712]: time="2025-10-13T05:27:05.768149200Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Oct 13 05:27:05.768241 containerd[1712]: time="2025-10-13T05:27:05.768153275Z" level=info msg="runtime interface starting up..."
Oct 13 05:27:05.768241 containerd[1712]: time="2025-10-13T05:27:05.768156088Z" level=info msg="starting plugins..."
Oct 13 05:27:05.768241 containerd[1712]: time="2025-10-13T05:27:05.768163292Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Oct 13 05:27:05.768377 containerd[1712]: time="2025-10-13T05:27:05.768362984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 13 05:27:05.768450 containerd[1712]: time="2025-10-13T05:27:05.768441623Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 13 05:27:05.768526 containerd[1712]: time="2025-10-13T05:27:05.768518500Z" level=info msg="containerd successfully booted in 0.262140s"
Oct 13 05:27:05.768809 systemd[1]: Started containerd.service - containerd container runtime.
Oct 13 05:27:06.458895 systemd-networkd[1601]: ens192: Gained IPv6LL
Oct 13 05:27:06.459200 systemd-timesyncd[1575]: Network configuration changed, trying to establish connection.
Oct 13 05:27:06.460588 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 13 05:27:06.461036 systemd[1]: Reached target network-online.target - Network is Online.
Oct 13 05:27:06.462173 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Oct 13 05:27:06.463318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:27:06.472044 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 13 05:27:06.490082 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 13 05:27:06.513954 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 13 05:27:06.514208 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Oct 13 05:27:06.514785 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 13 05:27:07.350261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:27:07.351002 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 13 05:27:07.351171 systemd[1]: Startup finished in 2.455s (kernel) + 6.327s (initrd) + 4.653s (userspace) = 13.436s.
Oct 13 05:27:07.355736 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:27:07.411911 login[1835]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 13 05:27:07.417030 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 13 05:27:07.417639 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 13 05:27:07.426149 systemd-logind[1687]: New session 1 of user core.
Oct 13 05:27:07.433416 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 13 05:27:07.437014 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 13 05:27:07.448081 (systemd)[1877]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 13 05:27:07.449858 systemd-logind[1687]: New session c1 of user core.
Oct 13 05:27:07.565427 systemd[1877]: Queued start job for default target default.target.
Oct 13 05:27:07.571718 systemd[1877]: Created slice app.slice - User Application Slice.
Oct 13 05:27:07.571760 systemd[1877]: Reached target paths.target - Paths.
Oct 13 05:27:07.571790 systemd[1877]: Reached target timers.target - Timers.
Oct 13 05:27:07.572658 systemd[1877]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 13 05:27:07.584251 systemd[1877]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 13 05:27:07.584805 systemd[1877]: Reached target sockets.target - Sockets.
Oct 13 05:27:07.584835 systemd[1877]: Reached target basic.target - Basic System.
Oct 13 05:27:07.584859 systemd[1877]: Reached target default.target - Main User Target.
Oct 13 05:27:07.584875 systemd[1877]: Startup finished in 131ms.
Oct 13 05:27:07.585099 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 13 05:27:07.592884 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 13 05:27:07.709370 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 13 05:27:07.712416 systemd-logind[1687]: New session 2 of user core.
Oct 13 05:27:07.716809 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 13 05:27:07.838063 kubelet[1872]: E1013 05:27:07.838033 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:27:07.840073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:27:07.840250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:27:07.840738 systemd[1]: kubelet.service: Consumed 578ms CPU time, 257.2M memory peak.
Oct 13 05:27:08.329576 systemd-timesyncd[1575]: Network configuration changed, trying to establish connection.
Oct 13 05:27:18.090598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 13 05:27:18.092069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:27:18.543425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:27:18.546514 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:27:18.595484 kubelet[1923]: E1013 05:27:18.595457 1923 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:27:18.597905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:27:18.598009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:27:18.598268 systemd[1]: kubelet.service: Consumed 108ms CPU time, 110.5M memory peak.
Oct 13 05:27:28.848483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 13 05:27:28.849953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:27:29.199259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:27:29.201934 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:27:29.253623 kubelet[1938]: E1013 05:27:29.253575 1938 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:27:29.254958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:27:29.255041 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:27:29.255514 systemd[1]: kubelet.service: Consumed 95ms CPU time, 110.8M memory peak.
Oct 13 05:27:35.268917 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 13 05:27:35.270052 systemd[1]: Started sshd@0-139.178.70.110:22-139.178.89.65:56382.service - OpenSSH per-connection server daemon (139.178.89.65:56382).
Oct 13 05:27:35.324956 sshd[1946]: Accepted publickey for core from 139.178.89.65 port 56382 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:27:35.325686 sshd-session[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:35.328249 systemd-logind[1687]: New session 3 of user core.
Oct 13 05:27:35.338977 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 13 05:27:35.391866 systemd[1]: Started sshd@1-139.178.70.110:22-139.178.89.65:56394.service - OpenSSH per-connection server daemon (139.178.89.65:56394).
Oct 13 05:27:35.432391 sshd[1952]: Accepted publickey for core from 139.178.89.65 port 56394 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:27:35.432875 sshd-session[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:35.435499 systemd-logind[1687]: New session 4 of user core.
Oct 13 05:27:35.452278 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 13 05:27:35.501696 sshd[1955]: Connection closed by 139.178.89.65 port 56394
Oct 13 05:27:35.502448 sshd-session[1952]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:35.512026 systemd[1]: sshd@1-139.178.70.110:22-139.178.89.65:56394.service: Deactivated successfully.
Oct 13 05:27:35.512994 systemd[1]: session-4.scope: Deactivated successfully.
Oct 13 05:27:35.513503 systemd-logind[1687]: Session 4 logged out. Waiting for processes to exit.
Oct 13 05:27:35.514961 systemd[1]: Started sshd@2-139.178.70.110:22-139.178.89.65:56402.service - OpenSSH per-connection server daemon (139.178.89.65:56402).
Oct 13 05:27:35.515580 systemd-logind[1687]: Removed session 4.
Oct 13 05:27:35.565186 sshd[1961]: Accepted publickey for core from 139.178.89.65 port 56402 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:27:35.566039 sshd-session[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:35.569582 systemd-logind[1687]: New session 5 of user core.
Oct 13 05:27:35.574825 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 13 05:27:35.620337 sshd[1964]: Connection closed by 139.178.89.65 port 56402
Oct 13 05:27:35.620717 sshd-session[1961]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:35.633855 systemd[1]: sshd@2-139.178.70.110:22-139.178.89.65:56402.service: Deactivated successfully.
Oct 13 05:27:35.634708 systemd[1]: session-5.scope: Deactivated successfully.
Oct 13 05:27:35.635429 systemd-logind[1687]: Session 5 logged out. Waiting for processes to exit.
Oct 13 05:27:35.636357 systemd[1]: Started sshd@3-139.178.70.110:22-139.178.89.65:56410.service - OpenSSH per-connection server daemon (139.178.89.65:56410).
Oct 13 05:27:35.639172 systemd-logind[1687]: Removed session 5.
Oct 13 05:27:35.673134 sshd[1970]: Accepted publickey for core from 139.178.89.65 port 56410 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:27:35.673836 sshd-session[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:35.676427 systemd-logind[1687]: New session 6 of user core.
Oct 13 05:27:35.684809 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 13 05:27:35.733601 sshd[1973]: Connection closed by 139.178.89.65 port 56410
Oct 13 05:27:35.733959 sshd-session[1970]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:35.743600 systemd[1]: sshd@3-139.178.70.110:22-139.178.89.65:56410.service: Deactivated successfully.
Oct 13 05:27:35.744642 systemd[1]: session-6.scope: Deactivated successfully.
Oct 13 05:27:35.745396 systemd-logind[1687]: Session 6 logged out. Waiting for processes to exit.
Oct 13 05:27:35.746458 systemd-logind[1687]: Removed session 6.
Oct 13 05:27:35.747578 systemd[1]: Started sshd@4-139.178.70.110:22-139.178.89.65:56422.service - OpenSSH per-connection server daemon (139.178.89.65:56422).
Oct 13 05:27:35.790352 sshd[1979]: Accepted publickey for core from 139.178.89.65 port 56422 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:27:35.791031 sshd-session[1979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:35.793667 systemd-logind[1687]: New session 7 of user core.
Oct 13 05:27:35.799980 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 13 05:27:35.856159 sudo[1983]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 13 05:27:35.856806 sudo[1983]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:35.869060 sudo[1983]: pam_unix(sudo:session): session closed for user root
Oct 13 05:27:35.869945 sshd[1982]: Connection closed by 139.178.89.65 port 56422
Oct 13 05:27:35.870301 sshd-session[1979]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:35.879378 systemd[1]: sshd@4-139.178.70.110:22-139.178.89.65:56422.service: Deactivated successfully.
Oct 13 05:27:35.880463 systemd[1]: session-7.scope: Deactivated successfully.
Oct 13 05:27:35.881059 systemd-logind[1687]: Session 7 logged out. Waiting for processes to exit.
Oct 13 05:27:35.882025 systemd-logind[1687]: Removed session 7.
Oct 13 05:27:35.882991 systemd[1]: Started sshd@5-139.178.70.110:22-139.178.89.65:56432.service - OpenSSH per-connection server daemon (139.178.89.65:56432).
Oct 13 05:27:35.921968 sshd[1989]: Accepted publickey for core from 139.178.89.65 port 56432 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:27:35.922586 sshd-session[1989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:35.925068 systemd-logind[1687]: New session 8 of user core.
Oct 13 05:27:35.931811 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 13 05:27:35.979985 sudo[1994]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 13 05:27:35.980135 sudo[1994]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:35.982443 sudo[1994]: pam_unix(sudo:session): session closed for user root
Oct 13 05:27:35.986091 sudo[1993]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 13 05:27:35.986234 sudo[1993]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:35.992153 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 13 05:27:36.016248 augenrules[2016]: No rules
Oct 13 05:27:36.017078 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 13 05:27:36.017297 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 13 05:27:36.018041 sudo[1993]: pam_unix(sudo:session): session closed for user root
Oct 13 05:27:36.019747 sshd[1992]: Connection closed by 139.178.89.65 port 56432
Oct 13 05:27:36.018996 sshd-session[1989]: pam_unix(sshd:session): session closed for user core
Oct 13 05:27:36.027298 systemd[1]: sshd@5-139.178.70.110:22-139.178.89.65:56432.service: Deactivated successfully.
Oct 13 05:27:36.028282 systemd[1]: session-8.scope: Deactivated successfully.
Oct 13 05:27:36.029869 systemd-logind[1687]: Session 8 logged out. Waiting for processes to exit.
Oct 13 05:27:36.031088 systemd[1]: Started sshd@6-139.178.70.110:22-139.178.89.65:56438.service - OpenSSH per-connection server daemon (139.178.89.65:56438).
Oct 13 05:27:36.032855 systemd-logind[1687]: Removed session 8.
Oct 13 05:27:36.068247 sshd[2025]: Accepted publickey for core from 139.178.89.65 port 56438 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:27:36.068862 sshd-session[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:27:36.071424 systemd-logind[1687]: New session 9 of user core.
Oct 13 05:27:36.079863 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 13 05:27:36.128659 sudo[2029]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 13 05:27:36.129091 sudo[2029]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 13 05:27:36.479857 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 13 05:27:36.490916 (dockerd)[2048]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 13 05:27:36.738416 dockerd[2048]: time="2025-10-13T05:27:36.738016461Z" level=info msg="Starting up"
Oct 13 05:27:36.738934 dockerd[2048]: time="2025-10-13T05:27:36.738914139Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 13 05:27:36.745973 dockerd[2048]: time="2025-10-13T05:27:36.745944718Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 13 05:27:36.753365 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1949497371-merged.mount: Deactivated successfully.
Oct 13 05:27:36.770850 dockerd[2048]: time="2025-10-13T05:27:36.770829147Z" level=info msg="Loading containers: start."
Oct 13 05:27:36.777825 kernel: Initializing XFRM netlink socket
Oct 13 05:27:36.909001 systemd-timesyncd[1575]: Network configuration changed, trying to establish connection.
Oct 13 05:27:36.931350 systemd-networkd[1601]: docker0: Link UP
Oct 13 05:27:36.932338 dockerd[2048]: time="2025-10-13T05:27:36.932316408Z" level=info msg="Loading containers: done."
Oct 13 05:27:36.952240 dockerd[2048]: time="2025-10-13T05:27:36.952207699Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 13 05:27:36.952325 dockerd[2048]: time="2025-10-13T05:27:36.952264955Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 13 05:27:36.952325 dockerd[2048]: time="2025-10-13T05:27:36.952317641Z" level=info msg="Initializing buildkit"
Oct 13 05:27:36.985062 dockerd[2048]: time="2025-10-13T05:27:36.984946745Z" level=info msg="Completed buildkit initialization"
Oct 13 05:27:36.989954 dockerd[2048]: time="2025-10-13T05:27:36.989484947Z" level=info msg="Daemon has completed initialization"
Oct 13 05:27:36.989775 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 13 05:27:36.990472 dockerd[2048]: time="2025-10-13T05:27:36.990243023Z" level=info msg="API listen on /run/docker.sock"
Oct 13 05:29:14.037936 systemd-resolved[1363]: Clock change detected. Flushing caches.
Oct 13 05:29:14.038389 systemd-timesyncd[1575]: Contacted time server 72.14.186.59:123 (2.flatcar.pool.ntp.org).
Oct 13 05:29:14.038820 systemd-timesyncd[1575]: Initial clock synchronization to Mon 2025-10-13 05:29:14.037773 UTC.
Oct 13 05:29:14.658549 containerd[1712]: time="2025-10-13T05:29:14.658500424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Oct 13 05:29:14.767370 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3610624917-merged.mount: Deactivated successfully.
Oct 13 05:29:15.337569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828913335.mount: Deactivated successfully.
Oct 13 05:29:16.339510 containerd[1712]: time="2025-10-13T05:29:16.339483790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:16.340478 containerd[1712]: time="2025-10-13T05:29:16.340461839Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392"
Oct 13 05:29:16.341991 containerd[1712]: time="2025-10-13T05:29:16.341564065Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:16.342420 containerd[1712]: time="2025-10-13T05:29:16.342407252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:16.342808 containerd[1712]: time="2025-10-13T05:29:16.342727998Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.684187032s"
Oct 13 05:29:16.342857 containerd[1712]: time="2025-10-13T05:29:16.342849881Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Oct 13 05:29:16.343553 containerd[1712]: time="2025-10-13T05:29:16.343538320Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Oct 13 05:29:16.521557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 13 05:29:16.523031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 13 05:29:16.887030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 13 05:29:16.889641 (kubelet)[2324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 13 05:29:16.930605 kubelet[2324]: E1013 05:29:16.930568 2324 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 13 05:29:16.932483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 13 05:29:16.932671 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 13 05:29:16.933156 systemd[1]: kubelet.service: Consumed 105ms CPU time, 110.4M memory peak.
Oct 13 05:29:18.029919 containerd[1712]: time="2025-10-13T05:29:18.029508525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:18.036714 containerd[1712]: time="2025-10-13T05:29:18.036693086Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757"
Oct 13 05:29:18.040009 containerd[1712]: time="2025-10-13T05:29:18.039958761Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:18.045904 containerd[1712]: time="2025-10-13T05:29:18.045433452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:18.045965 containerd[1712]: time="2025-10-13T05:29:18.045953482Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.702319979s"
Oct 13 05:29:18.046007 containerd[1712]: time="2025-10-13T05:29:18.045999327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Oct 13 05:29:18.046290 containerd[1712]: time="2025-10-13T05:29:18.046276212Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Oct 13 05:29:19.108679 containerd[1712]: time="2025-10-13T05:29:19.108639832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:19.109423 containerd[1712]: time="2025-10-13T05:29:19.109402073Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093"
Oct 13 05:29:19.109699 containerd[1712]: time="2025-10-13T05:29:19.109683554Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:19.111490 containerd[1712]: time="2025-10-13T05:29:19.111471328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:19.112003 containerd[1712]: time="2025-10-13T05:29:19.111926279Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.065465159s"
Oct 13 05:29:19.112003 containerd[1712]: time="2025-10-13T05:29:19.111946078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Oct 13 05:29:19.112339 containerd[1712]: time="2025-10-13T05:29:19.112318299Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Oct 13 05:29:20.162059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010054505.mount: Deactivated successfully.
Oct 13 05:29:20.613020 containerd[1712]: time="2025-10-13T05:29:20.612944814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:20.622953 containerd[1712]: time="2025-10-13T05:29:20.622930509Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 13 05:29:20.630189 containerd[1712]: time="2025-10-13T05:29:20.630156089Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:20.640445 containerd[1712]: time="2025-10-13T05:29:20.640401995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:20.640917 containerd[1712]: time="2025-10-13T05:29:20.640798159Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.528435324s" Oct 13 05:29:20.640917 containerd[1712]: time="2025-10-13T05:29:20.640820480Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 13 05:29:20.641160 containerd[1712]: time="2025-10-13T05:29:20.641141984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 05:29:21.300947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3384776050.mount: Deactivated successfully. 
Oct 13 05:29:22.161000 containerd[1712]: time="2025-10-13T05:29:22.160963445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:22.161813 containerd[1712]: time="2025-10-13T05:29:22.161790338Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 13 05:29:22.162004 containerd[1712]: time="2025-10-13T05:29:22.161990451Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:22.163964 containerd[1712]: time="2025-10-13T05:29:22.163944703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:22.166771 containerd[1712]: time="2025-10-13T05:29:22.166739883Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.525557175s" Oct 13 05:29:22.166771 containerd[1712]: time="2025-10-13T05:29:22.166767700Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 13 05:29:22.168268 containerd[1712]: time="2025-10-13T05:29:22.168159774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 05:29:22.773208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729398892.mount: Deactivated successfully. 
Oct 13 05:29:22.909908 containerd[1712]: time="2025-10-13T05:29:22.909779907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:22.913188 containerd[1712]: time="2025-10-13T05:29:22.913148381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 13 05:29:22.918716 containerd[1712]: time="2025-10-13T05:29:22.918672910Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:22.921084 containerd[1712]: time="2025-10-13T05:29:22.920526344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:22.921218 containerd[1712]: time="2025-10-13T05:29:22.921190125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 753.006388ms" Oct 13 05:29:22.921258 containerd[1712]: time="2025-10-13T05:29:22.921219144Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 13 05:29:22.921823 containerd[1712]: time="2025-10-13T05:29:22.921784946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 05:29:25.650597 containerd[1712]: time="2025-10-13T05:29:25.650031597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:25.651116 containerd[1712]: 
time="2025-10-13T05:29:25.651088198Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 13 05:29:25.651429 containerd[1712]: time="2025-10-13T05:29:25.651417747Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:25.656317 containerd[1712]: time="2025-10-13T05:29:25.656295203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:25.656810 containerd[1712]: time="2025-10-13T05:29:25.656789854Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.73434118s" Oct 13 05:29:25.656843 containerd[1712]: time="2025-10-13T05:29:25.656812069Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 13 05:29:26.944062 update_engine[1691]: I20251013 05:29:26.944001 1691 update_attempter.cc:509] Updating boot flags... Oct 13 05:29:26.950345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 13 05:29:26.952010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:29:27.214246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 05:29:27.220053 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:29:27.368451 kubelet[2490]: E1013 05:29:27.368423 2490 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:29:27.369776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:29:27.369860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:29:27.370242 systemd[1]: kubelet.service: Consumed 97ms CPU time, 110.7M memory peak. Oct 13 05:29:28.150533 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:29:28.150746 systemd[1]: kubelet.service: Consumed 97ms CPU time, 110.7M memory peak. Oct 13 05:29:28.152695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:29:28.168431 systemd[1]: Reload requested from client PID 2504 ('systemctl') (unit session-9.scope)... Oct 13 05:29:28.168441 systemd[1]: Reloading... Oct 13 05:29:28.236913 zram_generator::config[2550]: No configuration found. Oct 13 05:29:28.307795 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 13 05:29:28.376786 systemd[1]: Reloading finished in 208 ms. Oct 13 05:29:28.406016 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:29:28.406065 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:29:28.406310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 05:29:28.408167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:29:28.769836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:29:28.777071 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:29:28.828534 kubelet[2616]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:29:28.828534 kubelet[2616]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:29:28.841339 kubelet[2616]: I1013 05:29:28.841029 2616 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:29:29.177912 kubelet[2616]: I1013 05:29:29.177529 2616 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:29:29.177912 kubelet[2616]: I1013 05:29:29.177546 2616 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:29:29.177912 kubelet[2616]: I1013 05:29:29.177563 2616 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:29:29.177912 kubelet[2616]: I1013 05:29:29.177569 2616 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 05:29:29.177912 kubelet[2616]: I1013 05:29:29.177749 2616 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:29:29.191330 kubelet[2616]: I1013 05:29:29.191137 2616 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:29:29.192913 kubelet[2616]: E1013 05:29:29.192193 2616 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:29:29.207912 kubelet[2616]: I1013 05:29:29.207888 2616 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:29:29.248454 kubelet[2616]: I1013 05:29:29.248290 2616 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:29:29.257346 kubelet[2616]: I1013 05:29:29.257329 2616 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:29:29.267639 kubelet[2616]: I1013 05:29:29.257386 2616 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:29:29.267908 kubelet[2616]: I1013 05:29:29.267748 2616 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:29:29.267908 
kubelet[2616]: I1013 05:29:29.267758 2616 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:29:29.267908 kubelet[2616]: I1013 05:29:29.267809 2616 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:29:29.303443 kubelet[2616]: I1013 05:29:29.303433 2616 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:29:29.303613 kubelet[2616]: I1013 05:29:29.303606 2616 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:29:29.303652 kubelet[2616]: I1013 05:29:29.303647 2616 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:29:29.303712 kubelet[2616]: I1013 05:29:29.303707 2616 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:29:29.303749 kubelet[2616]: I1013 05:29:29.303744 2616 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:29:29.308531 kubelet[2616]: E1013 05:29:29.308512 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:29:29.320840 kubelet[2616]: E1013 05:29:29.320760 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:29:29.321175 kubelet[2616]: I1013 05:29:29.321090 2616 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:29:29.343338 kubelet[2616]: I1013 05:29:29.343242 2616 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:29:29.343338 kubelet[2616]: I1013 05:29:29.343264 2616 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:29:29.366908 kubelet[2616]: W1013 05:29:29.366518 2616 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:29:29.387025 kubelet[2616]: I1013 05:29:29.387012 2616 server.go:1262] "Started kubelet" Oct 13 05:29:29.387586 kubelet[2616]: I1013 05:29:29.387577 2616 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:29:29.401043 kubelet[2616]: I1013 05:29:29.401021 2616 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:29:29.430731 kubelet[2616]: I1013 05:29:29.430652 2616 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:29:29.430844 kubelet[2616]: I1013 05:29:29.430833 2616 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:29:29.431362 kubelet[2616]: I1013 05:29:29.431059 2616 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:29:29.442546 kubelet[2616]: I1013 05:29:29.430847 2616 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:29:29.443154 kubelet[2616]: E1013 05:29:29.431065 2616 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df5df7093d7c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:29:29.386981318 +0000 UTC m=+0.608160886,LastTimestamp:2025-10-13 05:29:29.386981318 +0000 UTC m=+0.608160886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:29:29.445116 kubelet[2616]: I1013 05:29:29.445104 2616 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:29:29.451010 kubelet[2616]: E1013 05:29:29.450832 2616 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:29:29.455453 kubelet[2616]: I1013 05:29:29.455439 2616 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:29:29.456976 kubelet[2616]: I1013 05:29:29.456962 2616 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:29:29.457010 kubelet[2616]: I1013 05:29:29.456990 2616 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:29:29.474056 kubelet[2616]: I1013 05:29:29.473981 2616 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:29:29.474056 kubelet[2616]: I1013 05:29:29.474029 2616 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:29:29.474648 kubelet[2616]: E1013 05:29:29.474636 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:29:29.480955 kubelet[2616]: I1013 05:29:29.479751 2616 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:29:29.483231 kubelet[2616]: E1013 05:29:29.483217 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="200ms" Oct 13 05:29:29.503183 kubelet[2616]: I1013 05:29:29.503165 2616 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:29:29.503976 kubelet[2616]: I1013 05:29:29.503968 2616 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 13 05:29:29.504027 kubelet[2616]: I1013 05:29:29.504021 2616 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:29:29.504069 kubelet[2616]: I1013 05:29:29.504064 2616 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:29:29.504120 kubelet[2616]: E1013 05:29:29.504111 2616 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:29:29.506014 kubelet[2616]: E1013 05:29:29.505999 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:29:29.506276 kubelet[2616]: I1013 05:29:29.506268 2616 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:29:29.506345 kubelet[2616]: I1013 05:29:29.506338 2616 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:29:29.506384 
kubelet[2616]: I1013 05:29:29.506380 2616 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:29:29.551008 kubelet[2616]: E1013 05:29:29.550990 2616 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:29:29.591335 kubelet[2616]: I1013 05:29:29.591268 2616 policy_none.go:49] "None policy: Start" Oct 13 05:29:29.596136 kubelet[2616]: I1013 05:29:29.596118 2616 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:29:29.596180 kubelet[2616]: I1013 05:29:29.596140 2616 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:29:29.596612 kubelet[2616]: I1013 05:29:29.596597 2616 policy_none.go:47] "Start" Oct 13 05:29:29.601541 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:29:29.605187 kubelet[2616]: E1013 05:29:29.605164 2616 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 13 05:29:29.610596 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:29:29.613954 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 13 05:29:29.620635 kubelet[2616]: E1013 05:29:29.620613 2616 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:29:29.624214 kubelet[2616]: I1013 05:29:29.624197 2616 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:29:29.624284 kubelet[2616]: I1013 05:29:29.624213 2616 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:29:29.624430 kubelet[2616]: I1013 05:29:29.624388 2616 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:29:29.625202 kubelet[2616]: E1013 05:29:29.625190 2616 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:29:29.625277 kubelet[2616]: E1013 05:29:29.625239 2616 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:29:29.684002 kubelet[2616]: E1013 05:29:29.683919 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="400ms" Oct 13 05:29:29.726082 kubelet[2616]: I1013 05:29:29.726034 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:29:29.726402 kubelet[2616]: E1013 05:29:29.726383 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 13 05:29:29.815027 systemd[1]: Created slice kubepods-burstable-pod74359763a909b23f4cf380b8e14482b2.slice - libcontainer container kubepods-burstable-pod74359763a909b23f4cf380b8e14482b2.slice. 
Oct 13 05:29:29.822467 kubelet[2616]: E1013 05:29:29.822455 2616 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:29.824412 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 13 05:29:29.832798 kubelet[2616]: E1013 05:29:29.832701 2616 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:29.835152 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 13 05:29:29.836329 kubelet[2616]: E1013 05:29:29.836319 2616 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:29.928315 kubelet[2616]: I1013 05:29:29.928265 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:29:29.928493 kubelet[2616]: E1013 05:29:29.928477 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 13 05:29:29.958033 kubelet[2616]: I1013 05:29:29.957921 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:29.958033 kubelet[2616]: I1013 05:29:29.957967 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:29.958033 kubelet[2616]: I1013 05:29:29.957984 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:29:29.958033 kubelet[2616]: I1013 05:29:29.957994 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74359763a909b23f4cf380b8e14482b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74359763a909b23f4cf380b8e14482b2\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:29.958033 kubelet[2616]: I1013 05:29:29.958002 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74359763a909b23f4cf380b8e14482b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74359763a909b23f4cf380b8e14482b2\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:29.958210 kubelet[2616]: I1013 05:29:29.958018 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:29.958210 kubelet[2616]: I1013 05:29:29.958036 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:29.958210 kubelet[2616]: I1013 05:29:29.958058 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:29.958210 kubelet[2616]: I1013 05:29:29.958076 2616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74359763a909b23f4cf380b8e14482b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74359763a909b23f4cf380b8e14482b2\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:30.084501 kubelet[2616]: E1013 05:29:30.084470 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="800ms" Oct 13 05:29:30.125908 containerd[1712]: time="2025-10-13T05:29:30.125693056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74359763a909b23f4cf380b8e14482b2,Namespace:kube-system,Attempt:0,}" Oct 13 05:29:30.134818 containerd[1712]: time="2025-10-13T05:29:30.134796465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:29:30.137612 containerd[1712]: time="2025-10-13T05:29:30.137584944Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 05:29:30.319289 kubelet[2616]: E1013 05:29:30.319223 2616 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df5df7093d7c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:29:29.386981318 +0000 UTC m=+0.608160886,LastTimestamp:2025-10-13 05:29:29.386981318 +0000 UTC m=+0.608160886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:29:30.330165 kubelet[2616]: I1013 05:29:30.330150 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:29:30.330518 kubelet[2616]: E1013 05:29:30.330322 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 13 05:29:30.542116 kubelet[2616]: E1013 05:29:30.542089 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:29:30.557698 kubelet[2616]: E1013 05:29:30.557673 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:29:30.581238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375593035.mount: Deactivated successfully. Oct 13 05:29:30.601934 containerd[1712]: time="2025-10-13T05:29:30.601908957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:29:30.615921 containerd[1712]: time="2025-10-13T05:29:30.615884795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:29:30.620839 containerd[1712]: time="2025-10-13T05:29:30.620821734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:29:30.630939 containerd[1712]: time="2025-10-13T05:29:30.630917442Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:29:30.635912 containerd[1712]: time="2025-10-13T05:29:30.635785598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:29:30.638003 containerd[1712]: time="2025-10-13T05:29:30.637991590Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:29:30.641432 containerd[1712]: time="2025-10-13T05:29:30.641419231Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:29:30.641831 containerd[1712]: time="2025-10-13T05:29:30.641637059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.926891ms" Oct 13 05:29:30.643494 containerd[1712]: time="2025-10-13T05:29:30.643478285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:29:30.648912 containerd[1712]: time="2025-10-13T05:29:30.648693965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 509.177919ms" Oct 13 05:29:30.655196 containerd[1712]: time="2025-10-13T05:29:30.655148432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 528.09977ms" Oct 13 05:29:30.674596 kubelet[2616]: E1013 05:29:30.674224 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.70.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:29:30.726002 containerd[1712]: time="2025-10-13T05:29:30.725852428Z" level=info msg="connecting to shim a0041d91a3295e666578e0ec502b45362013b996c62fdbb728961a6db860ef35" address="unix:///run/containerd/s/905f1db797a3aaf990d741c376589a57ffddcf14372e1bc604e85988145e62f3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:29:30.726224 containerd[1712]: time="2025-10-13T05:29:30.726210253Z" level=info msg="connecting to shim 5a43c8b1c7d41236bee7722fb65850b1794399d0b87d586a7d268e9bd8855efa" address="unix:///run/containerd/s/896831b10285cb6371d2a8665a585cb014af51fa4e245982419572fd683d3594" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:29:30.727902 containerd[1712]: time="2025-10-13T05:29:30.727877729Z" level=info msg="connecting to shim ad9d175440ee62121fd5f8f3d68056d0164d269dd3c0d7987e20b6856e5b19d9" address="unix:///run/containerd/s/07ac44b1a3da9d909a47759d207df1b1777f400c10fe91338256601f6c07ae7a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:29:30.800127 systemd[1]: Started cri-containerd-5a43c8b1c7d41236bee7722fb65850b1794399d0b87d586a7d268e9bd8855efa.scope - libcontainer container 5a43c8b1c7d41236bee7722fb65850b1794399d0b87d586a7d268e9bd8855efa. Oct 13 05:29:30.801603 systemd[1]: Started cri-containerd-a0041d91a3295e666578e0ec502b45362013b996c62fdbb728961a6db860ef35.scope - libcontainer container a0041d91a3295e666578e0ec502b45362013b996c62fdbb728961a6db860ef35. Oct 13 05:29:30.803304 systemd[1]: Started cri-containerd-ad9d175440ee62121fd5f8f3d68056d0164d269dd3c0d7987e20b6856e5b19d9.scope - libcontainer container ad9d175440ee62121fd5f8f3d68056d0164d269dd3c0d7987e20b6856e5b19d9. 
Oct 13 05:29:30.896754 kubelet[2616]: E1013 05:29:30.885140 2616 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="1.6s" Oct 13 05:29:30.928790 kubelet[2616]: E1013 05:29:30.928764 2616 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:29:30.961651 containerd[1712]: time="2025-10-13T05:29:30.961627078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad9d175440ee62121fd5f8f3d68056d0164d269dd3c0d7987e20b6856e5b19d9\"" Oct 13 05:29:30.961848 containerd[1712]: time="2025-10-13T05:29:30.961641663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0041d91a3295e666578e0ec502b45362013b996c62fdbb728961a6db860ef35\"" Oct 13 05:29:31.030697 containerd[1712]: time="2025-10-13T05:29:31.030540137Z" level=info msg="CreateContainer within sandbox \"ad9d175440ee62121fd5f8f3d68056d0164d269dd3c0d7987e20b6856e5b19d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:29:31.039748 containerd[1712]: time="2025-10-13T05:29:31.039585160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:74359763a909b23f4cf380b8e14482b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a43c8b1c7d41236bee7722fb65850b1794399d0b87d586a7d268e9bd8855efa\"" Oct 13 05:29:31.059937 
containerd[1712]: time="2025-10-13T05:29:31.059918351Z" level=info msg="CreateContainer within sandbox \"5a43c8b1c7d41236bee7722fb65850b1794399d0b87d586a7d268e9bd8855efa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:29:31.060095 containerd[1712]: time="2025-10-13T05:29:31.060081585Z" level=info msg="CreateContainer within sandbox \"a0041d91a3295e666578e0ec502b45362013b996c62fdbb728961a6db860ef35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:29:31.079232 containerd[1712]: time="2025-10-13T05:29:31.079206340Z" level=info msg="Container 2a6adb9c11d835574283ad7a3d25e01503515f7dd496b01dd81d126272460c07: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:29:31.081627 containerd[1712]: time="2025-10-13T05:29:31.081605453Z" level=info msg="Container 0594c228bf50e4c6509ca311636cc2be13a4d9bd15cf1499b8dc1327577c5263: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:29:31.082241 containerd[1712]: time="2025-10-13T05:29:31.082221404Z" level=info msg="Container 467e5aef1c5444b5923209549ce7c7ffaa87421313f0eb831e874fa0cd667a36: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:29:31.091625 containerd[1712]: time="2025-10-13T05:29:31.091597636Z" level=info msg="CreateContainer within sandbox \"ad9d175440ee62121fd5f8f3d68056d0164d269dd3c0d7987e20b6856e5b19d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2a6adb9c11d835574283ad7a3d25e01503515f7dd496b01dd81d126272460c07\"" Oct 13 05:29:31.092470 containerd[1712]: time="2025-10-13T05:29:31.092298343Z" level=info msg="StartContainer for \"2a6adb9c11d835574283ad7a3d25e01503515f7dd496b01dd81d126272460c07\"" Oct 13 05:29:31.092470 containerd[1712]: time="2025-10-13T05:29:31.092364715Z" level=info msg="CreateContainer within sandbox \"5a43c8b1c7d41236bee7722fb65850b1794399d0b87d586a7d268e9bd8855efa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"467e5aef1c5444b5923209549ce7c7ffaa87421313f0eb831e874fa0cd667a36\"" Oct 13 05:29:31.092583 containerd[1712]: time="2025-10-13T05:29:31.092557285Z" level=info msg="StartContainer for \"467e5aef1c5444b5923209549ce7c7ffaa87421313f0eb831e874fa0cd667a36\"" Oct 13 05:29:31.093480 containerd[1712]: time="2025-10-13T05:29:31.093465198Z" level=info msg="connecting to shim 2a6adb9c11d835574283ad7a3d25e01503515f7dd496b01dd81d126272460c07" address="unix:///run/containerd/s/07ac44b1a3da9d909a47759d207df1b1777f400c10fe91338256601f6c07ae7a" protocol=ttrpc version=3 Oct 13 05:29:31.093960 containerd[1712]: time="2025-10-13T05:29:31.093494039Z" level=info msg="connecting to shim 467e5aef1c5444b5923209549ce7c7ffaa87421313f0eb831e874fa0cd667a36" address="unix:///run/containerd/s/896831b10285cb6371d2a8665a585cb014af51fa4e245982419572fd683d3594" protocol=ttrpc version=3 Oct 13 05:29:31.094514 containerd[1712]: time="2025-10-13T05:29:31.094496950Z" level=info msg="CreateContainer within sandbox \"a0041d91a3295e666578e0ec502b45362013b996c62fdbb728961a6db860ef35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0594c228bf50e4c6509ca311636cc2be13a4d9bd15cf1499b8dc1327577c5263\"" Oct 13 05:29:31.094731 containerd[1712]: time="2025-10-13T05:29:31.094719724Z" level=info msg="StartContainer for \"0594c228bf50e4c6509ca311636cc2be13a4d9bd15cf1499b8dc1327577c5263\"" Oct 13 05:29:31.095668 containerd[1712]: time="2025-10-13T05:29:31.095613366Z" level=info msg="connecting to shim 0594c228bf50e4c6509ca311636cc2be13a4d9bd15cf1499b8dc1327577c5263" address="unix:///run/containerd/s/905f1db797a3aaf990d741c376589a57ffddcf14372e1bc604e85988145e62f3" protocol=ttrpc version=3 Oct 13 05:29:31.111183 systemd[1]: Started cri-containerd-2a6adb9c11d835574283ad7a3d25e01503515f7dd496b01dd81d126272460c07.scope - libcontainer container 2a6adb9c11d835574283ad7a3d25e01503515f7dd496b01dd81d126272460c07. 
Oct 13 05:29:31.126030 systemd[1]: Started cri-containerd-0594c228bf50e4c6509ca311636cc2be13a4d9bd15cf1499b8dc1327577c5263.scope - libcontainer container 0594c228bf50e4c6509ca311636cc2be13a4d9bd15cf1499b8dc1327577c5263. Oct 13 05:29:31.131838 kubelet[2616]: I1013 05:29:31.131817 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:29:31.132636 kubelet[2616]: E1013 05:29:31.132605 2616 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost" Oct 13 05:29:31.134034 systemd[1]: Started cri-containerd-467e5aef1c5444b5923209549ce7c7ffaa87421313f0eb831e874fa0cd667a36.scope - libcontainer container 467e5aef1c5444b5923209549ce7c7ffaa87421313f0eb831e874fa0cd667a36. Oct 13 05:29:31.172825 containerd[1712]: time="2025-10-13T05:29:31.172758708Z" level=info msg="StartContainer for \"2a6adb9c11d835574283ad7a3d25e01503515f7dd496b01dd81d126272460c07\" returns successfully" Oct 13 05:29:31.199786 containerd[1712]: time="2025-10-13T05:29:31.199756668Z" level=info msg="StartContainer for \"467e5aef1c5444b5923209549ce7c7ffaa87421313f0eb831e874fa0cd667a36\" returns successfully" Oct 13 05:29:31.211033 containerd[1712]: time="2025-10-13T05:29:31.211011755Z" level=info msg="StartContainer for \"0594c228bf50e4c6509ca311636cc2be13a4d9bd15cf1499b8dc1327577c5263\" returns successfully" Oct 13 05:29:31.309779 kubelet[2616]: E1013 05:29:31.309753 2616 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.110:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:29:31.525652 kubelet[2616]: E1013 05:29:31.525596 2616 kubelet.go:3215] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:31.528017 kubelet[2616]: E1013 05:29:31.528007 2616 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:31.529515 kubelet[2616]: E1013 05:29:31.529506 2616 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:32.529185 kubelet[2616]: E1013 05:29:32.529161 2616 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:32.529929 kubelet[2616]: E1013 05:29:32.529827 2616 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:29:32.735087 kubelet[2616]: I1013 05:29:32.735066 2616 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:29:33.004639 kubelet[2616]: E1013 05:29:33.003019 2616 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:29:33.094427 kubelet[2616]: I1013 05:29:33.093885 2616 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:29:33.094427 kubelet[2616]: E1013 05:29:33.093912 2616 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:29:33.151196 kubelet[2616]: I1013 05:29:33.151172 2616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:33.155343 kubelet[2616]: E1013 05:29:33.154991 2616 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:33.155343 kubelet[2616]: I1013 05:29:33.155005 2616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:33.156215 kubelet[2616]: E1013 05:29:33.156198 2616 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:33.156252 kubelet[2616]: I1013 05:29:33.156222 2616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:29:33.157077 kubelet[2616]: E1013 05:29:33.157060 2616 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:29:33.392394 kubelet[2616]: I1013 05:29:33.392364 2616 apiserver.go:52] "Watching apiserver" Oct 13 05:29:33.465310 kubelet[2616]: I1013 05:29:33.465266 2616 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:29:34.753282 kubelet[2616]: I1013 05:29:34.753033 2616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:34.851405 systemd[1]: Reload requested from client PID 2898 ('systemctl') (unit session-9.scope)... Oct 13 05:29:34.851425 systemd[1]: Reloading... Oct 13 05:29:34.929931 zram_generator::config[2952]: No configuration found. Oct 13 05:29:35.002768 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Oct 13 05:29:35.083419 systemd[1]: Reloading finished in 231 ms. 
Oct 13 05:29:35.102776 kubelet[2616]: I1013 05:29:35.102755 2616 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:29:35.104103 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:29:35.116224 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:29:35.116508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:29:35.116608 systemd[1]: kubelet.service: Consumed 651ms CPU time, 122.4M memory peak. Oct 13 05:29:35.118445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:29:35.443273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:29:35.454111 (kubelet)[3010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:29:35.523757 kubelet[3010]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:29:35.524545 kubelet[3010]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:29:35.524632 kubelet[3010]: I1013 05:29:35.524616 3010 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:29:35.557305 kubelet[3010]: I1013 05:29:35.557284 3010 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:29:35.557401 kubelet[3010]: I1013 05:29:35.557390 3010 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:29:35.557452 kubelet[3010]: I1013 05:29:35.557446 3010 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:29:35.557487 kubelet[3010]: I1013 05:29:35.557481 3010 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:29:35.557658 kubelet[3010]: I1013 05:29:35.557650 3010 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:29:35.558454 kubelet[3010]: I1013 05:29:35.558445 3010 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:29:35.593198 kubelet[3010]: I1013 05:29:35.593180 3010 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:29:35.683257 kubelet[3010]: I1013 05:29:35.683235 3010 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:29:35.685745 kubelet[3010]: I1013 05:29:35.685536 3010 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:29:35.709150 kubelet[3010]: I1013 05:29:35.709087 3010 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:29:35.709555 kubelet[3010]: I1013 05:29:35.709438 3010 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:29:35.710390 kubelet[3010]: I1013 05:29:35.709744 3010 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:29:35.710390 
kubelet[3010]: I1013 05:29:35.709757 3010 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:29:35.710390 kubelet[3010]: I1013 05:29:35.709777 3010 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:29:35.716856 kubelet[3010]: I1013 05:29:35.716845 3010 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:29:35.735856 kubelet[3010]: I1013 05:29:35.735837 3010 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:29:35.735927 kubelet[3010]: I1013 05:29:35.735864 3010 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:29:35.735927 kubelet[3010]: I1013 05:29:35.735886 3010 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:29:35.735927 kubelet[3010]: I1013 05:29:35.735912 3010 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:29:35.750392 kubelet[3010]: I1013 05:29:35.750306 3010 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:29:35.750611 kubelet[3010]: I1013 05:29:35.750601 3010 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:29:35.750633 kubelet[3010]: I1013 05:29:35.750621 3010 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:29:35.758800 kubelet[3010]: I1013 05:29:35.758780 3010 server.go:1262] "Started kubelet" Oct 13 05:29:35.758938 kubelet[3010]: I1013 05:29:35.758920 3010 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:29:35.759508 kubelet[3010]: I1013 05:29:35.759497 3010 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:29:35.761225 kubelet[3010]: I1013 05:29:35.761201 3010 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Oct 13 05:29:35.761265 kubelet[3010]: I1013 05:29:35.761233 3010 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:29:35.761366 kubelet[3010]: I1013 05:29:35.761353 3010 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:29:35.764230 kubelet[3010]: I1013 05:29:35.763560 3010 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:29:35.765059 kubelet[3010]: I1013 05:29:35.765047 3010 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:29:35.767533 kubelet[3010]: I1013 05:29:35.767520 3010 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:29:35.767917 kubelet[3010]: I1013 05:29:35.767588 3010 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:29:35.767917 kubelet[3010]: I1013 05:29:35.767647 3010 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:29:35.770504 kubelet[3010]: E1013 05:29:35.770488 3010 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:29:35.771269 kubelet[3010]: I1013 05:29:35.771257 3010 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:29:35.771269 kubelet[3010]: I1013 05:29:35.771267 3010 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:29:35.771447 kubelet[3010]: I1013 05:29:35.771316 3010 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:29:35.773905 kubelet[3010]: I1013 05:29:35.773871 3010 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Oct 13 05:29:35.775104 kubelet[3010]: I1013 05:29:35.774968 3010 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 13 05:29:35.775104 kubelet[3010]: I1013 05:29:35.774979 3010 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:29:35.775104 kubelet[3010]: I1013 05:29:35.774994 3010 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:29:35.775104 kubelet[3010]: E1013 05:29:35.775016 3010 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:29:35.822672 kubelet[3010]: I1013 05:29:35.822654 3010 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:29:35.822933 kubelet[3010]: I1013 05:29:35.822833 3010 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:29:35.822933 kubelet[3010]: I1013 05:29:35.822853 3010 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:29:35.823390 kubelet[3010]: I1013 05:29:35.823381 3010 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:29:35.823916 kubelet[3010]: I1013 05:29:35.823449 3010 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:29:35.823916 kubelet[3010]: I1013 05:29:35.823472 3010 policy_none.go:49] "None policy: Start" Oct 13 05:29:35.823916 kubelet[3010]: I1013 05:29:35.823478 3010 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:29:35.823916 kubelet[3010]: I1013 05:29:35.823484 3010 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:29:35.823916 kubelet[3010]: I1013 05:29:35.823545 3010 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 05:29:35.823916 kubelet[3010]: I1013 05:29:35.823550 3010 policy_none.go:47] "Start" Oct 13 05:29:35.826597 kubelet[3010]: E1013 05:29:35.826583 3010 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:29:35.826800 kubelet[3010]: I1013 05:29:35.826793 3010 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:29:35.826852 kubelet[3010]: I1013 05:29:35.826837 3010 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:29:35.827603 kubelet[3010]: I1013 05:29:35.827595 3010 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:29:35.829549 kubelet[3010]: E1013 05:29:35.829440 3010 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:29:35.877027 kubelet[3010]: I1013 05:29:35.877008 3010 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:29:35.877150 kubelet[3010]: I1013 05:29:35.877120 3010 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:35.877260 kubelet[3010]: I1013 05:29:35.877038 3010 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:35.888643 kubelet[3010]: E1013 05:29:35.888619 3010 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:35.929763 kubelet[3010]: I1013 05:29:35.929718 3010 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:29:35.936815 kubelet[3010]: I1013 05:29:35.936746 3010 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:29:35.936815 kubelet[3010]: I1013 05:29:35.936798 3010 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:29:35.969692 kubelet[3010]: I1013 05:29:35.969621 3010 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:35.969692 kubelet[3010]: I1013 05:29:35.969642 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:35.969692 kubelet[3010]: I1013 05:29:35.969651 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:35.969692 kubelet[3010]: I1013 05:29:35.969662 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:35.969692 kubelet[3010]: I1013 05:29:35.969672 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74359763a909b23f4cf380b8e14482b2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"74359763a909b23f4cf380b8e14482b2\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:35.969851 kubelet[3010]: I1013 05:29:35.969680 3010 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74359763a909b23f4cf380b8e14482b2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"74359763a909b23f4cf380b8e14482b2\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:35.969851 kubelet[3010]: I1013 05:29:35.969687 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:29:35.969851 kubelet[3010]: I1013 05:29:35.969695 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:29:35.969851 kubelet[3010]: I1013 05:29:35.969705 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74359763a909b23f4cf380b8e14482b2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"74359763a909b23f4cf380b8e14482b2\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:36.749933 kubelet[3010]: I1013 05:29:36.749804 3010 apiserver.go:52] "Watching apiserver" Oct 13 05:29:36.766849 kubelet[3010]: I1013 05:29:36.766806 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.766794505 podStartE2EDuration="1.766794505s" podCreationTimestamp="2025-10-13 05:29:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:29:36.766152254 +0000 UTC m=+1.297865071" watchObservedRunningTime="2025-10-13 05:29:36.766794505 +0000 UTC m=+1.298507310" Oct 13 05:29:36.767695 kubelet[3010]: I1013 05:29:36.767638 3010 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:29:36.770868 kubelet[3010]: I1013 05:29:36.770833 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.770823126 podStartE2EDuration="2.770823126s" podCreationTimestamp="2025-10-13 05:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:29:36.77042306 +0000 UTC m=+1.302135878" watchObservedRunningTime="2025-10-13 05:29:36.770823126 +0000 UTC m=+1.302535933" Oct 13 05:29:36.818380 kubelet[3010]: I1013 05:29:36.818356 3010 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:29:36.818552 kubelet[3010]: I1013 05:29:36.818540 3010 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:36.821942 kubelet[3010]: E1013 05:29:36.821916 3010 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:29:36.822241 kubelet[3010]: E1013 05:29:36.822226 3010 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:29:36.825077 kubelet[3010]: I1013 05:29:36.825035 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.825024843 podStartE2EDuration="1.825024843s" podCreationTimestamp="2025-10-13 05:29:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:29:36.77569721 +0000 UTC m=+1.307410028" watchObservedRunningTime="2025-10-13 05:29:36.825024843 +0000 UTC m=+1.356737660" Oct 13 05:29:41.542543 kubelet[3010]: I1013 05:29:41.542393 3010 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:29:41.542945 containerd[1712]: time="2025-10-13T05:29:41.542864280Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:29:41.543983 kubelet[3010]: I1013 05:29:41.543970 3010 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:29:42.572114 systemd[1]: Created slice kubepods-besteffort-pod83110096_13e0_4833_aba8_47efc3ea8f89.slice - libcontainer container kubepods-besteffort-pod83110096_13e0_4833_aba8_47efc3ea8f89.slice. Oct 13 05:29:42.611015 kubelet[3010]: I1013 05:29:42.610963 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83110096-13e0-4833-aba8-47efc3ea8f89-kube-proxy\") pod \"kube-proxy-szh8z\" (UID: \"83110096-13e0-4833-aba8-47efc3ea8f89\") " pod="kube-system/kube-proxy-szh8z" Oct 13 05:29:42.611015 kubelet[3010]: I1013 05:29:42.610985 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83110096-13e0-4833-aba8-47efc3ea8f89-xtables-lock\") pod \"kube-proxy-szh8z\" (UID: \"83110096-13e0-4833-aba8-47efc3ea8f89\") " pod="kube-system/kube-proxy-szh8z" Oct 13 05:29:42.611015 kubelet[3010]: I1013 05:29:42.610995 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83110096-13e0-4833-aba8-47efc3ea8f89-lib-modules\") 
pod \"kube-proxy-szh8z\" (UID: \"83110096-13e0-4833-aba8-47efc3ea8f89\") " pod="kube-system/kube-proxy-szh8z" Oct 13 05:29:42.611323 kubelet[3010]: I1013 05:29:42.611299 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jth2h\" (UniqueName: \"kubernetes.io/projected/83110096-13e0-4833-aba8-47efc3ea8f89-kube-api-access-jth2h\") pod \"kube-proxy-szh8z\" (UID: \"83110096-13e0-4833-aba8-47efc3ea8f89\") " pod="kube-system/kube-proxy-szh8z" Oct 13 05:29:42.678183 systemd[1]: Created slice kubepods-besteffort-poda0f3e329_baf2_4c24_aa26_3c3f4ae372d7.slice - libcontainer container kubepods-besteffort-poda0f3e329_baf2_4c24_aa26_3c3f4ae372d7.slice. Oct 13 05:29:42.711739 kubelet[3010]: I1013 05:29:42.711709 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a0f3e329-baf2-4c24-aa26-3c3f4ae372d7-var-lib-calico\") pod \"tigera-operator-db78d5bd4-clnhf\" (UID: \"a0f3e329-baf2-4c24-aa26-3c3f4ae372d7\") " pod="tigera-operator/tigera-operator-db78d5bd4-clnhf" Oct 13 05:29:42.711840 kubelet[3010]: I1013 05:29:42.711753 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zddgv\" (UniqueName: \"kubernetes.io/projected/a0f3e329-baf2-4c24-aa26-3c3f4ae372d7-kube-api-access-zddgv\") pod \"tigera-operator-db78d5bd4-clnhf\" (UID: \"a0f3e329-baf2-4c24-aa26-3c3f4ae372d7\") " pod="tigera-operator/tigera-operator-db78d5bd4-clnhf" Oct 13 05:29:42.883298 containerd[1712]: time="2025-10-13T05:29:42.883233274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-szh8z,Uid:83110096-13e0-4833-aba8-47efc3ea8f89,Namespace:kube-system,Attempt:0,}" Oct 13 05:29:42.919415 containerd[1712]: time="2025-10-13T05:29:42.919383292Z" level=info msg="connecting to shim 423082563e141c06e17d6822b9fea5be5aee85c494474fb52f49f547f235385d" 
address="unix:///run/containerd/s/85322c3cbb637e273c7e87ec140b2597928ef735f1b4f43b347c8842c51513b6" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:29:42.937011 systemd[1]: Started cri-containerd-423082563e141c06e17d6822b9fea5be5aee85c494474fb52f49f547f235385d.scope - libcontainer container 423082563e141c06e17d6822b9fea5be5aee85c494474fb52f49f547f235385d. Oct 13 05:29:42.954558 containerd[1712]: time="2025-10-13T05:29:42.954527691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-szh8z,Uid:83110096-13e0-4833-aba8-47efc3ea8f89,Namespace:kube-system,Attempt:0,} returns sandbox id \"423082563e141c06e17d6822b9fea5be5aee85c494474fb52f49f547f235385d\"" Oct 13 05:29:42.958979 containerd[1712]: time="2025-10-13T05:29:42.958890138Z" level=info msg="CreateContainer within sandbox \"423082563e141c06e17d6822b9fea5be5aee85c494474fb52f49f547f235385d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:29:42.964823 containerd[1712]: time="2025-10-13T05:29:42.964614918Z" level=info msg="Container 8f7300976246c27a2d84b113a57e417fb573128b8fbb39376b5eb0aefcbb26e1: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:29:42.968709 containerd[1712]: time="2025-10-13T05:29:42.968682073Z" level=info msg="CreateContainer within sandbox \"423082563e141c06e17d6822b9fea5be5aee85c494474fb52f49f547f235385d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f7300976246c27a2d84b113a57e417fb573128b8fbb39376b5eb0aefcbb26e1\"" Oct 13 05:29:42.969971 containerd[1712]: time="2025-10-13T05:29:42.969287006Z" level=info msg="StartContainer for \"8f7300976246c27a2d84b113a57e417fb573128b8fbb39376b5eb0aefcbb26e1\"" Oct 13 05:29:42.970480 containerd[1712]: time="2025-10-13T05:29:42.970466840Z" level=info msg="connecting to shim 8f7300976246c27a2d84b113a57e417fb573128b8fbb39376b5eb0aefcbb26e1" address="unix:///run/containerd/s/85322c3cbb637e273c7e87ec140b2597928ef735f1b4f43b347c8842c51513b6" protocol=ttrpc version=3 Oct 13 
05:29:42.989045 systemd[1]: Started cri-containerd-8f7300976246c27a2d84b113a57e417fb573128b8fbb39376b5eb0aefcbb26e1.scope - libcontainer container 8f7300976246c27a2d84b113a57e417fb573128b8fbb39376b5eb0aefcbb26e1. Oct 13 05:29:42.990004 containerd[1712]: time="2025-10-13T05:29:42.989511080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-clnhf,Uid:a0f3e329-baf2-4c24-aa26-3c3f4ae372d7,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:29:43.027726 containerd[1712]: time="2025-10-13T05:29:43.027693823Z" level=info msg="StartContainer for \"8f7300976246c27a2d84b113a57e417fb573128b8fbb39376b5eb0aefcbb26e1\" returns successfully" Oct 13 05:29:43.063056 containerd[1712]: time="2025-10-13T05:29:43.062947144Z" level=info msg="connecting to shim 4d66332e5bcdceacc401965adfcb580a502ac543435bc64ef5c47e1ed0ea2617" address="unix:///run/containerd/s/e190cd00eff290d14dbb0f42616c7bfeb30dff22300665aaf70e65ed1897cc7c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:29:43.086063 systemd[1]: Started cri-containerd-4d66332e5bcdceacc401965adfcb580a502ac543435bc64ef5c47e1ed0ea2617.scope - libcontainer container 4d66332e5bcdceacc401965adfcb580a502ac543435bc64ef5c47e1ed0ea2617. 
Oct 13 05:29:43.126807 containerd[1712]: time="2025-10-13T05:29:43.126755624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-clnhf,Uid:a0f3e329-baf2-4c24-aa26-3c3f4ae372d7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4d66332e5bcdceacc401965adfcb580a502ac543435bc64ef5c47e1ed0ea2617\"" Oct 13 05:29:43.128382 containerd[1712]: time="2025-10-13T05:29:43.128336698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:29:43.835098 kubelet[3010]: I1013 05:29:43.835052 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-szh8z" podStartSLOduration=1.834776663 podStartE2EDuration="1.834776663s" podCreationTimestamp="2025-10-13 05:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:29:43.834702817 +0000 UTC m=+8.366415632" watchObservedRunningTime="2025-10-13 05:29:43.834776663 +0000 UTC m=+8.366489481" Oct 13 05:29:44.557857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667309009.mount: Deactivated successfully. 
Oct 13 05:29:45.430551 containerd[1712]: time="2025-10-13T05:29:45.430523627Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:45.430955 containerd[1712]: time="2025-10-13T05:29:45.430914810Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:29:45.431190 containerd[1712]: time="2025-10-13T05:29:45.431175372Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:45.432246 containerd[1712]: time="2025-10-13T05:29:45.432231982Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:29:45.432910 containerd[1712]: time="2025-10-13T05:29:45.432698863Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.304328937s" Oct 13 05:29:45.432910 containerd[1712]: time="2025-10-13T05:29:45.432720213Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:29:45.436487 containerd[1712]: time="2025-10-13T05:29:45.436459924Z" level=info msg="CreateContainer within sandbox \"4d66332e5bcdceacc401965adfcb580a502ac543435bc64ef5c47e1ed0ea2617\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:29:45.439287 containerd[1712]: time="2025-10-13T05:29:45.439205617Z" level=info msg="Container 
03d609e4ff7415c043d1ec2f9904a1554144df1777cd016a1b3640050f9d3082: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:29:45.458564 containerd[1712]: time="2025-10-13T05:29:45.458497733Z" level=info msg="CreateContainer within sandbox \"4d66332e5bcdceacc401965adfcb580a502ac543435bc64ef5c47e1ed0ea2617\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"03d609e4ff7415c043d1ec2f9904a1554144df1777cd016a1b3640050f9d3082\"" Oct 13 05:29:45.459181 containerd[1712]: time="2025-10-13T05:29:45.459035476Z" level=info msg="StartContainer for \"03d609e4ff7415c043d1ec2f9904a1554144df1777cd016a1b3640050f9d3082\"" Oct 13 05:29:45.459888 containerd[1712]: time="2025-10-13T05:29:45.459872122Z" level=info msg="connecting to shim 03d609e4ff7415c043d1ec2f9904a1554144df1777cd016a1b3640050f9d3082" address="unix:///run/containerd/s/e190cd00eff290d14dbb0f42616c7bfeb30dff22300665aaf70e65ed1897cc7c" protocol=ttrpc version=3 Oct 13 05:29:45.485035 systemd[1]: Started cri-containerd-03d609e4ff7415c043d1ec2f9904a1554144df1777cd016a1b3640050f9d3082.scope - libcontainer container 03d609e4ff7415c043d1ec2f9904a1554144df1777cd016a1b3640050f9d3082. 
Oct 13 05:29:45.514553 containerd[1712]: time="2025-10-13T05:29:45.514516395Z" level=info msg="StartContainer for \"03d609e4ff7415c043d1ec2f9904a1554144df1777cd016a1b3640050f9d3082\" returns successfully" Oct 13 05:29:49.087853 kubelet[3010]: I1013 05:29:49.087468 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-clnhf" podStartSLOduration=4.782227288 podStartE2EDuration="7.087456814s" podCreationTimestamp="2025-10-13 05:29:42 +0000 UTC" firstStartedPulling="2025-10-13 05:29:43.127863238 +0000 UTC m=+7.659576043" lastFinishedPulling="2025-10-13 05:29:45.433092761 +0000 UTC m=+9.964805569" observedRunningTime="2025-10-13 05:29:45.856338201 +0000 UTC m=+10.388051018" watchObservedRunningTime="2025-10-13 05:29:49.087456814 +0000 UTC m=+13.619169639" Oct 13 05:29:50.933034 sudo[2029]: pam_unix(sudo:session): session closed for user root Oct 13 05:29:50.934826 sshd-session[2025]: pam_unix(sshd:session): session closed for user core Oct 13 05:29:50.935240 sshd[2028]: Connection closed by 139.178.89.65 port 56438 Oct 13 05:29:50.937514 systemd[1]: sshd@6-139.178.70.110:22-139.178.89.65:56438.service: Deactivated successfully. Oct 13 05:29:50.940390 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:29:50.941136 systemd[1]: session-9.scope: Consumed 3.651s CPU time, 155.8M memory peak. Oct 13 05:29:50.944195 systemd-logind[1687]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:29:50.945433 systemd-logind[1687]: Removed session 9. Oct 13 05:29:53.361594 systemd[1]: Created slice kubepods-besteffort-poda9d1a468_3ba4_40af_9dab_11562abb6be9.slice - libcontainer container kubepods-besteffort-poda9d1a468_3ba4_40af_9dab_11562abb6be9.slice. 
Oct 13 05:29:53.387867 kubelet[3010]: I1013 05:29:53.387842 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-886sz\" (UniqueName: \"kubernetes.io/projected/a9d1a468-3ba4-40af-9dab-11562abb6be9-kube-api-access-886sz\") pod \"calico-typha-58df4f5f96-wr2n8\" (UID: \"a9d1a468-3ba4-40af-9dab-11562abb6be9\") " pod="calico-system/calico-typha-58df4f5f96-wr2n8" Oct 13 05:29:53.388206 kubelet[3010]: I1013 05:29:53.388144 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9d1a468-3ba4-40af-9dab-11562abb6be9-tigera-ca-bundle\") pod \"calico-typha-58df4f5f96-wr2n8\" (UID: \"a9d1a468-3ba4-40af-9dab-11562abb6be9\") " pod="calico-system/calico-typha-58df4f5f96-wr2n8" Oct 13 05:29:53.388206 kubelet[3010]: I1013 05:29:53.388158 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a9d1a468-3ba4-40af-9dab-11562abb6be9-typha-certs\") pod \"calico-typha-58df4f5f96-wr2n8\" (UID: \"a9d1a468-3ba4-40af-9dab-11562abb6be9\") " pod="calico-system/calico-typha-58df4f5f96-wr2n8" Oct 13 05:29:53.666923 containerd[1712]: time="2025-10-13T05:29:53.666829489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58df4f5f96-wr2n8,Uid:a9d1a468-3ba4-40af-9dab-11562abb6be9,Namespace:calico-system,Attempt:0,}" Oct 13 05:29:53.702828 containerd[1712]: time="2025-10-13T05:29:53.702225099Z" level=info msg="connecting to shim 738e8a781129d6d0e906a47ed6777a7ec8b6dceac658c25f5c95cc87f5f4608e" address="unix:///run/containerd/s/31d685034b8f7cb656a9bea045bc04d5d7a2e16fee74ed7a5cd4f4ee952f38b4" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:29:53.733024 systemd[1]: Started cri-containerd-738e8a781129d6d0e906a47ed6777a7ec8b6dceac658c25f5c95cc87f5f4608e.scope - libcontainer container 
738e8a781129d6d0e906a47ed6777a7ec8b6dceac658c25f5c95cc87f5f4608e. Oct 13 05:29:53.756752 systemd[1]: Created slice kubepods-besteffort-pod629eea79_3370_4aae_af08_622e0d9dbb16.slice - libcontainer container kubepods-besteffort-pod629eea79_3370_4aae_af08_622e0d9dbb16.slice. Oct 13 05:29:53.790283 kubelet[3010]: I1013 05:29:53.790258 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-policysync\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790460 kubelet[3010]: I1013 05:29:53.790416 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-var-lib-calico\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790460 kubelet[3010]: I1013 05:29:53.790432 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-cni-bin-dir\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790460 kubelet[3010]: I1013 05:29:53.790443 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-var-run-calico\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790646 kubelet[3010]: I1013 05:29:53.790597 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-cni-log-dir\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790646 kubelet[3010]: I1013 05:29:53.790620 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-flexvol-driver-host\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790768 kubelet[3010]: I1013 05:29:53.790732 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/629eea79-3370-4aae-af08-622e0d9dbb16-node-certs\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790849 kubelet[3010]: I1013 05:29:53.790813 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-xtables-lock\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.790955 kubelet[3010]: I1013 05:29:53.790888 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-cni-net-dir\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.791041 kubelet[3010]: I1013 05:29:53.791032 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhqft\" (UniqueName: 
\"kubernetes.io/projected/629eea79-3370-4aae-af08-622e0d9dbb16-kube-api-access-xhqft\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.791161 kubelet[3010]: I1013 05:29:53.791109 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/629eea79-3370-4aae-af08-622e0d9dbb16-lib-modules\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.791161 kubelet[3010]: I1013 05:29:53.791127 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/629eea79-3370-4aae-af08-622e0d9dbb16-tigera-ca-bundle\") pod \"calico-node-cmzkn\" (UID: \"629eea79-3370-4aae-af08-622e0d9dbb16\") " pod="calico-system/calico-node-cmzkn" Oct 13 05:29:53.794246 containerd[1712]: time="2025-10-13T05:29:53.794199738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58df4f5f96-wr2n8,Uid:a9d1a468-3ba4-40af-9dab-11562abb6be9,Namespace:calico-system,Attempt:0,} returns sandbox id \"738e8a781129d6d0e906a47ed6777a7ec8b6dceac658c25f5c95cc87f5f4608e\"" Oct 13 05:29:53.796082 containerd[1712]: time="2025-10-13T05:29:53.796027774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:29:53.898132 kubelet[3010]: E1013 05:29:53.898050 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:53.898132 kubelet[3010]: W1013 05:29:53.898062 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:53.898132 kubelet[3010]: E1013 05:29:53.898084 3010 plugins.go:697] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:53.902391 kubelet[3010]: E1013 05:29:53.902372 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:53.902391 kubelet[3010]: W1013 05:29:53.902388 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:53.902484 kubelet[3010]: E1013 05:29:53.902400 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:53.981988 kubelet[3010]: E1013 05:29:53.981839 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5gvc" podUID="8e0aa42c-1379-4484-899b-51874e20e39d" Oct 13 05:29:54.071289 containerd[1712]: time="2025-10-13T05:29:54.070512741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cmzkn,Uid:629eea79-3370-4aae-af08-622e0d9dbb16,Namespace:calico-system,Attempt:0,}" Oct 13 05:29:54.083014 kubelet[3010]: E1013 05:29:54.082993 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.083014 kubelet[3010]: W1013 05:29:54.083008 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.083216 kubelet[3010]: E1013 05:29:54.083024 3010 plugins.go:697] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.083216 kubelet[3010]: E1013 05:29:54.083116 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.083216 kubelet[3010]: W1013 05:29:54.083120 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.083216 kubelet[3010]: E1013 05:29:54.083125 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.083216 kubelet[3010]: E1013 05:29:54.083216 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.083485 kubelet[3010]: W1013 05:29:54.083222 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.083485 kubelet[3010]: E1013 05:29:54.083226 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.083523 kubelet[3010]: E1013 05:29:54.083508 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.083523 kubelet[3010]: W1013 05:29:54.083514 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.083658 kubelet[3010]: E1013 05:29:54.083521 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.083907 kubelet[3010]: E1013 05:29:54.083886 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.083946 kubelet[3010]: W1013 05:29:54.083907 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.083946 kubelet[3010]: E1013 05:29:54.083915 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.095333 kubelet[3010]: I1013 05:29:54.095149 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42tzq\" (UniqueName: \"kubernetes.io/projected/8e0aa42c-1379-4484-899b-51874e20e39d-kube-api-access-42tzq\") pod \"csi-node-driver-s5gvc\" (UID: \"8e0aa42c-1379-4484-899b-51874e20e39d\") " pod="calico-system/csi-node-driver-s5gvc" Oct 13 05:29:54.095863 kubelet[3010]: E1013 05:29:54.095844 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.095863 kubelet[3010]: W1013 05:29:54.095854 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.095863 kubelet[3010]: E1013 05:29:54.095861 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.096036 kubelet[3010]: I1013 05:29:54.095874 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e0aa42c-1379-4484-899b-51874e20e39d-kubelet-dir\") pod \"csi-node-driver-s5gvc\" (UID: \"8e0aa42c-1379-4484-899b-51874e20e39d\") " pod="calico-system/csi-node-driver-s5gvc" Oct 13 05:29:54.097138 kubelet[3010]: E1013 05:29:54.097119 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.097138 kubelet[3010]: W1013 05:29:54.097129 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.097138 kubelet[3010]: E1013 05:29:54.097138 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.097214 kubelet[3010]: I1013 05:29:54.097152 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e0aa42c-1379-4484-899b-51874e20e39d-varrun\") pod \"csi-node-driver-s5gvc\" (UID: \"8e0aa42c-1379-4484-899b-51874e20e39d\") " pod="calico-system/csi-node-driver-s5gvc" Oct 13 05:29:54.097352 kubelet[3010]: E1013 05:29:54.097339 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.097352 kubelet[3010]: W1013 05:29:54.097349 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.097397 kubelet[3010]: E1013 05:29:54.097357 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.097643 kubelet[3010]: I1013 05:29:54.097630 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e0aa42c-1379-4484-899b-51874e20e39d-socket-dir\") pod \"csi-node-driver-s5gvc\" (UID: \"8e0aa42c-1379-4484-899b-51874e20e39d\") " pod="calico-system/csi-node-driver-s5gvc" Oct 13 05:29:54.098024 kubelet[3010]: E1013 05:29:54.098011 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.098059 kubelet[3010]: W1013 05:29:54.098026 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.098059 kubelet[3010]: E1013 05:29:54.098036 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.098789 kubelet[3010]: E1013 05:29:54.098775 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.098789 kubelet[3010]: W1013 05:29:54.098784 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.098957 kubelet[3010]: E1013 05:29:54.098791 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.099458 kubelet[3010]: E1013 05:29:54.099439 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.099458 kubelet[3010]: W1013 05:29:54.099448 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.099458 kubelet[3010]: E1013 05:29:54.099456 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.100200 kubelet[3010]: E1013 05:29:54.100186 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.100200 kubelet[3010]: W1013 05:29:54.100197 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.100292 kubelet[3010]: E1013 05:29:54.100207 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.100292 kubelet[3010]: I1013 05:29:54.100242 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e0aa42c-1379-4484-899b-51874e20e39d-registration-dir\") pod \"csi-node-driver-s5gvc\" (UID: \"8e0aa42c-1379-4484-899b-51874e20e39d\") " pod="calico-system/csi-node-driver-s5gvc" Oct 13 05:29:54.100933 kubelet[3010]: E1013 05:29:54.100920 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.100933 kubelet[3010]: W1013 05:29:54.100931 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.100990 kubelet[3010]: E1013 05:29:54.100939 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.101406 kubelet[3010]: E1013 05:29:54.101392 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.101406 kubelet[3010]: W1013 05:29:54.101404 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.101526 kubelet[3010]: E1013 05:29:54.101414 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.102872 kubelet[3010]: E1013 05:29:54.102866 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.102943 kubelet[3010]: W1013 05:29:54.102915 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.102943 kubelet[3010]: E1013 05:29:54.102922 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.114185 containerd[1712]: time="2025-10-13T05:29:54.114148420Z" level=info msg="connecting to shim c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86" address="unix:///run/containerd/s/bfa2e1a0f80c1472cad98c0726f24aff1f641456d6efd131fdcc5ecdba1d1c14" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:29:54.138211 systemd[1]: Started cri-containerd-c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86.scope - libcontainer container c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86. 
Oct 13 05:29:54.163019 containerd[1712]: time="2025-10-13T05:29:54.162996703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cmzkn,Uid:629eea79-3370-4aae-af08-622e0d9dbb16,Namespace:calico-system,Attempt:0,} returns sandbox id \"c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86\"" Oct 13 05:29:54.201931 kubelet[3010]: E1013 05:29:54.201887 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.201931 kubelet[3010]: W1013 05:29:54.201916 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.201931 kubelet[3010]: E1013 05:29:54.201930 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Oct 13 05:29:54.204972 kubelet[3010]: E1013 05:29:54.204958 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.204972 kubelet[3010]: W1013 05:29:54.204968 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.205015 kubelet[3010]: E1013 05:29:54.204975 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.205127 kubelet[3010]: E1013 05:29:54.205110 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.205127 kubelet[3010]: W1013 05:29:54.205126 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.205190 kubelet[3010]: E1013 05:29:54.205132 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.205389 kubelet[3010]: E1013 05:29:54.205378 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.205389 kubelet[3010]: W1013 05:29:54.205386 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.205470 kubelet[3010]: E1013 05:29:54.205392 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.205956 kubelet[3010]: E1013 05:29:54.205941 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.205956 kubelet[3010]: W1013 05:29:54.205949 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.205956 kubelet[3010]: E1013 05:29:54.205955 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:29:54.206158 kubelet[3010]: E1013 05:29:54.206139 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.206158 kubelet[3010]: W1013 05:29:54.206152 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.206192 kubelet[3010]: E1013 05:29:54.206159 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:54.211334 kubelet[3010]: E1013 05:29:54.211313 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:29:54.211334 kubelet[3010]: W1013 05:29:54.211327 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:29:54.211334 kubelet[3010]: E1013 05:29:54.211339 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:29:55.361955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165844893.mount: Deactivated successfully. 
Oct 13 05:29:55.776884 kubelet[3010]: E1013 05:29:55.776755 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5gvc" podUID="8e0aa42c-1379-4484-899b-51874e20e39d"
Oct 13 05:29:56.311906 containerd[1712]: time="2025-10-13T05:29:56.311683505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:56.317651 containerd[1712]: time="2025-10-13T05:29:56.317632312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Oct 13 05:29:56.330662 containerd[1712]: time="2025-10-13T05:29:56.330622965Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:56.335527 containerd[1712]: time="2025-10-13T05:29:56.335500403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:56.336431 containerd[1712]: time="2025-10-13T05:29:56.336360588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.540308604s"
Oct 13 05:29:56.336431 containerd[1712]: time="2025-10-13T05:29:56.336378694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Oct 13 05:29:56.337217 containerd[1712]: time="2025-10-13T05:29:56.337057907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Oct 13 05:29:56.349459 containerd[1712]: time="2025-10-13T05:29:56.349198761Z" level=info msg="CreateContainer within sandbox \"738e8a781129d6d0e906a47ed6777a7ec8b6dceac658c25f5c95cc87f5f4608e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 13 05:29:56.354389 containerd[1712]: time="2025-10-13T05:29:56.354361491Z" level=info msg="Container 2bc97bb2f19a38cc604b3d33d8b41b3a7412beae53530d8ab456998d34f6c906: CDI devices from CRI Config.CDIDevices: []"
Oct 13 05:29:56.359766 containerd[1712]: time="2025-10-13T05:29:56.359651817Z" level=info msg="CreateContainer within sandbox \"738e8a781129d6d0e906a47ed6777a7ec8b6dceac658c25f5c95cc87f5f4608e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2bc97bb2f19a38cc604b3d33d8b41b3a7412beae53530d8ab456998d34f6c906\""
Oct 13 05:29:56.361737 containerd[1712]: time="2025-10-13T05:29:56.360391155Z" level=info msg="StartContainer for \"2bc97bb2f19a38cc604b3d33d8b41b3a7412beae53530d8ab456998d34f6c906\""
Oct 13 05:29:56.362102 containerd[1712]: time="2025-10-13T05:29:56.362086028Z" level=info msg="connecting to shim 2bc97bb2f19a38cc604b3d33d8b41b3a7412beae53530d8ab456998d34f6c906" address="unix:///run/containerd/s/31d685034b8f7cb656a9bea045bc04d5d7a2e16fee74ed7a5cd4f4ee952f38b4" protocol=ttrpc version=3
Oct 13 05:29:56.379183 systemd[1]: Started cri-containerd-2bc97bb2f19a38cc604b3d33d8b41b3a7412beae53530d8ab456998d34f6c906.scope - libcontainer container 2bc97bb2f19a38cc604b3d33d8b41b3a7412beae53530d8ab456998d34f6c906.
Oct 13 05:29:56.428483 containerd[1712]: time="2025-10-13T05:29:56.428378778Z" level=info msg="StartContainer for \"2bc97bb2f19a38cc604b3d33d8b41b3a7412beae53530d8ab456998d34f6c906\" returns successfully"
Oct 13 05:29:56.930777 kubelet[3010]: I1013 05:29:56.930744 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58df4f5f96-wr2n8" podStartSLOduration=1.388910358 podStartE2EDuration="3.93071784s" podCreationTimestamp="2025-10-13 05:29:53 +0000 UTC" firstStartedPulling="2025-10-13 05:29:53.795124174 +0000 UTC m=+18.326836991" lastFinishedPulling="2025-10-13 05:29:56.336931663 +0000 UTC m=+20.868644473" observedRunningTime="2025-10-13 05:29:56.930335868 +0000 UTC m=+21.462048685" watchObservedRunningTime="2025-10-13 05:29:56.93071784 +0000 UTC m=+21.462430652"
Oct 13 05:29:57.007852 kubelet[3010]: E1013 05:29:57.007778 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.007852 kubelet[3010]: W1013 05:29:57.007816 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.010008 kubelet[3010]: E1013 05:29:57.009963 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.010222 kubelet[3010]: E1013 05:29:57.010186 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.010222 kubelet[3010]: W1013 05:29:57.010195 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.010222 kubelet[3010]: E1013 05:29:57.010203 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.010448 kubelet[3010]: E1013 05:29:57.010412 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.010448 kubelet[3010]: W1013 05:29:57.010419 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.010448 kubelet[3010]: E1013 05:29:57.010425 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.010694 kubelet[3010]: E1013 05:29:57.010643 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.010694 kubelet[3010]: W1013 05:29:57.010652 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.010694 kubelet[3010]: E1013 05:29:57.010660 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.010886 kubelet[3010]: E1013 05:29:57.010854 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.010886 kubelet[3010]: W1013 05:29:57.010862 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.010886 kubelet[3010]: E1013 05:29:57.010868 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.011095 kubelet[3010]: E1013 05:29:57.011064 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.011095 kubelet[3010]: W1013 05:29:57.011070 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.011095 kubelet[3010]: E1013 05:29:57.011075 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.011302 kubelet[3010]: E1013 05:29:57.011267 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.011302 kubelet[3010]: W1013 05:29:57.011273 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.011302 kubelet[3010]: E1013 05:29:57.011278 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.011512 kubelet[3010]: E1013 05:29:57.011464 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.011512 kubelet[3010]: W1013 05:29:57.011470 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.011512 kubelet[3010]: E1013 05:29:57.011475 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.011716 kubelet[3010]: E1013 05:29:57.011685 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.011716 kubelet[3010]: W1013 05:29:57.011693 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.011716 kubelet[3010]: E1013 05:29:57.011698 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.012252 kubelet[3010]: E1013 05:29:57.012044 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.012252 kubelet[3010]: W1013 05:29:57.012049 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.012252 kubelet[3010]: E1013 05:29:57.012055 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.012252 kubelet[3010]: E1013 05:29:57.012159 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.012252 kubelet[3010]: W1013 05:29:57.012166 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.012252 kubelet[3010]: E1013 05:29:57.012171 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.012502 kubelet[3010]: E1013 05:29:57.012386 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.012502 kubelet[3010]: W1013 05:29:57.012391 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.012502 kubelet[3010]: E1013 05:29:57.012397 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.012709 kubelet[3010]: E1013 05:29:57.012676 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.012709 kubelet[3010]: W1013 05:29:57.012683 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.012709 kubelet[3010]: E1013 05:29:57.012688 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.012909 kubelet[3010]: E1013 05:29:57.012877 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.012909 kubelet[3010]: W1013 05:29:57.012883 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.012909 kubelet[3010]: E1013 05:29:57.012888 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.013163 kubelet[3010]: E1013 05:29:57.013107 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.013163 kubelet[3010]: W1013 05:29:57.013115 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.013163 kubelet[3010]: E1013 05:29:57.013123 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.023848 kubelet[3010]: E1013 05:29:57.023829 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.024048 kubelet[3010]: W1013 05:29:57.023973 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.024048 kubelet[3010]: E1013 05:29:57.023992 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.024317 kubelet[3010]: E1013 05:29:57.024310 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.024388 kubelet[3010]: W1013 05:29:57.024379 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.024444 kubelet[3010]: E1013 05:29:57.024436 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.024770 kubelet[3010]: E1013 05:29:57.024734 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.024770 kubelet[3010]: W1013 05:29:57.024742 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.024770 kubelet[3010]: E1013 05:29:57.024759 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.025225 kubelet[3010]: E1013 05:29:57.025119 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.025225 kubelet[3010]: W1013 05:29:57.025125 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.025225 kubelet[3010]: E1013 05:29:57.025131 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.025425 kubelet[3010]: E1013 05:29:57.025385 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.025425 kubelet[3010]: W1013 05:29:57.025392 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.025425 kubelet[3010]: E1013 05:29:57.025397 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.025610 kubelet[3010]: E1013 05:29:57.025571 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.025610 kubelet[3010]: W1013 05:29:57.025577 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.025610 kubelet[3010]: E1013 05:29:57.025582 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.025805 kubelet[3010]: E1013 05:29:57.025759 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.025805 kubelet[3010]: W1013 05:29:57.025768 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.025805 kubelet[3010]: E1013 05:29:57.025776 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.025981 kubelet[3010]: E1013 05:29:57.025975 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.026041 kubelet[3010]: W1013 05:29:57.026020 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.026041 kubelet[3010]: E1013 05:29:57.026031 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.026239 kubelet[3010]: E1013 05:29:57.026188 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.026239 kubelet[3010]: W1013 05:29:57.026195 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.026239 kubelet[3010]: E1013 05:29:57.026200 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.026411 kubelet[3010]: E1013 05:29:57.026406 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.026571 kubelet[3010]: W1013 05:29:57.026455 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.026571 kubelet[3010]: E1013 05:29:57.026464 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.026673 kubelet[3010]: E1013 05:29:57.026657 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.026703 kubelet[3010]: W1013 05:29:57.026671 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.026703 kubelet[3010]: E1013 05:29:57.026681 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.026794 kubelet[3010]: E1013 05:29:57.026781 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.026794 kubelet[3010]: W1013 05:29:57.026790 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.026841 kubelet[3010]: E1013 05:29:57.026798 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.026988 kubelet[3010]: E1013 05:29:57.026962 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.026988 kubelet[3010]: W1013 05:29:57.026971 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.026988 kubelet[3010]: E1013 05:29:57.026979 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.027229 kubelet[3010]: E1013 05:29:57.027155 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.027229 kubelet[3010]: W1013 05:29:57.027161 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.027229 kubelet[3010]: E1013 05:29:57.027167 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.027458 kubelet[3010]: E1013 05:29:57.027326 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.027458 kubelet[3010]: W1013 05:29:57.027332 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.027458 kubelet[3010]: E1013 05:29:57.027337 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.027528 kubelet[3010]: E1013 05:29:57.027516 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.027528 kubelet[3010]: W1013 05:29:57.027526 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.027611 kubelet[3010]: E1013 05:29:57.027534 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.027633 kubelet[3010]: E1013 05:29:57.027622 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.027633 kubelet[3010]: W1013 05:29:57.027626 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.027633 kubelet[3010]: E1013 05:29:57.027631 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.028011 kubelet[3010]: E1013 05:29:57.027974 3010 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 05:29:57.028011 kubelet[3010]: W1013 05:29:57.027983 3010 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 05:29:57.028011 kubelet[3010]: E1013 05:29:57.027991 3010 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 13 05:29:57.650557 containerd[1712]: time="2025-10-13T05:29:57.650189963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:57.650794 containerd[1712]: time="2025-10-13T05:29:57.650782056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Oct 13 05:29:57.661276 containerd[1712]: time="2025-10-13T05:29:57.661251092Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:57.662183 containerd[1712]: time="2025-10-13T05:29:57.662170222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 05:29:57.662528 containerd[1712]: time="2025-10-13T05:29:57.662509271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.325345406s"
Oct 13 05:29:57.662554 containerd[1712]: time="2025-10-13T05:29:57.662529804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Oct 13 05:29:57.664747 containerd[1712]: time="2025-10-13T05:29:57.664605531Z" level=info msg="CreateContainer within sandbox \"c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86\" for container
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:29:57.687158 containerd[1712]: time="2025-10-13T05:29:57.687106581Z" level=info msg="Container 8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:29:57.690012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167440394.mount: Deactivated successfully. Oct 13 05:29:57.692931 containerd[1712]: time="2025-10-13T05:29:57.692886720Z" level=info msg="CreateContainer within sandbox \"c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894\"" Oct 13 05:29:57.693403 containerd[1712]: time="2025-10-13T05:29:57.693289253Z" level=info msg="StartContainer for \"8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894\"" Oct 13 05:29:57.700559 containerd[1712]: time="2025-10-13T05:29:57.700519741Z" level=info msg="connecting to shim 8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894" address="unix:///run/containerd/s/bfa2e1a0f80c1472cad98c0726f24aff1f641456d6efd131fdcc5ecdba1d1c14" protocol=ttrpc version=3 Oct 13 05:29:57.718011 systemd[1]: Started cri-containerd-8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894.scope - libcontainer container 8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894. Oct 13 05:29:57.744532 containerd[1712]: time="2025-10-13T05:29:57.744510491Z" level=info msg="StartContainer for \"8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894\" returns successfully" Oct 13 05:29:57.758495 systemd[1]: cri-containerd-8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894.scope: Deactivated successfully. 
Oct 13 05:29:57.766516 containerd[1712]: time="2025-10-13T05:29:57.766489541Z" level=info msg="received exit event container_id:\"8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894\" id:\"8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894\" pid:3674 exited_at:{seconds:1760333397 nanos:761298919}" Oct 13 05:29:57.780183 kubelet[3010]: E1013 05:29:57.780003 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5gvc" podUID="8e0aa42c-1379-4484-899b-51874e20e39d" Oct 13 05:29:57.781115 containerd[1712]: time="2025-10-13T05:29:57.780951437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894\" id:\"8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894\" pid:3674 exited_at:{seconds:1760333397 nanos:761298919}" Oct 13 05:29:57.792083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f1d0855c5a38ddc1f435af850983e2fd43bdaeea7347687ec1cd0416da97894-rootfs.mount: Deactivated successfully. 
Oct 13 05:29:57.958459 kubelet[3010]: I1013 05:29:57.957696 3010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:29:58.956856 containerd[1712]: time="2025-10-13T05:29:58.956807182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:29:59.777271 kubelet[3010]: E1013 05:29:59.776071 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5gvc" podUID="8e0aa42c-1379-4484-899b-51874e20e39d" Oct 13 05:30:01.815605 kubelet[3010]: E1013 05:30:01.815580 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5gvc" podUID="8e0aa42c-1379-4484-899b-51874e20e39d" Oct 13 05:30:03.172257 containerd[1712]: time="2025-10-13T05:30:03.171842521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:03.172770 containerd[1712]: time="2025-10-13T05:30:03.172752719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:30:03.173266 containerd[1712]: time="2025-10-13T05:30:03.173243902Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:03.174843 containerd[1712]: time="2025-10-13T05:30:03.174561591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 
05:30:03.174878 containerd[1712]: time="2025-10-13T05:30:03.174861790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.218024156s" Oct 13 05:30:03.178117 containerd[1712]: time="2025-10-13T05:30:03.174878458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:30:03.181642 containerd[1712]: time="2025-10-13T05:30:03.181620633Z" level=info msg="CreateContainer within sandbox \"c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:30:03.188224 containerd[1712]: time="2025-10-13T05:30:03.187602550Z" level=info msg="Container 52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:03.190365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1620340880.mount: Deactivated successfully. 
Oct 13 05:30:03.195521 containerd[1712]: time="2025-10-13T05:30:03.195453023Z" level=info msg="CreateContainer within sandbox \"c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36\"" Oct 13 05:30:03.195971 containerd[1712]: time="2025-10-13T05:30:03.195925533Z" level=info msg="StartContainer for \"52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36\"" Oct 13 05:30:03.196913 containerd[1712]: time="2025-10-13T05:30:03.196886407Z" level=info msg="connecting to shim 52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36" address="unix:///run/containerd/s/bfa2e1a0f80c1472cad98c0726f24aff1f641456d6efd131fdcc5ecdba1d1c14" protocol=ttrpc version=3 Oct 13 05:30:03.213004 systemd[1]: Started cri-containerd-52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36.scope - libcontainer container 52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36. Oct 13 05:30:03.235747 containerd[1712]: time="2025-10-13T05:30:03.235680767Z" level=info msg="StartContainer for \"52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36\" returns successfully" Oct 13 05:30:03.777926 kubelet[3010]: E1013 05:30:03.776952 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s5gvc" podUID="8e0aa42c-1379-4484-899b-51874e20e39d" Oct 13 05:30:04.485211 kubelet[3010]: I1013 05:30:04.484975 3010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:30:04.703535 systemd[1]: cri-containerd-52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36.scope: Deactivated successfully. 
Oct 13 05:30:04.703879 systemd[1]: cri-containerd-52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36.scope: Consumed 334ms CPU time, 162.2M memory peak, 6.8M read from disk, 171.3M written to disk. Oct 13 05:30:04.789625 containerd[1712]: time="2025-10-13T05:30:04.789308710Z" level=info msg="received exit event container_id:\"52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36\" id:\"52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36\" pid:3734 exited_at:{seconds:1760333404 nanos:733595456}" Oct 13 05:30:04.790731 containerd[1712]: time="2025-10-13T05:30:04.790719276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36\" id:\"52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36\" pid:3734 exited_at:{seconds:1760333404 nanos:733595456}" Oct 13 05:30:04.817482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52b0037e7f707d62903b8c3d40c753d80922336601808ba9144580b21fdb3f36-rootfs.mount: Deactivated successfully. Oct 13 05:30:04.831118 kubelet[3010]: I1013 05:30:04.830849 3010 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:30:04.862102 systemd[1]: Created slice kubepods-besteffort-podb9ec5d50_d75c_4ff0_8066_08c4ce64269d.slice - libcontainer container kubepods-besteffort-podb9ec5d50_d75c_4ff0_8066_08c4ce64269d.slice. Oct 13 05:30:04.872188 systemd[1]: Created slice kubepods-burstable-pod8143b661_4bb6_4b78_811c_a23e5a23b192.slice - libcontainer container kubepods-burstable-pod8143b661_4bb6_4b78_811c_a23e5a23b192.slice. Oct 13 05:30:04.878325 systemd[1]: Created slice kubepods-burstable-pod60ae1c96_5003_41fe_94bd_83bde0120109.slice - libcontainer container kubepods-burstable-pod60ae1c96_5003_41fe_94bd_83bde0120109.slice. 
Oct 13 05:30:04.884381 systemd[1]: Created slice kubepods-besteffort-pode8c46c11_2670_4392_980f_1e6c0dd2267e.slice - libcontainer container kubepods-besteffort-pode8c46c11_2670_4392_980f_1e6c0dd2267e.slice. Oct 13 05:30:04.893680 systemd[1]: Created slice kubepods-besteffort-podd05ae631_5eb8_4f0a_9e24_3ed77f9c3c01.slice - libcontainer container kubepods-besteffort-podd05ae631_5eb8_4f0a_9e24_3ed77f9c3c01.slice. Oct 13 05:30:04.899825 kubelet[3010]: I1013 05:30:04.899465 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9tt4\" (UniqueName: \"kubernetes.io/projected/e8c46c11-2670-4392-980f-1e6c0dd2267e-kube-api-access-f9tt4\") pod \"calico-apiserver-7b54bbbd77-rpbzl\" (UID: \"e8c46c11-2670-4392-980f-1e6c0dd2267e\") " pod="calico-apiserver/calico-apiserver-7b54bbbd77-rpbzl" Oct 13 05:30:04.899825 kubelet[3010]: I1013 05:30:04.899506 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3728c3ed-8ec9-4e59-b611-2dd20832e7d1-config\") pod \"goldmane-854f97d977-tpgxs\" (UID: \"3728c3ed-8ec9-4e59-b611-2dd20832e7d1\") " pod="calico-system/goldmane-854f97d977-tpgxs" Oct 13 05:30:04.899825 kubelet[3010]: I1013 05:30:04.899520 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6c6z\" (UniqueName: \"kubernetes.io/projected/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-kube-api-access-h6c6z\") pod \"whisker-7fdfd6f6b5-b7prz\" (UID: \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\") " pod="calico-system/whisker-7fdfd6f6b5-b7prz" Oct 13 05:30:04.899825 kubelet[3010]: I1013 05:30:04.899530 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01-tigera-ca-bundle\") pod \"calico-kube-controllers-84c44459f-h6qsm\" (UID: 
\"d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01\") " pod="calico-system/calico-kube-controllers-84c44459f-h6qsm" Oct 13 05:30:04.899825 kubelet[3010]: I1013 05:30:04.899542 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8c46c11-2670-4392-980f-1e6c0dd2267e-calico-apiserver-certs\") pod \"calico-apiserver-7b54bbbd77-rpbzl\" (UID: \"e8c46c11-2670-4392-980f-1e6c0dd2267e\") " pod="calico-apiserver/calico-apiserver-7b54bbbd77-rpbzl" Oct 13 05:30:04.899990 kubelet[3010]: I1013 05:30:04.899553 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwz4t\" (UniqueName: \"kubernetes.io/projected/d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01-kube-api-access-kwz4t\") pod \"calico-kube-controllers-84c44459f-h6qsm\" (UID: \"d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01\") " pod="calico-system/calico-kube-controllers-84c44459f-h6qsm" Oct 13 05:30:04.899990 kubelet[3010]: I1013 05:30:04.899565 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8143b661-4bb6-4b78-811c-a23e5a23b192-config-volume\") pod \"coredns-66bc5c9577-xchb4\" (UID: \"8143b661-4bb6-4b78-811c-a23e5a23b192\") " pod="kube-system/coredns-66bc5c9577-xchb4" Oct 13 05:30:04.899990 kubelet[3010]: I1013 05:30:04.899574 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3728c3ed-8ec9-4e59-b611-2dd20832e7d1-goldmane-ca-bundle\") pod \"goldmane-854f97d977-tpgxs\" (UID: \"3728c3ed-8ec9-4e59-b611-2dd20832e7d1\") " pod="calico-system/goldmane-854f97d977-tpgxs" Oct 13 05:30:04.899990 kubelet[3010]: I1013 05:30:04.899582 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxx7c\" (UniqueName: 
\"kubernetes.io/projected/3728c3ed-8ec9-4e59-b611-2dd20832e7d1-kube-api-access-nxx7c\") pod \"goldmane-854f97d977-tpgxs\" (UID: \"3728c3ed-8ec9-4e59-b611-2dd20832e7d1\") " pod="calico-system/goldmane-854f97d977-tpgxs" Oct 13 05:30:04.899990 kubelet[3010]: I1013 05:30:04.899593 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-backend-key-pair\") pod \"whisker-7fdfd6f6b5-b7prz\" (UID: \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\") " pod="calico-system/whisker-7fdfd6f6b5-b7prz" Oct 13 05:30:04.900077 kubelet[3010]: I1013 05:30:04.899602 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-ca-bundle\") pod \"whisker-7fdfd6f6b5-b7prz\" (UID: \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\") " pod="calico-system/whisker-7fdfd6f6b5-b7prz" Oct 13 05:30:04.900077 kubelet[3010]: I1013 05:30:04.899612 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60ae1c96-5003-41fe-94bd-83bde0120109-config-volume\") pod \"coredns-66bc5c9577-8kxpz\" (UID: \"60ae1c96-5003-41fe-94bd-83bde0120109\") " pod="kube-system/coredns-66bc5c9577-8kxpz" Oct 13 05:30:04.900077 kubelet[3010]: I1013 05:30:04.899623 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlnfl\" (UniqueName: \"kubernetes.io/projected/8143b661-4bb6-4b78-811c-a23e5a23b192-kube-api-access-xlnfl\") pod \"coredns-66bc5c9577-xchb4\" (UID: \"8143b661-4bb6-4b78-811c-a23e5a23b192\") " pod="kube-system/coredns-66bc5c9577-xchb4" Oct 13 05:30:04.900077 kubelet[3010]: I1013 05:30:04.899631 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3728c3ed-8ec9-4e59-b611-2dd20832e7d1-goldmane-key-pair\") pod \"goldmane-854f97d977-tpgxs\" (UID: \"3728c3ed-8ec9-4e59-b611-2dd20832e7d1\") " pod="calico-system/goldmane-854f97d977-tpgxs" Oct 13 05:30:04.900077 kubelet[3010]: I1013 05:30:04.899643 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rsgk\" (UniqueName: \"kubernetes.io/projected/60ae1c96-5003-41fe-94bd-83bde0120109-kube-api-access-8rsgk\") pod \"coredns-66bc5c9577-8kxpz\" (UID: \"60ae1c96-5003-41fe-94bd-83bde0120109\") " pod="kube-system/coredns-66bc5c9577-8kxpz" Oct 13 05:30:04.902846 systemd[1]: Created slice kubepods-besteffort-pod3728c3ed_8ec9_4e59_b611_2dd20832e7d1.slice - libcontainer container kubepods-besteffort-pod3728c3ed_8ec9_4e59_b611_2dd20832e7d1.slice. Oct 13 05:30:04.910190 systemd[1]: Created slice kubepods-besteffort-pod0160d0aa_6623_4d16_8d15_40145f51fe71.slice - libcontainer container kubepods-besteffort-pod0160d0aa_6623_4d16_8d15_40145f51fe71.slice. Oct 13 05:30:04.917929 systemd[1]: Created slice kubepods-besteffort-pod2f6929be_4c04_40a6_9980_63b9680f877e.slice - libcontainer container kubepods-besteffort-pod2f6929be_4c04_40a6_9980_63b9680f877e.slice. 
Oct 13 05:30:04.987574 containerd[1712]: time="2025-10-13T05:30:04.987542090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:30:05.000535 kubelet[3010]: I1013 05:30:05.000404 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0160d0aa-6623-4d16-8d15-40145f51fe71-calico-apiserver-certs\") pod \"calico-apiserver-c475f7844-mpl66\" (UID: \"0160d0aa-6623-4d16-8d15-40145f51fe71\") " pod="calico-apiserver/calico-apiserver-c475f7844-mpl66" Oct 13 05:30:05.000535 kubelet[3010]: I1013 05:30:05.000525 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dr9\" (UniqueName: \"kubernetes.io/projected/0160d0aa-6623-4d16-8d15-40145f51fe71-kube-api-access-k6dr9\") pod \"calico-apiserver-c475f7844-mpl66\" (UID: \"0160d0aa-6623-4d16-8d15-40145f51fe71\") " pod="calico-apiserver/calico-apiserver-c475f7844-mpl66" Oct 13 05:30:05.000535 kubelet[3010]: I1013 05:30:05.000546 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f6929be-4c04-40a6-9980-63b9680f877e-calico-apiserver-certs\") pod \"calico-apiserver-7b54bbbd77-6z8xp\" (UID: \"2f6929be-4c04-40a6-9980-63b9680f877e\") " pod="calico-apiserver/calico-apiserver-7b54bbbd77-6z8xp" Oct 13 05:30:05.000840 kubelet[3010]: I1013 05:30:05.000785 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qljk\" (UniqueName: \"kubernetes.io/projected/2f6929be-4c04-40a6-9980-63b9680f877e-kube-api-access-4qljk\") pod \"calico-apiserver-7b54bbbd77-6z8xp\" (UID: \"2f6929be-4c04-40a6-9980-63b9680f877e\") " pod="calico-apiserver/calico-apiserver-7b54bbbd77-6z8xp" Oct 13 05:30:05.174847 containerd[1712]: time="2025-10-13T05:30:05.174620523Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7fdfd6f6b5-b7prz,Uid:b9ec5d50-d75c-4ff0-8066-08c4ce64269d,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:05.177499 containerd[1712]: time="2025-10-13T05:30:05.177460197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xchb4,Uid:8143b661-4bb6-4b78-811c-a23e5a23b192,Namespace:kube-system,Attempt:0,}" Oct 13 05:30:05.182503 containerd[1712]: time="2025-10-13T05:30:05.182342942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8kxpz,Uid:60ae1c96-5003-41fe-94bd-83bde0120109,Namespace:kube-system,Attempt:0,}" Oct 13 05:30:05.210176 containerd[1712]: time="2025-10-13T05:30:05.210147030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-tpgxs,Uid:3728c3ed-8ec9-4e59-b611-2dd20832e7d1,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:05.212220 containerd[1712]: time="2025-10-13T05:30:05.212182066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-rpbzl,Uid:e8c46c11-2670-4392-980f-1e6c0dd2267e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:30:05.212505 containerd[1712]: time="2025-10-13T05:30:05.212404986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c44459f-h6qsm,Uid:d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:05.228417 containerd[1712]: time="2025-10-13T05:30:05.227875465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-6z8xp,Uid:2f6929be-4c04-40a6-9980-63b9680f877e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:30:05.234924 containerd[1712]: time="2025-10-13T05:30:05.234880661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c475f7844-mpl66,Uid:0160d0aa-6623-4d16-8d15-40145f51fe71,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:30:05.472754 containerd[1712]: time="2025-10-13T05:30:05.472674540Z" level=error msg="Failed to destroy network for sandbox 
\"3fbd8bce4da1da8739d6835896c8bbdd0ca4a113e6ff98b790858d2c592184d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.473513 containerd[1712]: time="2025-10-13T05:30:05.473446129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-tpgxs,Uid:3728c3ed-8ec9-4e59-b611-2dd20832e7d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbd8bce4da1da8739d6835896c8bbdd0ca4a113e6ff98b790858d2c592184d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.475236 containerd[1712]: time="2025-10-13T05:30:05.474556658Z" level=error msg="Failed to destroy network for sandbox \"c77b74204377ada4b7e5a0595a5bc16bbcffe8784a39ab8afd24b87a0cb1ab8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.476851 kubelet[3010]: E1013 05:30:05.475839 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbd8bce4da1da8739d6835896c8bbdd0ca4a113e6ff98b790858d2c592184d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.476851 kubelet[3010]: E1013 05:30:05.475890 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbd8bce4da1da8739d6835896c8bbdd0ca4a113e6ff98b790858d2c592184d5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-tpgxs" Oct 13 05:30:05.476851 kubelet[3010]: E1013 05:30:05.475913 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fbd8bce4da1da8739d6835896c8bbdd0ca4a113e6ff98b790858d2c592184d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-tpgxs" Oct 13 05:30:05.476983 kubelet[3010]: E1013 05:30:05.475953 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-tpgxs_calico-system(3728c3ed-8ec9-4e59-b611-2dd20832e7d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-tpgxs_calico-system(3728c3ed-8ec9-4e59-b611-2dd20832e7d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fbd8bce4da1da8739d6835896c8bbdd0ca4a113e6ff98b790858d2c592184d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-tpgxs" podUID="3728c3ed-8ec9-4e59-b611-2dd20832e7d1" Oct 13 05:30:05.477640 containerd[1712]: time="2025-10-13T05:30:05.477167553Z" level=error msg="Failed to destroy network for sandbox \"ab90f7e8d1e7a66195107175e0caa6ac20e5da1eb2e1a18071a3c3e8f321859b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.478178 containerd[1712]: time="2025-10-13T05:30:05.478159326Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-84c44459f-h6qsm,Uid:d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77b74204377ada4b7e5a0595a5bc16bbcffe8784a39ab8afd24b87a0cb1ab8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.478611 containerd[1712]: time="2025-10-13T05:30:05.478593490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xchb4,Uid:8143b661-4bb6-4b78-811c-a23e5a23b192,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab90f7e8d1e7a66195107175e0caa6ac20e5da1eb2e1a18071a3c3e8f321859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.478771 kubelet[3010]: E1013 05:30:05.478754 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab90f7e8d1e7a66195107175e0caa6ac20e5da1eb2e1a18071a3c3e8f321859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.479926 containerd[1712]: time="2025-10-13T05:30:05.478842171Z" level=error msg="Failed to destroy network for sandbox \"22d0ce62b137eb7fb7a43084b5ee4713701255d69091143d395c2e6dd8473f5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.479979 kubelet[3010]: E1013 05:30:05.478955 3010 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab90f7e8d1e7a66195107175e0caa6ac20e5da1eb2e1a18071a3c3e8f321859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xchb4" Oct 13 05:30:05.479979 kubelet[3010]: E1013 05:30:05.478976 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab90f7e8d1e7a66195107175e0caa6ac20e5da1eb2e1a18071a3c3e8f321859b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xchb4" Oct 13 05:30:05.479979 kubelet[3010]: E1013 05:30:05.479004 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xchb4_kube-system(8143b661-4bb6-4b78-811c-a23e5a23b192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xchb4_kube-system(8143b661-4bb6-4b78-811c-a23e5a23b192)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab90f7e8d1e7a66195107175e0caa6ac20e5da1eb2e1a18071a3c3e8f321859b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xchb4" podUID="8143b661-4bb6-4b78-811c-a23e5a23b192" Oct 13 05:30:05.480424 kubelet[3010]: E1013 05:30:05.478925 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77b74204377ada4b7e5a0595a5bc16bbcffe8784a39ab8afd24b87a0cb1ab8c\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.480424 kubelet[3010]: E1013 05:30:05.479169 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77b74204377ada4b7e5a0595a5bc16bbcffe8784a39ab8afd24b87a0cb1ab8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c44459f-h6qsm" Oct 13 05:30:05.480424 kubelet[3010]: E1013 05:30:05.479183 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77b74204377ada4b7e5a0595a5bc16bbcffe8784a39ab8afd24b87a0cb1ab8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c44459f-h6qsm" Oct 13 05:30:05.480544 kubelet[3010]: E1013 05:30:05.479204 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84c44459f-h6qsm_calico-system(d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84c44459f-h6qsm_calico-system(d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c77b74204377ada4b7e5a0595a5bc16bbcffe8784a39ab8afd24b87a0cb1ab8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84c44459f-h6qsm" podUID="d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01" 
Oct 13 05:30:05.481289 containerd[1712]: time="2025-10-13T05:30:05.481271121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fdfd6f6b5-b7prz,Uid:b9ec5d50-d75c-4ff0-8066-08c4ce64269d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d0ce62b137eb7fb7a43084b5ee4713701255d69091143d395c2e6dd8473f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.481501 kubelet[3010]: E1013 05:30:05.481459 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d0ce62b137eb7fb7a43084b5ee4713701255d69091143d395c2e6dd8473f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.481501 kubelet[3010]: E1013 05:30:05.481481 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d0ce62b137eb7fb7a43084b5ee4713701255d69091143d395c2e6dd8473f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fdfd6f6b5-b7prz" Oct 13 05:30:05.481501 kubelet[3010]: E1013 05:30:05.481492 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d0ce62b137eb7fb7a43084b5ee4713701255d69091143d395c2e6dd8473f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7fdfd6f6b5-b7prz" Oct 13 05:30:05.481610 kubelet[3010]: E1013 05:30:05.481525 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fdfd6f6b5-b7prz_calico-system(b9ec5d50-d75c-4ff0-8066-08c4ce64269d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fdfd6f6b5-b7prz_calico-system(b9ec5d50-d75c-4ff0-8066-08c4ce64269d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22d0ce62b137eb7fb7a43084b5ee4713701255d69091143d395c2e6dd8473f5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fdfd6f6b5-b7prz" podUID="b9ec5d50-d75c-4ff0-8066-08c4ce64269d" Oct 13 05:30:05.486481 containerd[1712]: time="2025-10-13T05:30:05.486453415Z" level=error msg="Failed to destroy network for sandbox \"85b42248cd56d7e4202ee612dcb173caa17872b469fbfa9ff93209636bb8bf7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.487365 containerd[1712]: time="2025-10-13T05:30:05.487343996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8kxpz,Uid:60ae1c96-5003-41fe-94bd-83bde0120109,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85b42248cd56d7e4202ee612dcb173caa17872b469fbfa9ff93209636bb8bf7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.487479 kubelet[3010]: E1013 05:30:05.487459 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"85b42248cd56d7e4202ee612dcb173caa17872b469fbfa9ff93209636bb8bf7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.487512 kubelet[3010]: E1013 05:30:05.487490 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85b42248cd56d7e4202ee612dcb173caa17872b469fbfa9ff93209636bb8bf7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8kxpz" Oct 13 05:30:05.487512 kubelet[3010]: E1013 05:30:05.487503 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85b42248cd56d7e4202ee612dcb173caa17872b469fbfa9ff93209636bb8bf7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8kxpz" Oct 13 05:30:05.487568 kubelet[3010]: E1013 05:30:05.487532 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8kxpz_kube-system(60ae1c96-5003-41fe-94bd-83bde0120109)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8kxpz_kube-system(60ae1c96-5003-41fe-94bd-83bde0120109)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85b42248cd56d7e4202ee612dcb173caa17872b469fbfa9ff93209636bb8bf7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8kxpz" 
podUID="60ae1c96-5003-41fe-94bd-83bde0120109" Oct 13 05:30:05.488076 containerd[1712]: time="2025-10-13T05:30:05.488051633Z" level=error msg="Failed to destroy network for sandbox \"c5c01baf86da7581657c9514c84735af21325229c6b0aa1052051cdc3875f85f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.488855 containerd[1712]: time="2025-10-13T05:30:05.488838501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c475f7844-mpl66,Uid:0160d0aa-6623-4d16-8d15-40145f51fe71,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c01baf86da7581657c9514c84735af21325229c6b0aa1052051cdc3875f85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.489351 kubelet[3010]: E1013 05:30:05.489333 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c01baf86da7581657c9514c84735af21325229c6b0aa1052051cdc3875f85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.489478 kubelet[3010]: E1013 05:30:05.489355 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c01baf86da7581657c9514c84735af21325229c6b0aa1052051cdc3875f85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c475f7844-mpl66" Oct 13 05:30:05.489478 
kubelet[3010]: E1013 05:30:05.489365 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c01baf86da7581657c9514c84735af21325229c6b0aa1052051cdc3875f85f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c475f7844-mpl66" Oct 13 05:30:05.489478 kubelet[3010]: E1013 05:30:05.489389 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c475f7844-mpl66_calico-apiserver(0160d0aa-6623-4d16-8d15-40145f51fe71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c475f7844-mpl66_calico-apiserver(0160d0aa-6623-4d16-8d15-40145f51fe71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5c01baf86da7581657c9514c84735af21325229c6b0aa1052051cdc3875f85f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c475f7844-mpl66" podUID="0160d0aa-6623-4d16-8d15-40145f51fe71" Oct 13 05:30:05.491977 containerd[1712]: time="2025-10-13T05:30:05.491947928Z" level=error msg="Failed to destroy network for sandbox \"6f5a12178a47d60c494db1bb87b9ad29a667192489a7285f3fe20df6eb287b0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.492375 containerd[1712]: time="2025-10-13T05:30:05.492356378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-rpbzl,Uid:e8c46c11-2670-4392-980f-1e6c0dd2267e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"6f5a12178a47d60c494db1bb87b9ad29a667192489a7285f3fe20df6eb287b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.492484 kubelet[3010]: E1013 05:30:05.492464 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5a12178a47d60c494db1bb87b9ad29a667192489a7285f3fe20df6eb287b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.492537 containerd[1712]: time="2025-10-13T05:30:05.492504634Z" level=error msg="Failed to destroy network for sandbox \"e3962eb48b246aaaa2205f29dcc0455417078a5b5b78021782227ac9f64c706c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.492589 kubelet[3010]: E1013 05:30:05.492573 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5a12178a47d60c494db1bb87b9ad29a667192489a7285f3fe20df6eb287b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b54bbbd77-rpbzl" Oct 13 05:30:05.492589 kubelet[3010]: E1013 05:30:05.492592 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5a12178a47d60c494db1bb87b9ad29a667192489a7285f3fe20df6eb287b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b54bbbd77-rpbzl" Oct 13 05:30:05.492762 kubelet[3010]: E1013 05:30:05.492736 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b54bbbd77-rpbzl_calico-apiserver(e8c46c11-2670-4392-980f-1e6c0dd2267e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b54bbbd77-rpbzl_calico-apiserver(e8c46c11-2670-4392-980f-1e6c0dd2267e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f5a12178a47d60c494db1bb87b9ad29a667192489a7285f3fe20df6eb287b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b54bbbd77-rpbzl" podUID="e8c46c11-2670-4392-980f-1e6c0dd2267e" Oct 13 05:30:05.493296 containerd[1712]: time="2025-10-13T05:30:05.493273847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-6z8xp,Uid:2f6929be-4c04-40a6-9980-63b9680f877e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3962eb48b246aaaa2205f29dcc0455417078a5b5b78021782227ac9f64c706c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.493373 kubelet[3010]: E1013 05:30:05.493343 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3962eb48b246aaaa2205f29dcc0455417078a5b5b78021782227ac9f64c706c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 13 05:30:05.493373 kubelet[3010]: E1013 05:30:05.493359 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3962eb48b246aaaa2205f29dcc0455417078a5b5b78021782227ac9f64c706c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b54bbbd77-6z8xp" Oct 13 05:30:05.493373 kubelet[3010]: E1013 05:30:05.493368 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3962eb48b246aaaa2205f29dcc0455417078a5b5b78021782227ac9f64c706c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b54bbbd77-6z8xp" Oct 13 05:30:05.493453 kubelet[3010]: E1013 05:30:05.493388 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b54bbbd77-6z8xp_calico-apiserver(2f6929be-4c04-40a6-9980-63b9680f877e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b54bbbd77-6z8xp_calico-apiserver(2f6929be-4c04-40a6-9980-63b9680f877e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3962eb48b246aaaa2205f29dcc0455417078a5b5b78021782227ac9f64c706c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b54bbbd77-6z8xp" podUID="2f6929be-4c04-40a6-9980-63b9680f877e" Oct 13 05:30:05.782646 systemd[1]: Created slice kubepods-besteffort-pod8e0aa42c_1379_4484_899b_51874e20e39d.slice - 
libcontainer container kubepods-besteffort-pod8e0aa42c_1379_4484_899b_51874e20e39d.slice. Oct 13 05:30:05.785426 containerd[1712]: time="2025-10-13T05:30:05.785407357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5gvc,Uid:8e0aa42c-1379-4484-899b-51874e20e39d,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:05.813298 containerd[1712]: time="2025-10-13T05:30:05.813266270Z" level=error msg="Failed to destroy network for sandbox \"bdbd13f66d13b4c35eed41ab1ec5d48cbbf3f4b2928655ff6f68adae55d6132a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.813778 containerd[1712]: time="2025-10-13T05:30:05.813757644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5gvc,Uid:8e0aa42c-1379-4484-899b-51874e20e39d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdbd13f66d13b4c35eed41ab1ec5d48cbbf3f4b2928655ff6f68adae55d6132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.814525 kubelet[3010]: E1013 05:30:05.813938 3010 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdbd13f66d13b4c35eed41ab1ec5d48cbbf3f4b2928655ff6f68adae55d6132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:30:05.814525 kubelet[3010]: E1013 05:30:05.813974 3010 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bdbd13f66d13b4c35eed41ab1ec5d48cbbf3f4b2928655ff6f68adae55d6132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5gvc" Oct 13 05:30:05.814525 kubelet[3010]: E1013 05:30:05.813987 3010 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdbd13f66d13b4c35eed41ab1ec5d48cbbf3f4b2928655ff6f68adae55d6132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s5gvc" Oct 13 05:30:05.814613 kubelet[3010]: E1013 05:30:05.814021 3010 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s5gvc_calico-system(8e0aa42c-1379-4484-899b-51874e20e39d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s5gvc_calico-system(8e0aa42c-1379-4484-899b-51874e20e39d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdbd13f66d13b4c35eed41ab1ec5d48cbbf3f4b2928655ff6f68adae55d6132a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s5gvc" podUID="8e0aa42c-1379-4484-899b-51874e20e39d" Oct 13 05:30:11.759244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount835364215.mount: Deactivated successfully. 
Oct 13 05:30:11.920154 containerd[1712]: time="2025-10-13T05:30:11.920088680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:30:11.920797 containerd[1712]: time="2025-10-13T05:30:11.909705387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:11.924852 containerd[1712]: time="2025-10-13T05:30:11.924822034Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:11.925383 containerd[1712]: time="2025-10-13T05:30:11.925359069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:11.927195 containerd[1712]: time="2025-10-13T05:30:11.927170820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 6.938062103s" Oct 13 05:30:11.927258 containerd[1712]: time="2025-10-13T05:30:11.927196618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:30:11.954752 containerd[1712]: time="2025-10-13T05:30:11.954723252Z" level=info msg="CreateContainer within sandbox \"c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:30:11.972960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652084304.mount: 
Deactivated successfully. Oct 13 05:30:11.974993 containerd[1712]: time="2025-10-13T05:30:11.973187925Z" level=info msg="Container bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:12.014780 containerd[1712]: time="2025-10-13T05:30:12.014664557Z" level=info msg="CreateContainer within sandbox \"c29df3509fb8234002d0ad82bd4de170886a18883cfbc927db132aa3ec6fad86\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\"" Oct 13 05:30:12.015498 containerd[1712]: time="2025-10-13T05:30:12.015457037Z" level=info msg="StartContainer for \"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\"" Oct 13 05:30:12.019364 containerd[1712]: time="2025-10-13T05:30:12.019319025Z" level=info msg="connecting to shim bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318" address="unix:///run/containerd/s/bfa2e1a0f80c1472cad98c0726f24aff1f641456d6efd131fdcc5ecdba1d1c14" protocol=ttrpc version=3 Oct 13 05:30:12.128315 systemd[1]: Started cri-containerd-bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318.scope - libcontainer container bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318. Oct 13 05:30:12.183627 containerd[1712]: time="2025-10-13T05:30:12.183600446Z" level=info msg="StartContainer for \"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\" returns successfully" Oct 13 05:30:12.609959 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:30:12.611130 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 13 05:30:13.026474 kubelet[3010]: I1013 05:30:13.021341 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cmzkn" podStartSLOduration=2.257480553 podStartE2EDuration="20.02133043s" podCreationTimestamp="2025-10-13 05:29:53 +0000 UTC" firstStartedPulling="2025-10-13 05:29:54.16395687 +0000 UTC m=+18.695669678" lastFinishedPulling="2025-10-13 05:30:11.92780675 +0000 UTC m=+36.459519555" observedRunningTime="2025-10-13 05:30:13.020742418 +0000 UTC m=+37.552455235" watchObservedRunningTime="2025-10-13 05:30:13.02133043 +0000 UTC m=+37.553043242" Oct 13 05:30:13.149764 kubelet[3010]: I1013 05:30:13.149300 3010 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6c6z\" (UniqueName: \"kubernetes.io/projected/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-kube-api-access-h6c6z\") pod \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\" (UID: \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\") " Oct 13 05:30:13.149764 kubelet[3010]: I1013 05:30:13.149338 3010 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-backend-key-pair\") pod \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\" (UID: \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\") " Oct 13 05:30:13.149764 kubelet[3010]: I1013 05:30:13.149355 3010 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-ca-bundle\") pod \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\" (UID: \"b9ec5d50-d75c-4ff0-8066-08c4ce64269d\") " Oct 13 05:30:13.149764 kubelet[3010]: I1013 05:30:13.149556 3010 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b9ec5d50-d75c-4ff0-8066-08c4ce64269d" 
(UID: "b9ec5d50-d75c-4ff0-8066-08c4ce64269d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:30:13.157038 systemd[1]: var-lib-kubelet-pods-b9ec5d50\x2dd75c\x2d4ff0\x2d8066\x2d08c4ce64269d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh6c6z.mount: Deactivated successfully. Oct 13 05:30:13.159198 kubelet[3010]: I1013 05:30:13.159174 3010 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b9ec5d50-d75c-4ff0-8066-08c4ce64269d" (UID: "b9ec5d50-d75c-4ff0-8066-08c4ce64269d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:30:13.159424 systemd[1]: var-lib-kubelet-pods-b9ec5d50\x2dd75c\x2d4ff0\x2d8066\x2d08c4ce64269d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 05:30:13.166685 kubelet[3010]: I1013 05:30:13.159827 3010 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-kube-api-access-h6c6z" (OuterVolumeSpecName: "kube-api-access-h6c6z") pod "b9ec5d50-d75c-4ff0-8066-08c4ce64269d" (UID: "b9ec5d50-d75c-4ff0-8066-08c4ce64269d"). InnerVolumeSpecName "kube-api-access-h6c6z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:30:13.188686 containerd[1712]: time="2025-10-13T05:30:13.188661980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\" id:\"78c313ecd6ef4ff3e6848f937adcc6d697fc0e0a45cb9595ed695d51cbcc901b\" pid:4093 exit_status:1 exited_at:{seconds:1760333413 nanos:183342778}" Oct 13 05:30:13.250515 kubelet[3010]: I1013 05:30:13.250478 3010 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h6c6z\" (UniqueName: \"kubernetes.io/projected/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-kube-api-access-h6c6z\") on node \"localhost\" DevicePath \"\"" Oct 13 05:30:13.250515 kubelet[3010]: I1013 05:30:13.250508 3010 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:30:13.250515 kubelet[3010]: I1013 05:30:13.250516 3010 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9ec5d50-d75c-4ff0-8066-08c4ce64269d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:30:13.794196 systemd[1]: Removed slice kubepods-besteffort-podb9ec5d50_d75c_4ff0_8066_08c4ce64269d.slice - libcontainer container kubepods-besteffort-podb9ec5d50_d75c_4ff0_8066_08c4ce64269d.slice. 
Oct 13 05:30:14.110038 containerd[1712]: time="2025-10-13T05:30:14.109990100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\" id:\"e0b0c0527e4d773a74a6933668452ef1b3ec7dede93e92aa5386f78c142534d5\" pid:4124 exit_status:1 exited_at:{seconds:1760333414 nanos:109446502}" Oct 13 05:30:14.577556 systemd[1]: Created slice kubepods-besteffort-pod7132a020_32cb_4cd5_9c64_ec072d1f05b0.slice - libcontainer container kubepods-besteffort-pod7132a020_32cb_4cd5_9c64_ec072d1f05b0.slice. Oct 13 05:30:14.673296 kubelet[3010]: I1013 05:30:14.673260 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7132a020-32cb-4cd5-9c64-ec072d1f05b0-whisker-backend-key-pair\") pod \"whisker-54c669fd76-gsz4j\" (UID: \"7132a020-32cb-4cd5-9c64-ec072d1f05b0\") " pod="calico-system/whisker-54c669fd76-gsz4j" Oct 13 05:30:14.673676 kubelet[3010]: I1013 05:30:14.673589 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7132a020-32cb-4cd5-9c64-ec072d1f05b0-whisker-ca-bundle\") pod \"whisker-54c669fd76-gsz4j\" (UID: \"7132a020-32cb-4cd5-9c64-ec072d1f05b0\") " pod="calico-system/whisker-54c669fd76-gsz4j" Oct 13 05:30:14.673676 kubelet[3010]: I1013 05:30:14.673612 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvzvk\" (UniqueName: \"kubernetes.io/projected/7132a020-32cb-4cd5-9c64-ec072d1f05b0-kube-api-access-qvzvk\") pod \"whisker-54c669fd76-gsz4j\" (UID: \"7132a020-32cb-4cd5-9c64-ec072d1f05b0\") " pod="calico-system/whisker-54c669fd76-gsz4j" Oct 13 05:30:14.879790 systemd-networkd[1601]: vxlan.calico: Link UP Oct 13 05:30:14.879797 systemd-networkd[1601]: vxlan.calico: Gained carrier Oct 13 05:30:14.887161 containerd[1712]: 
time="2025-10-13T05:30:14.887138529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c669fd76-gsz4j,Uid:7132a020-32cb-4cd5-9c64-ec072d1f05b0,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:15.101830 containerd[1712]: time="2025-10-13T05:30:15.101449567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\" id:\"1d8c9f3c35a128360212940e65e562f02077a0d0973d577a9766d4f88df415ed\" pid:4330 exit_status:1 exited_at:{seconds:1760333415 nanos:101230238}" Oct 13 05:30:15.698595 systemd-networkd[1601]: cali1416c84cb4f: Link UP Oct 13 05:30:15.699428 systemd-networkd[1601]: cali1416c84cb4f: Gained carrier Oct 13 05:30:15.714911 containerd[1712]: 2025-10-13 05:30:15.071 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54c669fd76--gsz4j-eth0 whisker-54c669fd76- calico-system 7132a020-32cb-4cd5-9c64-ec072d1f05b0 899 0 2025-10-13 05:30:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54c669fd76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54c669fd76-gsz4j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1416c84cb4f [] [] }} ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-" Oct 13 05:30:15.714911 containerd[1712]: 2025-10-13 05:30:15.071 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" Oct 13 05:30:15.714911 containerd[1712]: 2025-10-13 05:30:15.627 
[INFO][4343] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" HandleID="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Workload="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.630 [INFO][4343] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" HandleID="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Workload="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c22d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54c669fd76-gsz4j", "timestamp":"2025-10-13 05:30:15.627159428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.630 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.633 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.633 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.648 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" host="localhost" Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.657 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.661 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.662 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.664 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:15.715084 containerd[1712]: 2025-10-13 05:30:15.664 [INFO][4343] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" host="localhost" Oct 13 05:30:15.715257 containerd[1712]: 2025-10-13 05:30:15.665 [INFO][4343] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4 Oct 13 05:30:15.715257 containerd[1712]: 2025-10-13 05:30:15.667 [INFO][4343] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" host="localhost" Oct 13 05:30:15.715257 containerd[1712]: 2025-10-13 05:30:15.670 [INFO][4343] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" host="localhost" Oct 13 05:30:15.715257 containerd[1712]: 2025-10-13 05:30:15.670 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" host="localhost" Oct 13 05:30:15.715257 containerd[1712]: 2025-10-13 05:30:15.671 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:30:15.715257 containerd[1712]: 2025-10-13 05:30:15.671 [INFO][4343] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" HandleID="k8s-pod-network.b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Workload="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" Oct 13 05:30:15.720184 containerd[1712]: 2025-10-13 05:30:15.674 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54c669fd76--gsz4j-eth0", GenerateName:"whisker-54c669fd76-", Namespace:"calico-system", SelfLink:"", UID:"7132a020-32cb-4cd5-9c64-ec072d1f05b0", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 30, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c669fd76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54c669fd76-gsz4j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1416c84cb4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:15.720184 containerd[1712]: 2025-10-13 05:30:15.675 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" Oct 13 05:30:15.720260 containerd[1712]: 2025-10-13 05:30:15.675 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1416c84cb4f ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" Oct 13 05:30:15.720260 containerd[1712]: 2025-10-13 05:30:15.698 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" Oct 13 05:30:15.720293 containerd[1712]: 2025-10-13 05:30:15.699 [INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" 
WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54c669fd76--gsz4j-eth0", GenerateName:"whisker-54c669fd76-", Namespace:"calico-system", SelfLink:"", UID:"7132a020-32cb-4cd5-9c64-ec072d1f05b0", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 30, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c669fd76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4", Pod:"whisker-54c669fd76-gsz4j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1416c84cb4f", MAC:"76:e9:9b:10:33:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:15.720333 containerd[1712]: 2025-10-13 05:30:15.712 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" Namespace="calico-system" Pod="whisker-54c669fd76-gsz4j" WorkloadEndpoint="localhost-k8s-whisker--54c669fd76--gsz4j-eth0" Oct 13 05:30:15.780539 kubelet[3010]: I1013 05:30:15.780402 3010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b9ec5d50-d75c-4ff0-8066-08c4ce64269d" path="/var/lib/kubelet/pods/b9ec5d50-d75c-4ff0-8066-08c4ce64269d/volumes" Oct 13 05:30:15.792365 containerd[1712]: time="2025-10-13T05:30:15.792330843Z" level=info msg="connecting to shim b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4" address="unix:///run/containerd/s/ee60d0a2aaae82778092e1297b0b67bad0618168e038aad6f445507a81c5b674" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:15.816058 systemd[1]: Started cri-containerd-b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4.scope - libcontainer container b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4. Oct 13 05:30:15.826778 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:15.875641 containerd[1712]: time="2025-10-13T05:30:15.875612045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c669fd76-gsz4j,Uid:7132a020-32cb-4cd5-9c64-ec072d1f05b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4\"" Oct 13 05:30:15.876975 containerd[1712]: time="2025-10-13T05:30:15.876531460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:30:16.787050 systemd-networkd[1601]: vxlan.calico: Gained IPv6LL Oct 13 05:30:17.618979 systemd-networkd[1601]: cali1416c84cb4f: Gained IPv6LL Oct 13 05:30:17.760780 containerd[1712]: time="2025-10-13T05:30:17.760363194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:17.761257 containerd[1712]: time="2025-10-13T05:30:17.761246202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:30:17.761759 containerd[1712]: time="2025-10-13T05:30:17.761746911Z" level=info msg="ImageCreate event 
name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:17.763092 containerd[1712]: time="2025-10-13T05:30:17.763079629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:17.763847 containerd[1712]: time="2025-10-13T05:30:17.763834429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.887286684s" Oct 13 05:30:17.763912 containerd[1712]: time="2025-10-13T05:30:17.763890283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:30:17.766950 containerd[1712]: time="2025-10-13T05:30:17.766679823Z" level=info msg="CreateContainer within sandbox \"b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:30:17.773879 containerd[1712]: time="2025-10-13T05:30:17.773410164Z" level=info msg="Container 5f1d83bc5dc4246f501cf40ddc70615bde7f9117a106a7d537dd8215276a39f5: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:17.776794 containerd[1712]: time="2025-10-13T05:30:17.776771112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c44459f-h6qsm,Uid:d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:17.777544 containerd[1712]: time="2025-10-13T05:30:17.777530351Z" level=info msg="CreateContainer within sandbox 
\"b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5f1d83bc5dc4246f501cf40ddc70615bde7f9117a106a7d537dd8215276a39f5\"" Oct 13 05:30:17.777767 containerd[1712]: time="2025-10-13T05:30:17.777756101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5gvc,Uid:8e0aa42c-1379-4484-899b-51874e20e39d,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:17.780519 containerd[1712]: time="2025-10-13T05:30:17.780502283Z" level=info msg="StartContainer for \"5f1d83bc5dc4246f501cf40ddc70615bde7f9117a106a7d537dd8215276a39f5\"" Oct 13 05:30:17.785054 containerd[1712]: time="2025-10-13T05:30:17.785030237Z" level=info msg="connecting to shim 5f1d83bc5dc4246f501cf40ddc70615bde7f9117a106a7d537dd8215276a39f5" address="unix:///run/containerd/s/ee60d0a2aaae82778092e1297b0b67bad0618168e038aad6f445507a81c5b674" protocol=ttrpc version=3 Oct 13 05:30:17.807584 systemd[1]: Started cri-containerd-5f1d83bc5dc4246f501cf40ddc70615bde7f9117a106a7d537dd8215276a39f5.scope - libcontainer container 5f1d83bc5dc4246f501cf40ddc70615bde7f9117a106a7d537dd8215276a39f5. 
Oct 13 05:30:17.881338 containerd[1712]: time="2025-10-13T05:30:17.881229767Z" level=info msg="StartContainer for \"5f1d83bc5dc4246f501cf40ddc70615bde7f9117a106a7d537dd8215276a39f5\" returns successfully" Oct 13 05:30:17.883096 containerd[1712]: time="2025-10-13T05:30:17.883022592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:30:17.912252 systemd-networkd[1601]: cali821da083a6d: Link UP Oct 13 05:30:17.914833 systemd-networkd[1601]: cali821da083a6d: Gained carrier Oct 13 05:30:17.930675 containerd[1712]: 2025-10-13 05:30:17.855 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s5gvc-eth0 csi-node-driver- calico-system 8e0aa42c-1379-4484-899b-51874e20e39d 695 0 2025-10-13 05:29:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s5gvc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali821da083a6d [] [] }} ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Namespace="calico-system" Pod="csi-node-driver-s5gvc" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-" Oct 13 05:30:17.930675 containerd[1712]: 2025-10-13 05:30:17.855 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Namespace="calico-system" Pod="csi-node-driver-s5gvc" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-eth0" Oct 13 05:30:17.930675 containerd[1712]: 2025-10-13 05:30:17.877 [INFO][4494] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" HandleID="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Workload="localhost-k8s-csi--node--driver--s5gvc-eth0" Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.877 [INFO][4494] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" HandleID="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Workload="localhost-k8s-csi--node--driver--s5gvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s5gvc", "timestamp":"2025-10-13 05:30:17.877470845 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.877 [INFO][4494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.877 [INFO][4494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.877 [INFO][4494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.888 [INFO][4494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" host="localhost" Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.891 [INFO][4494] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.893 [INFO][4494] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.895 [INFO][4494] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.896 [INFO][4494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:17.935716 containerd[1712]: 2025-10-13 05:30:17.896 [INFO][4494] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" host="localhost" Oct 13 05:30:17.935992 containerd[1712]: 2025-10-13 05:30:17.899 [INFO][4494] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158 Oct 13 05:30:17.935992 containerd[1712]: 2025-10-13 05:30:17.901 [INFO][4494] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" host="localhost" Oct 13 05:30:17.935992 containerd[1712]: 2025-10-13 05:30:17.906 [INFO][4494] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" host="localhost" Oct 13 05:30:17.935992 containerd[1712]: 2025-10-13 05:30:17.906 [INFO][4494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" host="localhost" Oct 13 05:30:17.935992 containerd[1712]: 2025-10-13 05:30:17.906 [INFO][4494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:30:17.935992 containerd[1712]: 2025-10-13 05:30:17.906 [INFO][4494] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" HandleID="k8s-pod-network.c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Workload="localhost-k8s-csi--node--driver--s5gvc-eth0" Oct 13 05:30:17.936113 containerd[1712]: 2025-10-13 05:30:17.909 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Namespace="calico-system" Pod="csi-node-driver-s5gvc" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5gvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e0aa42c-1379-4484-899b-51874e20e39d", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s5gvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali821da083a6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:17.936163 containerd[1712]: 2025-10-13 05:30:17.910 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Namespace="calico-system" Pod="csi-node-driver-s5gvc" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-eth0" Oct 13 05:30:17.936163 containerd[1712]: 2025-10-13 05:30:17.910 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali821da083a6d ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Namespace="calico-system" Pod="csi-node-driver-s5gvc" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-eth0" Oct 13 05:30:17.936163 containerd[1712]: 2025-10-13 05:30:17.915 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Namespace="calico-system" Pod="csi-node-driver-s5gvc" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-eth0" Oct 13 05:30:17.941330 containerd[1712]: 2025-10-13 05:30:17.915 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" 
Namespace="calico-system" Pod="csi-node-driver-s5gvc" WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s5gvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e0aa42c-1379-4484-899b-51874e20e39d", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158", Pod:"csi-node-driver-s5gvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali821da083a6d", MAC:"46:85:a3:6b:19:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:17.941390 containerd[1712]: 2025-10-13 05:30:17.928 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" Namespace="calico-system" Pod="csi-node-driver-s5gvc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s5gvc-eth0" Oct 13 05:30:17.958443 containerd[1712]: time="2025-10-13T05:30:17.958385078Z" level=info msg="connecting to shim c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158" address="unix:///run/containerd/s/8c9e7f9bb37fe3e636081fe8f4d39f737711559dc980e9b367938531858e4fa8" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:17.975000 systemd[1]: Started cri-containerd-c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158.scope - libcontainer container c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158. Oct 13 05:30:17.982213 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:17.996362 containerd[1712]: time="2025-10-13T05:30:17.996010011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s5gvc,Uid:8e0aa42c-1379-4484-899b-51874e20e39d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158\"" Oct 13 05:30:18.017017 systemd-networkd[1601]: calic9bd72dc9e5: Link UP Oct 13 05:30:18.017358 systemd-networkd[1601]: calic9bd72dc9e5: Gained carrier Oct 13 05:30:18.033264 containerd[1712]: 2025-10-13 05:30:17.835 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0 calico-kube-controllers-84c44459f- calico-system d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01 822 0 2025-10-13 05:29:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84c44459f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84c44459f-h6qsm eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] calic9bd72dc9e5 [] [] }} ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-" Oct 13 05:30:18.033264 containerd[1712]: 2025-10-13 05:30:17.835 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" Oct 13 05:30:18.033264 containerd[1712]: 2025-10-13 05:30:17.883 [INFO][4488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" HandleID="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Workload="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.884 [INFO][4488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" HandleID="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Workload="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7660), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84c44459f-h6qsm", "timestamp":"2025-10-13 05:30:17.883819258 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.884 [INFO][4488] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.906 [INFO][4488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.906 [INFO][4488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.984 [INFO][4488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" host="localhost" Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.994 [INFO][4488] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.997 [INFO][4488] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:17.999 [INFO][4488] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:18.000 [INFO][4488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:18.033416 containerd[1712]: 2025-10-13 05:30:18.000 [INFO][4488] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" host="localhost" Oct 13 05:30:18.033597 containerd[1712]: 2025-10-13 05:30:18.001 [INFO][4488] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64 Oct 13 05:30:18.033597 containerd[1712]: 2025-10-13 05:30:18.005 [INFO][4488] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" host="localhost" Oct 13 05:30:18.033597 
containerd[1712]: 2025-10-13 05:30:18.012 [INFO][4488] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" host="localhost" Oct 13 05:30:18.033597 containerd[1712]: 2025-10-13 05:30:18.012 [INFO][4488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" host="localhost" Oct 13 05:30:18.033597 containerd[1712]: 2025-10-13 05:30:18.012 [INFO][4488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:30:18.033597 containerd[1712]: 2025-10-13 05:30:18.012 [INFO][4488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" HandleID="k8s-pod-network.0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Workload="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" Oct 13 05:30:18.038290 containerd[1712]: 2025-10-13 05:30:18.014 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0", GenerateName:"calico-kube-controllers-84c44459f-", Namespace:"calico-system", SelfLink:"", UID:"d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c44459f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84c44459f-h6qsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic9bd72dc9e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:18.038705 containerd[1712]: 2025-10-13 05:30:18.014 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" Oct 13 05:30:18.038705 containerd[1712]: 2025-10-13 05:30:18.014 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9bd72dc9e5 ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" Oct 13 05:30:18.038705 containerd[1712]: 2025-10-13 05:30:18.017 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" 
Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" Oct 13 05:30:18.038771 containerd[1712]: 2025-10-13 05:30:18.017 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0", GenerateName:"calico-kube-controllers-84c44459f-", Namespace:"calico-system", SelfLink:"", UID:"d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c44459f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64", Pod:"calico-kube-controllers-84c44459f-h6qsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"calic9bd72dc9e5", MAC:"2e:86:86:f0:ef:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:18.038814 containerd[1712]: 2025-10-13 05:30:18.030 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" Namespace="calico-system" Pod="calico-kube-controllers-84c44459f-h6qsm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c44459f--h6qsm-eth0" Oct 13 05:30:18.055919 containerd[1712]: time="2025-10-13T05:30:18.054324333Z" level=info msg="connecting to shim 0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64" address="unix:///run/containerd/s/b5d9f6d8b9b2050835214a4646bf47d734d085cb823963222c0b4dfa4c65604c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:18.075030 systemd[1]: Started cri-containerd-0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64.scope - libcontainer container 0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64. 
Oct 13 05:30:18.084478 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:18.122487 containerd[1712]: time="2025-10-13T05:30:18.122459132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c44459f-h6qsm,Uid:d05ae631-5eb8-4f0a-9e24-3ed77f9c3c01,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64\"" Oct 13 05:30:18.777108 containerd[1712]: time="2025-10-13T05:30:18.777043116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c475f7844-mpl66,Uid:0160d0aa-6623-4d16-8d15-40145f51fe71,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:30:18.777803 containerd[1712]: time="2025-10-13T05:30:18.777542807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-6z8xp,Uid:2f6929be-4c04-40a6-9980-63b9680f877e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:30:18.778724 containerd[1712]: time="2025-10-13T05:30:18.778519526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xchb4,Uid:8143b661-4bb6-4b78-811c-a23e5a23b192,Namespace:kube-system,Attempt:0,}" Oct 13 05:30:18.779476 containerd[1712]: time="2025-10-13T05:30:18.779455080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-rpbzl,Uid:e8c46c11-2670-4392-980f-1e6c0dd2267e,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:30:18.930818 systemd-networkd[1601]: cali094efdb6f51: Link UP Oct 13 05:30:18.931549 systemd-networkd[1601]: cali094efdb6f51: Gained carrier Oct 13 05:30:18.954800 containerd[1712]: 2025-10-13 05:30:18.830 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0 calico-apiserver-c475f7844- calico-apiserver 0160d0aa-6623-4d16-8d15-40145f51fe71 825 0 2025-10-13 05:29:51 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c475f7844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c475f7844-mpl66 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali094efdb6f51 [] [] }} ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-" Oct 13 05:30:18.954800 containerd[1712]: 2025-10-13 05:30:18.830 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" Oct 13 05:30:18.954800 containerd[1712]: 2025-10-13 05:30:18.871 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" HandleID="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Workload="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.873 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" HandleID="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Workload="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c475f7844-mpl66", "timestamp":"2025-10-13 05:30:18.871252761 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.873 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.873 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.873 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.885 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" host="localhost" Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.890 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.895 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.897 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.904 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:18.955200 containerd[1712]: 2025-10-13 05:30:18.904 [INFO][4650] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" host="localhost" Oct 13 05:30:18.962229 containerd[1712]: 2025-10-13 05:30:18.908 [INFO][4650] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645 Oct 13 05:30:18.962229 containerd[1712]: 2025-10-13 05:30:18.912 [INFO][4650] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" host="localhost" Oct 13 05:30:18.962229 containerd[1712]: 2025-10-13 05:30:18.922 [INFO][4650] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" host="localhost" Oct 13 05:30:18.962229 containerd[1712]: 2025-10-13 05:30:18.922 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" host="localhost" Oct 13 05:30:18.962229 containerd[1712]: 2025-10-13 05:30:18.922 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:30:18.962229 containerd[1712]: 2025-10-13 05:30:18.922 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" HandleID="k8s-pod-network.2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Workload="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" Oct 13 05:30:18.963190 containerd[1712]: 2025-10-13 05:30:18.925 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0", GenerateName:"calico-apiserver-c475f7844-", Namespace:"calico-apiserver", SelfLink:"", UID:"0160d0aa-6623-4d16-8d15-40145f51fe71", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c475f7844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c475f7844-mpl66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali094efdb6f51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:18.963247 containerd[1712]: 2025-10-13 05:30:18.925 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" Oct 13 05:30:18.963247 containerd[1712]: 2025-10-13 05:30:18.925 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali094efdb6f51 ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" Oct 13 05:30:18.963247 containerd[1712]: 2025-10-13 05:30:18.932 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" Oct 13 05:30:18.963298 containerd[1712]: 2025-10-13 05:30:18.935 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0", GenerateName:"calico-apiserver-c475f7844-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"0160d0aa-6623-4d16-8d15-40145f51fe71", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c475f7844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645", Pod:"calico-apiserver-c475f7844-mpl66", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali094efdb6f51", MAC:"2e:da:71:e3:4a:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:18.963338 containerd[1712]: 2025-10-13 05:30:18.950 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-mpl66" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--mpl66-eth0" Oct 13 05:30:19.026154 systemd-networkd[1601]: calibf4f052c74b: Link UP Oct 13 05:30:19.027389 systemd-networkd[1601]: calibf4f052c74b: Gained carrier Oct 13 05:30:19.049134 containerd[1712]: 2025-10-13 05:30:18.890 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0 calico-apiserver-7b54bbbd77- calico-apiserver e8c46c11-2670-4392-980f-1e6c0dd2267e 815 0 2025-10-13 05:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b54bbbd77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b54bbbd77-rpbzl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibf4f052c74b [] [] }} ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-" Oct 13 05:30:19.049134 containerd[1712]: 2025-10-13 05:30:18.890 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:19.049134 containerd[1712]: 2025-10-13 05:30:18.941 [INFO][4688] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.943 [INFO][4688] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002cb270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b54bbbd77-rpbzl", "timestamp":"2025-10-13 05:30:18.941524739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.943 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.943 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.943 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.985 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" host="localhost" Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.994 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.996 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.997 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.998 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.049302 containerd[1712]: 2025-10-13 05:30:18.998 [INFO][4688] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" host="localhost" Oct 13 05:30:19.050226 containerd[1712]: 2025-10-13 05:30:18.999 [INFO][4688] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8 Oct 13 05:30:19.050226 containerd[1712]: 2025-10-13 05:30:19.005 [INFO][4688] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" host="localhost" Oct 13 05:30:19.050226 containerd[1712]: 2025-10-13 05:30:19.020 [INFO][4688] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" host="localhost" Oct 13 05:30:19.050226 containerd[1712]: 2025-10-13 05:30:19.020 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" host="localhost" Oct 13 05:30:19.050226 containerd[1712]: 2025-10-13 05:30:19.020 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:30:19.050226 containerd[1712]: 2025-10-13 05:30:19.020 [INFO][4688] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:19.050332 containerd[1712]: 2025-10-13 05:30:19.023 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0", GenerateName:"calico-apiserver-7b54bbbd77-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8c46c11-2670-4392-980f-1e6c0dd2267e", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b54bbbd77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b54bbbd77-rpbzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf4f052c74b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.050383 containerd[1712]: 2025-10-13 05:30:19.024 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:19.050383 containerd[1712]: 2025-10-13 05:30:19.024 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf4f052c74b ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:19.050383 containerd[1712]: 2025-10-13 05:30:19.033 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:19.050434 containerd[1712]: 2025-10-13 05:30:19.037 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0", 
GenerateName:"calico-apiserver-7b54bbbd77-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8c46c11-2670-4392-980f-1e6c0dd2267e", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b54bbbd77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8", Pod:"calico-apiserver-7b54bbbd77-rpbzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibf4f052c74b", MAC:"86:ed:d6:95:ec:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.050479 containerd[1712]: 2025-10-13 05:30:19.045 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-rpbzl" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:19.057216 containerd[1712]: time="2025-10-13T05:30:19.056987172Z" level=info msg="connecting to shim 2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645" 
address="unix:///run/containerd/s/0eb69fedae2263536566f04e98f874cc70d34ed4e5c37e7f2d9ea5bc14e3e3c0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:19.070843 containerd[1712]: time="2025-10-13T05:30:19.070813139Z" level=info msg="connecting to shim 26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" address="unix:///run/containerd/s/ca24318a09c69cee6bdec8ad637e0e61e36904e99fe4f7a951198a61b5f9c53d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:19.081052 systemd[1]: Started cri-containerd-2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645.scope - libcontainer container 2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645. Oct 13 05:30:19.095155 systemd[1]: Started cri-containerd-26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8.scope - libcontainer container 26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8. Oct 13 05:30:19.102611 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:19.109982 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:19.129602 systemd-networkd[1601]: calif8433d3262c: Link UP Oct 13 05:30:19.130458 systemd-networkd[1601]: calif8433d3262c: Gained carrier Oct 13 05:30:19.155350 systemd-networkd[1601]: cali821da083a6d: Gained IPv6LL Oct 13 05:30:19.160010 containerd[1712]: 2025-10-13 05:30:18.878 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--xchb4-eth0 coredns-66bc5c9577- kube-system 8143b661-4bb6-4b78-811c-a23e5a23b192 823 0 2025-10-13 05:29:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-xchb4 eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] calif8433d3262c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-" Oct 13 05:30:19.160010 containerd[1712]: 2025-10-13 05:30:18.878 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" Oct 13 05:30:19.160010 containerd[1712]: 2025-10-13 05:30:18.960 [INFO][4679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" HandleID="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Workload="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:18.960 [INFO][4679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" HandleID="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Workload="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047d110), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-xchb4", "timestamp":"2025-10-13 05:30:18.960252708 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:18.960 [INFO][4679] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock. Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.020 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.020 [INFO][4679] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.087 [INFO][4679] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" host="localhost" Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.095 [INFO][4679] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.099 [INFO][4679] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.102 [INFO][4679] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.105 [INFO][4679] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.160148 containerd[1712]: 2025-10-13 05:30:19.105 [INFO][4679] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" host="localhost" Oct 13 05:30:19.160589 containerd[1712]: 2025-10-13 05:30:19.106 [INFO][4679] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d Oct 13 05:30:19.160589 containerd[1712]: 2025-10-13 05:30:19.111 [INFO][4679] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" host="localhost" Oct 13 05:30:19.160589 containerd[1712]: 
2025-10-13 05:30:19.116 [INFO][4679] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" host="localhost" Oct 13 05:30:19.160589 containerd[1712]: 2025-10-13 05:30:19.116 [INFO][4679] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" host="localhost" Oct 13 05:30:19.160589 containerd[1712]: 2025-10-13 05:30:19.116 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:30:19.160589 containerd[1712]: 2025-10-13 05:30:19.116 [INFO][4679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" HandleID="k8s-pod-network.981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Workload="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" Oct 13 05:30:19.160727 containerd[1712]: 2025-10-13 05:30:19.118 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xchb4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8143b661-4bb6-4b78-811c-a23e5a23b192", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-xchb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8433d3262c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.160727 containerd[1712]: 2025-10-13 05:30:19.118 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" Oct 13 05:30:19.160727 containerd[1712]: 2025-10-13 05:30:19.118 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8433d3262c ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" 
Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" Oct 13 05:30:19.160727 containerd[1712]: 2025-10-13 05:30:19.135 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" Oct 13 05:30:19.160727 containerd[1712]: 2025-10-13 05:30:19.135 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--xchb4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8143b661-4bb6-4b78-811c-a23e5a23b192", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d", Pod:"coredns-66bc5c9577-xchb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif8433d3262c", MAC:"8e:ed:f5:a1:00:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.160727 containerd[1712]: 2025-10-13 05:30:19.157 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" Namespace="kube-system" Pod="coredns-66bc5c9577-xchb4" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--xchb4-eth0" Oct 13 05:30:19.166555 containerd[1712]: time="2025-10-13T05:30:19.166522800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c475f7844-mpl66,Uid:0160d0aa-6623-4d16-8d15-40145f51fe71,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645\"" Oct 13 05:30:19.177086 containerd[1712]: time="2025-10-13T05:30:19.177062532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-rpbzl,Uid:e8c46c11-2670-4392-980f-1e6c0dd2267e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\"" Oct 
13 05:30:19.200164 containerd[1712]: time="2025-10-13T05:30:19.199327114Z" level=info msg="connecting to shim 981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d" address="unix:///run/containerd/s/28127620d09611319706e4b798dbd8f9d66ecff5ba93d43f59edb7b42c89894d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:19.235866 systemd-networkd[1601]: cali36f3226cd08: Link UP Oct 13 05:30:19.236024 systemd[1]: Started cri-containerd-981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d.scope - libcontainer container 981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d. Oct 13 05:30:19.236046 systemd-networkd[1601]: cali36f3226cd08: Gained carrier Oct 13 05:30:19.251145 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:18.878 [INFO][4632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0 calico-apiserver-7b54bbbd77- calico-apiserver 2f6929be-4c04-40a6-9980-63b9680f877e 818 0 2025-10-13 05:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b54bbbd77 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b54bbbd77-6z8xp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36f3226cd08 [] [] }} ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:18.878 [INFO][4632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:18.960 [INFO][4677] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:18.960 [INFO][4677] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b54bbbd77-6z8xp", "timestamp":"2025-10-13 05:30:18.960385069 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:18.960 [INFO][4677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.116 [INFO][4677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.116 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.188 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.196 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.202 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.204 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.207 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.208 [INFO][4677] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.210 [INFO][4677] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5 Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.216 [INFO][4677] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.223 [INFO][4677] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.223 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" host="localhost" Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.223 [INFO][4677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:30:19.257620 containerd[1712]: 2025-10-13 05:30:19.224 [INFO][4677] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:19.261065 containerd[1712]: 2025-10-13 05:30:19.228 [INFO][4632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0", GenerateName:"calico-apiserver-7b54bbbd77-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f6929be-4c04-40a6-9980-63b9680f877e", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b54bbbd77", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b54bbbd77-6z8xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36f3226cd08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.261065 containerd[1712]: 2025-10-13 05:30:19.228 [INFO][4632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:19.261065 containerd[1712]: 2025-10-13 05:30:19.228 [INFO][4632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36f3226cd08 ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:19.261065 containerd[1712]: 2025-10-13 05:30:19.235 [INFO][4632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:19.261065 containerd[1712]: 2025-10-13 05:30:19.240 [INFO][4632] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0", GenerateName:"calico-apiserver-7b54bbbd77-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f6929be-4c04-40a6-9980-63b9680f877e", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b54bbbd77", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5", Pod:"calico-apiserver-7b54bbbd77-6z8xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36f3226cd08", MAC:"7e:63:b9:01:68:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.261065 containerd[1712]: 2025-10-13 05:30:19.254 [INFO][4632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Namespace="calico-apiserver" Pod="calico-apiserver-7b54bbbd77-6z8xp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:19.291391 containerd[1712]: time="2025-10-13T05:30:19.291325984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xchb4,Uid:8143b661-4bb6-4b78-811c-a23e5a23b192,Namespace:kube-system,Attempt:0,} returns sandbox id \"981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d\"" Oct 13 05:30:19.304087 containerd[1712]: time="2025-10-13T05:30:19.303984783Z" level=info msg="connecting to shim 9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" address="unix:///run/containerd/s/fc8af71d27318ae156b6f8e00481219036b40f9b460332acbb0923083fda8fa7" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:19.310000 containerd[1712]: time="2025-10-13T05:30:19.309965456Z" level=info msg="CreateContainer within sandbox \"981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:30:19.323481 containerd[1712]: time="2025-10-13T05:30:19.323441933Z" level=info msg="Container 9ff0fc365403d0219218059e65c88ae37805937d14fa44ef69fcd3dae8fde71d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:19.328207 containerd[1712]: time="2025-10-13T05:30:19.328177578Z" level=info msg="CreateContainer within sandbox \"981b1a4f34df542f1e03ccc97e1fea7bfac5918023cd8e32bb82c24c0f92be5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ff0fc365403d0219218059e65c88ae37805937d14fa44ef69fcd3dae8fde71d\"" Oct 13 05:30:19.328603 containerd[1712]: time="2025-10-13T05:30:19.328570776Z" level=info msg="StartContainer for \"9ff0fc365403d0219218059e65c88ae37805937d14fa44ef69fcd3dae8fde71d\"" Oct 13 05:30:19.329196 systemd[1]: Started cri-containerd-9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5.scope - 
libcontainer container 9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5. Oct 13 05:30:19.330877 containerd[1712]: time="2025-10-13T05:30:19.330542487Z" level=info msg="connecting to shim 9ff0fc365403d0219218059e65c88ae37805937d14fa44ef69fcd3dae8fde71d" address="unix:///run/containerd/s/28127620d09611319706e4b798dbd8f9d66ecff5ba93d43f59edb7b42c89894d" protocol=ttrpc version=3 Oct 13 05:30:19.352054 systemd[1]: Started cri-containerd-9ff0fc365403d0219218059e65c88ae37805937d14fa44ef69fcd3dae8fde71d.scope - libcontainer container 9ff0fc365403d0219218059e65c88ae37805937d14fa44ef69fcd3dae8fde71d. Oct 13 05:30:19.355800 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:19.383854 containerd[1712]: time="2025-10-13T05:30:19.383827804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b54bbbd77-6z8xp,Uid:2f6929be-4c04-40a6-9980-63b9680f877e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\"" Oct 13 05:30:19.417114 containerd[1712]: time="2025-10-13T05:30:19.417074390Z" level=info msg="StartContainer for \"9ff0fc365403d0219218059e65c88ae37805937d14fa44ef69fcd3dae8fde71d\" returns successfully" Oct 13 05:30:19.602996 systemd-networkd[1601]: calic9bd72dc9e5: Gained IPv6LL Oct 13 05:30:19.783704 containerd[1712]: time="2025-10-13T05:30:19.783678583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-tpgxs,Uid:3728c3ed-8ec9-4e59-b611-2dd20832e7d1,Namespace:calico-system,Attempt:0,}" Oct 13 05:30:19.928205 systemd-networkd[1601]: cali317f9654746: Link UP Oct 13 05:30:19.930766 systemd-networkd[1601]: cali317f9654746: Gained carrier Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.847 [INFO][4948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-goldmane--854f97d977--tpgxs-eth0 goldmane-854f97d977- calico-system 3728c3ed-8ec9-4e59-b611-2dd20832e7d1 824 0 2025-10-13 05:29:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-854f97d977-tpgxs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali317f9654746 [] [] }} ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.847 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.866 [INFO][4961] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" HandleID="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Workload="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.866 [INFO][4961] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" HandleID="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Workload="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-854f97d977-tpgxs", "timestamp":"2025-10-13 
05:30:19.866508907 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.866 [INFO][4961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.866 [INFO][4961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.866 [INFO][4961] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.871 [INFO][4961] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.874 [INFO][4961] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.885 [INFO][4961] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.886 [INFO][4961] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.887 [INFO][4961] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.888 [INFO][4961] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.888 [INFO][4961] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15 Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.895 [INFO][4961] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.921 [INFO][4961] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.921 [INFO][4961] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" host="localhost" Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.921 [INFO][4961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:30:19.948742 containerd[1712]: 2025-10-13 05:30:19.921 [INFO][4961] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" HandleID="k8s-pod-network.9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Workload="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" Oct 13 05:30:19.949307 containerd[1712]: 2025-10-13 05:30:19.923 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--tpgxs-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"3728c3ed-8ec9-4e59-b611-2dd20832e7d1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-854f97d977-tpgxs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali317f9654746", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.949307 containerd[1712]: 2025-10-13 05:30:19.923 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" Oct 13 05:30:19.949307 containerd[1712]: 2025-10-13 05:30:19.923 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali317f9654746 ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" Oct 13 05:30:19.949307 containerd[1712]: 2025-10-13 05:30:19.935 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" Oct 13 05:30:19.949307 containerd[1712]: 2025-10-13 05:30:19.936 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--tpgxs-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"3728c3ed-8ec9-4e59-b611-2dd20832e7d1", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 53, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15", Pod:"goldmane-854f97d977-tpgxs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali317f9654746", MAC:"ee:c7:19:0c:24:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:19.949307 containerd[1712]: 2025-10-13 05:30:19.945 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" Namespace="calico-system" Pod="goldmane-854f97d977-tpgxs" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--tpgxs-eth0" Oct 13 05:30:19.980501 containerd[1712]: time="2025-10-13T05:30:19.980155728Z" level=info msg="connecting to shim 9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15" address="unix:///run/containerd/s/dac1bd0a1263d13cc596c66e0f255af6f3c2e06b7b08ccbd2ede414ec5fbe131" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:20.006051 systemd[1]: Started cri-containerd-9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15.scope - libcontainer container 9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15. 
Oct 13 05:30:20.020449 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:20.058823 containerd[1712]: time="2025-10-13T05:30:20.058507093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-tpgxs,Uid:3728c3ed-8ec9-4e59-b611-2dd20832e7d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15\"" Oct 13 05:30:20.691047 systemd-networkd[1601]: calif8433d3262c: Gained IPv6LL Oct 13 05:30:20.691478 systemd-networkd[1601]: cali36f3226cd08: Gained IPv6LL Oct 13 05:30:20.777367 containerd[1712]: time="2025-10-13T05:30:20.777182234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8kxpz,Uid:60ae1c96-5003-41fe-94bd-83bde0120109,Namespace:kube-system,Attempt:0,}" Oct 13 05:30:20.819062 systemd-networkd[1601]: cali094efdb6f51: Gained IPv6LL Oct 13 05:30:20.883017 systemd-networkd[1601]: calibf4f052c74b: Gained IPv6LL Oct 13 05:30:21.065934 kubelet[3010]: I1013 05:30:21.065514 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xchb4" podStartSLOduration=39.065499638 podStartE2EDuration="39.065499638s" podCreationTimestamp="2025-10-13 05:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:30:20.117601073 +0000 UTC m=+44.649313889" watchObservedRunningTime="2025-10-13 05:30:21.065499638 +0000 UTC m=+45.597212448" Oct 13 05:30:21.205996 systemd-networkd[1601]: calib5b42ac3f24: Link UP Oct 13 05:30:21.206717 systemd-networkd[1601]: calib5b42ac3f24: Gained carrier Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.008 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--8kxpz-eth0 coredns-66bc5c9577- kube-system 
60ae1c96-5003-41fe-94bd-83bde0120109 821 0 2025-10-13 05:29:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-8kxpz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib5b42ac3f24 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.008 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.125 [INFO][5050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" HandleID="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Workload="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.125 [INFO][5050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" HandleID="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Workload="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003323d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-8kxpz", "timestamp":"2025-10-13 05:30:21.125533911 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.125 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.125 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.125 [INFO][5050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.131 [INFO][5050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.142 [INFO][5050] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.154 [INFO][5050] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.158 [INFO][5050] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.160 [INFO][5050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.160 [INFO][5050] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.162 [INFO][5050] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99 Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.165 [INFO][5050] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.177 [INFO][5050] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.177 [INFO][5050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" host="localhost" Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.177 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:30:21.226910 containerd[1712]: 2025-10-13 05:30:21.177 [INFO][5050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" HandleID="k8s-pod-network.20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Workload="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" Oct 13 05:30:21.230687 containerd[1712]: 2025-10-13 05:30:21.188 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8kxpz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"60ae1c96-5003-41fe-94bd-83bde0120109", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-8kxpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5b42ac3f24", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:21.230687 containerd[1712]: 2025-10-13 05:30:21.197 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" Oct 13 05:30:21.230687 containerd[1712]: 2025-10-13 05:30:21.202 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5b42ac3f24 ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" Oct 13 05:30:21.230687 containerd[1712]: 2025-10-13 05:30:21.207 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" Oct 13 05:30:21.230687 containerd[1712]: 2025-10-13 05:30:21.207 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--8kxpz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"60ae1c96-5003-41fe-94bd-83bde0120109", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99", Pod:"coredns-66bc5c9577-8kxpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5b42ac3f24", MAC:"d6:92:bc:04:34:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:21.230687 containerd[1712]: 2025-10-13 05:30:21.222 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" Namespace="kube-system" Pod="coredns-66bc5c9577-8kxpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--8kxpz-eth0" Oct 13 05:30:21.271890 containerd[1712]: time="2025-10-13T05:30:21.271853865Z" level=info msg="connecting to shim 20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99" address="unix:///run/containerd/s/1fd3f864f8baf3e85f90bde01eefdae207dd1ea23815bada26899b47abef08d1" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:21.305339 systemd[1]: Started cri-containerd-20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99.scope - libcontainer container 20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99. 
Oct 13 05:30:21.321585 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:21.355138 containerd[1712]: time="2025-10-13T05:30:21.355113329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8kxpz,Uid:60ae1c96-5003-41fe-94bd-83bde0120109,Namespace:kube-system,Attempt:0,} returns sandbox id \"20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99\"" Oct 13 05:30:21.410236 containerd[1712]: time="2025-10-13T05:30:21.410208670Z" level=info msg="CreateContainer within sandbox \"20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:30:21.423408 containerd[1712]: time="2025-10-13T05:30:21.423359457Z" level=info msg="Container b7629d925f22db967aa2f0c46a6a8d99300856a1d19c6c750bc2dbf02e6c25b6: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:21.429276 containerd[1712]: time="2025-10-13T05:30:21.429245478Z" level=info msg="CreateContainer within sandbox \"20c00cca1a26f49ad498c44e278546416d862a6e6591cbffd71145d7ac5b1f99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b7629d925f22db967aa2f0c46a6a8d99300856a1d19c6c750bc2dbf02e6c25b6\"" Oct 13 05:30:21.474927 containerd[1712]: time="2025-10-13T05:30:21.474522269Z" level=info msg="StartContainer for \"b7629d925f22db967aa2f0c46a6a8d99300856a1d19c6c750bc2dbf02e6c25b6\"" Oct 13 05:30:21.475078 containerd[1712]: time="2025-10-13T05:30:21.475060477Z" level=info msg="connecting to shim b7629d925f22db967aa2f0c46a6a8d99300856a1d19c6c750bc2dbf02e6c25b6" address="unix:///run/containerd/s/1fd3f864f8baf3e85f90bde01eefdae207dd1ea23815bada26899b47abef08d1" protocol=ttrpc version=3 Oct 13 05:30:21.491123 systemd[1]: Started cri-containerd-b7629d925f22db967aa2f0c46a6a8d99300856a1d19c6c750bc2dbf02e6c25b6.scope - libcontainer container b7629d925f22db967aa2f0c46a6a8d99300856a1d19c6c750bc2dbf02e6c25b6. 
Oct 13 05:30:21.517365 containerd[1712]: time="2025-10-13T05:30:21.517338574Z" level=info msg="StartContainer for \"b7629d925f22db967aa2f0c46a6a8d99300856a1d19c6c750bc2dbf02e6c25b6\" returns successfully" Oct 13 05:30:21.527848 containerd[1712]: time="2025-10-13T05:30:21.527820723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:21.528090 containerd[1712]: time="2025-10-13T05:30:21.528072370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:30:21.528510 containerd[1712]: time="2025-10-13T05:30:21.528495402Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:21.529913 containerd[1712]: time="2025-10-13T05:30:21.529884775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:21.530656 containerd[1712]: time="2025-10-13T05:30:21.530589562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.647104856s" Oct 13 05:30:21.530656 containerd[1712]: time="2025-10-13T05:30:21.530608319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:30:21.536577 containerd[1712]: time="2025-10-13T05:30:21.536556345Z" 
level=info msg="CreateContainer within sandbox \"b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:30:21.539999 containerd[1712]: time="2025-10-13T05:30:21.539978109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:30:21.541934 containerd[1712]: time="2025-10-13T05:30:21.541917257Z" level=info msg="Container d1bafce000d45f0bae93a3884b21f5f79edd8f7d615de4a753926b81bb09712a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:21.547470 containerd[1712]: time="2025-10-13T05:30:21.547442716Z" level=info msg="CreateContainer within sandbox \"b620e4ec038aefd66ebe47dc03f3d458b3bbf5523aee7404ae57efc4963bb9e4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d1bafce000d45f0bae93a3884b21f5f79edd8f7d615de4a753926b81bb09712a\"" Oct 13 05:30:21.548297 containerd[1712]: time="2025-10-13T05:30:21.548279678Z" level=info msg="StartContainer for \"d1bafce000d45f0bae93a3884b21f5f79edd8f7d615de4a753926b81bb09712a\"" Oct 13 05:30:21.550862 containerd[1712]: time="2025-10-13T05:30:21.550832519Z" level=info msg="connecting to shim d1bafce000d45f0bae93a3884b21f5f79edd8f7d615de4a753926b81bb09712a" address="unix:///run/containerd/s/ee60d0a2aaae82778092e1297b0b67bad0618168e038aad6f445507a81c5b674" protocol=ttrpc version=3 Oct 13 05:30:21.573176 systemd[1]: Started cri-containerd-d1bafce000d45f0bae93a3884b21f5f79edd8f7d615de4a753926b81bb09712a.scope - libcontainer container d1bafce000d45f0bae93a3884b21f5f79edd8f7d615de4a753926b81bb09712a. 
Oct 13 05:30:21.614105 containerd[1712]: time="2025-10-13T05:30:21.614081958Z" level=info msg="StartContainer for \"d1bafce000d45f0bae93a3884b21f5f79edd8f7d615de4a753926b81bb09712a\" returns successfully" Oct 13 05:30:21.651098 systemd-networkd[1601]: cali317f9654746: Gained IPv6LL Oct 13 05:30:21.947714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325332160.mount: Deactivated successfully. Oct 13 05:30:21.947773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3204906983.mount: Deactivated successfully. Oct 13 05:30:22.057988 kubelet[3010]: I1013 05:30:22.057923 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8kxpz" podStartSLOduration=40.057889561 podStartE2EDuration="40.057889561s" podCreationTimestamp="2025-10-13 05:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:30:22.057257869 +0000 UTC m=+46.588970685" watchObservedRunningTime="2025-10-13 05:30:22.057889561 +0000 UTC m=+46.589602378" Oct 13 05:30:22.204750 kubelet[3010]: I1013 05:30:22.204649 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54c669fd76-gsz4j" podStartSLOduration=2.549716505 podStartE2EDuration="8.204637264s" podCreationTimestamp="2025-10-13 05:30:14 +0000 UTC" firstStartedPulling="2025-10-13 05:30:15.876401144 +0000 UTC m=+40.408113950" lastFinishedPulling="2025-10-13 05:30:21.531321903 +0000 UTC m=+46.063034709" observedRunningTime="2025-10-13 05:30:22.204403666 +0000 UTC m=+46.736116483" watchObservedRunningTime="2025-10-13 05:30:22.204637264 +0000 UTC m=+46.736350074" Oct 13 05:30:22.995046 systemd-networkd[1601]: calib5b42ac3f24: Gained IPv6LL Oct 13 05:30:23.494915 containerd[1712]: time="2025-10-13T05:30:23.494581794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Oct 13 05:30:23.495577 containerd[1712]: time="2025-10-13T05:30:23.495560282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:30:23.496197 containerd[1712]: time="2025-10-13T05:30:23.496178287Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:23.497539 containerd[1712]: time="2025-10-13T05:30:23.497522096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:23.498328 containerd[1712]: time="2025-10-13T05:30:23.498308175Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.958307522s" Oct 13 05:30:23.498365 containerd[1712]: time="2025-10-13T05:30:23.498344134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:30:23.498994 containerd[1712]: time="2025-10-13T05:30:23.498972745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:30:23.502775 containerd[1712]: time="2025-10-13T05:30:23.502758377Z" level=info msg="CreateContainer within sandbox \"c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:30:23.516270 containerd[1712]: time="2025-10-13T05:30:23.516242374Z" level=info msg="Container 
4ac9c4bec465281f1279ef2268f8a0df386b64d4a44aef50ecdd24617116877a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:23.518502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1734019227.mount: Deactivated successfully. Oct 13 05:30:23.526626 containerd[1712]: time="2025-10-13T05:30:23.526579529Z" level=info msg="CreateContainer within sandbox \"c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4ac9c4bec465281f1279ef2268f8a0df386b64d4a44aef50ecdd24617116877a\"" Oct 13 05:30:23.527119 containerd[1712]: time="2025-10-13T05:30:23.527091571Z" level=info msg="StartContainer for \"4ac9c4bec465281f1279ef2268f8a0df386b64d4a44aef50ecdd24617116877a\"" Oct 13 05:30:23.529660 containerd[1712]: time="2025-10-13T05:30:23.529623679Z" level=info msg="connecting to shim 4ac9c4bec465281f1279ef2268f8a0df386b64d4a44aef50ecdd24617116877a" address="unix:///run/containerd/s/8c9e7f9bb37fe3e636081fe8f4d39f737711559dc980e9b367938531858e4fa8" protocol=ttrpc version=3 Oct 13 05:30:23.547023 systemd[1]: Started cri-containerd-4ac9c4bec465281f1279ef2268f8a0df386b64d4a44aef50ecdd24617116877a.scope - libcontainer container 4ac9c4bec465281f1279ef2268f8a0df386b64d4a44aef50ecdd24617116877a. 
Oct 13 05:30:23.582141 containerd[1712]: time="2025-10-13T05:30:23.582045128Z" level=info msg="StartContainer for \"4ac9c4bec465281f1279ef2268f8a0df386b64d4a44aef50ecdd24617116877a\" returns successfully" Oct 13 05:30:26.898522 containerd[1712]: time="2025-10-13T05:30:26.898487028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:26.941200 containerd[1712]: time="2025-10-13T05:30:26.941164339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:30:26.971784 containerd[1712]: time="2025-10-13T05:30:26.971750866Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:27.025432 containerd[1712]: time="2025-10-13T05:30:27.025391590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:27.025787 containerd[1712]: time="2025-10-13T05:30:27.025681025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.526687781s" Oct 13 05:30:27.025787 containerd[1712]: time="2025-10-13T05:30:27.025699076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:30:27.033805 containerd[1712]: time="2025-10-13T05:30:27.026279631Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:30:27.290576 containerd[1712]: time="2025-10-13T05:30:27.290486212Z" level=info msg="CreateContainer within sandbox \"0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:30:27.616255 containerd[1712]: time="2025-10-13T05:30:27.616190689Z" level=info msg="Container cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:27.619537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147415462.mount: Deactivated successfully. Oct 13 05:30:27.679453 containerd[1712]: time="2025-10-13T05:30:27.679423726Z" level=info msg="CreateContainer within sandbox \"0e044d289a4ebfffd68d555ba91d0460cc36cf60a6b3d454e690119c24637d64\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed\"" Oct 13 05:30:27.679886 containerd[1712]: time="2025-10-13T05:30:27.679808630Z" level=info msg="StartContainer for \"cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed\"" Oct 13 05:30:27.681300 containerd[1712]: time="2025-10-13T05:30:27.681270197Z" level=info msg="connecting to shim cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed" address="unix:///run/containerd/s/b5d9f6d8b9b2050835214a4646bf47d734d085cb823963222c0b4dfa4c65604c" protocol=ttrpc version=3 Oct 13 05:30:27.772984 systemd[1]: Started cri-containerd-cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed.scope - libcontainer container cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed. 
Oct 13 05:30:27.833424 containerd[1712]: time="2025-10-13T05:30:27.833392950Z" level=info msg="StartContainer for \"cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed\" returns successfully" Oct 13 05:30:28.253109 containerd[1712]: time="2025-10-13T05:30:28.253078954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed\" id:\"b45d6ddac351638b3410e55f6c762715471be552e26c7eb358d48297e2c7421d\" pid:5286 exited_at:{seconds:1760333428 nanos:244554502}" Oct 13 05:30:28.259853 kubelet[3010]: I1013 05:30:28.259815 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84c44459f-h6qsm" podStartSLOduration=25.356959902 podStartE2EDuration="34.25980162s" podCreationTimestamp="2025-10-13 05:29:54 +0000 UTC" firstStartedPulling="2025-10-13 05:30:18.123385973 +0000 UTC m=+42.655098778" lastFinishedPulling="2025-10-13 05:30:27.026227691 +0000 UTC m=+51.557940496" observedRunningTime="2025-10-13 05:30:28.154599994 +0000 UTC m=+52.686312810" watchObservedRunningTime="2025-10-13 05:30:28.25980162 +0000 UTC m=+52.791514437" Oct 13 05:30:30.629939 containerd[1712]: time="2025-10-13T05:30:30.629576987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:30.635133 containerd[1712]: time="2025-10-13T05:30:30.635104716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:30:30.646298 containerd[1712]: time="2025-10-13T05:30:30.646268988Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:30.651961 containerd[1712]: time="2025-10-13T05:30:30.651921320Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:30.652359 containerd[1712]: time="2025-10-13T05:30:30.652221455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.625922858s" Oct 13 05:30:30.652359 containerd[1712]: time="2025-10-13T05:30:30.652245372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:30:30.653006 containerd[1712]: time="2025-10-13T05:30:30.652993895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:30:30.664194 containerd[1712]: time="2025-10-13T05:30:30.663522745Z" level=info msg="CreateContainer within sandbox \"2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:30:30.669285 containerd[1712]: time="2025-10-13T05:30:30.668588103Z" level=info msg="Container 02513ea2861adf6c75bd13dc58369fc7167040d63bf64d404fd1197ff55165eb: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:30.702707 containerd[1712]: time="2025-10-13T05:30:30.702685730Z" level=info msg="CreateContainer within sandbox \"2c1f137f80333f268f770073d18b9cff646ed2182a8e029e4758d9db7f3ae645\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"02513ea2861adf6c75bd13dc58369fc7167040d63bf64d404fd1197ff55165eb\"" Oct 13 05:30:30.704418 containerd[1712]: time="2025-10-13T05:30:30.703295619Z" level=info msg="StartContainer for 
\"02513ea2861adf6c75bd13dc58369fc7167040d63bf64d404fd1197ff55165eb\"" Oct 13 05:30:30.704666 containerd[1712]: time="2025-10-13T05:30:30.704647209Z" level=info msg="connecting to shim 02513ea2861adf6c75bd13dc58369fc7167040d63bf64d404fd1197ff55165eb" address="unix:///run/containerd/s/0eb69fedae2263536566f04e98f874cc70d34ed4e5c37e7f2d9ea5bc14e3e3c0" protocol=ttrpc version=3 Oct 13 05:30:30.725975 systemd[1]: Started cri-containerd-02513ea2861adf6c75bd13dc58369fc7167040d63bf64d404fd1197ff55165eb.scope - libcontainer container 02513ea2861adf6c75bd13dc58369fc7167040d63bf64d404fd1197ff55165eb. Oct 13 05:30:30.770672 containerd[1712]: time="2025-10-13T05:30:30.770640546Z" level=info msg="StartContainer for \"02513ea2861adf6c75bd13dc58369fc7167040d63bf64d404fd1197ff55165eb\" returns successfully" Oct 13 05:30:31.087018 containerd[1712]: time="2025-10-13T05:30:31.086966701Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:31.087341 containerd[1712]: time="2025-10-13T05:30:31.087321042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:30:31.088738 containerd[1712]: time="2025-10-13T05:30:31.088664043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 435.616583ms" Oct 13 05:30:31.088738 containerd[1712]: time="2025-10-13T05:30:31.088682802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:30:31.089630 containerd[1712]: time="2025-10-13T05:30:31.089391701Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:30:31.103724 containerd[1712]: time="2025-10-13T05:30:31.103700016Z" level=info msg="CreateContainer within sandbox \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:30:31.106676 containerd[1712]: time="2025-10-13T05:30:31.106658236Z" level=info msg="Container 57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:31.122803 containerd[1712]: time="2025-10-13T05:30:31.122728101Z" level=info msg="CreateContainer within sandbox \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\"" Oct 13 05:30:31.123851 containerd[1712]: time="2025-10-13T05:30:31.123178450Z" level=info msg="StartContainer for \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\"" Oct 13 05:30:31.123851 containerd[1712]: time="2025-10-13T05:30:31.123763807Z" level=info msg="connecting to shim 57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa" address="unix:///run/containerd/s/ca24318a09c69cee6bdec8ad637e0e61e36904e99fe4f7a951198a61b5f9c53d" protocol=ttrpc version=3 Oct 13 05:30:31.137997 systemd[1]: Started cri-containerd-57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa.scope - libcontainer container 57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa. 
Oct 13 05:30:31.246457 containerd[1712]: time="2025-10-13T05:30:31.246050033Z" level=info msg="StartContainer for \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" returns successfully" Oct 13 05:30:31.517167 containerd[1712]: time="2025-10-13T05:30:31.517100525Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:31.517915 containerd[1712]: time="2025-10-13T05:30:31.517897761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:30:31.518748 containerd[1712]: time="2025-10-13T05:30:31.518730926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 429.324768ms" Oct 13 05:30:31.520262 containerd[1712]: time="2025-10-13T05:30:31.518751032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:30:31.520262 containerd[1712]: time="2025-10-13T05:30:31.520133364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:30:31.535621 containerd[1712]: time="2025-10-13T05:30:31.535050783Z" level=info msg="CreateContainer within sandbox \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:30:31.537905 containerd[1712]: time="2025-10-13T05:30:31.537883529Z" level=info msg="Container f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:31.556920 containerd[1712]: 
time="2025-10-13T05:30:31.556898771Z" level=info msg="CreateContainer within sandbox \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\"" Oct 13 05:30:31.557582 containerd[1712]: time="2025-10-13T05:30:31.557409709Z" level=info msg="StartContainer for \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\"" Oct 13 05:30:31.559903 containerd[1712]: time="2025-10-13T05:30:31.558710377Z" level=info msg="connecting to shim f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6" address="unix:///run/containerd/s/fc8af71d27318ae156b6f8e00481219036b40f9b460332acbb0923083fda8fa7" protocol=ttrpc version=3 Oct 13 05:30:31.581823 systemd[1]: Started cri-containerd-f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6.scope - libcontainer container f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6. 
Oct 13 05:30:31.678918 containerd[1712]: time="2025-10-13T05:30:31.678879017Z" level=info msg="StartContainer for \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" returns successfully" Oct 13 05:30:32.254351 kubelet[3010]: I1013 05:30:32.253712 3010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:30:32.262687 kubelet[3010]: I1013 05:30:32.262642 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c475f7844-mpl66" podStartSLOduration=29.778172696 podStartE2EDuration="41.262630166s" podCreationTimestamp="2025-10-13 05:29:51 +0000 UTC" firstStartedPulling="2025-10-13 05:30:19.168414751 +0000 UTC m=+43.700127556" lastFinishedPulling="2025-10-13 05:30:30.65287222 +0000 UTC m=+55.184585026" observedRunningTime="2025-10-13 05:30:31.200612057 +0000 UTC m=+55.732324881" watchObservedRunningTime="2025-10-13 05:30:32.262630166 +0000 UTC m=+56.794342978" Oct 13 05:30:32.262819 kubelet[3010]: I1013 05:30:32.262707 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b54bbbd77-rpbzl" podStartSLOduration=29.352641029 podStartE2EDuration="41.262703815s" podCreationTimestamp="2025-10-13 05:29:51 +0000 UTC" firstStartedPulling="2025-10-13 05:30:19.179081682 +0000 UTC m=+43.710794487" lastFinishedPulling="2025-10-13 05:30:31.089144468 +0000 UTC m=+55.620857273" observedRunningTime="2025-10-13 05:30:32.228962278 +0000 UTC m=+56.760675094" watchObservedRunningTime="2025-10-13 05:30:32.262703815 +0000 UTC m=+56.794416627" Oct 13 05:30:32.266756 kubelet[3010]: I1013 05:30:32.266171 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b54bbbd77-6z8xp" podStartSLOduration=29.130951643 podStartE2EDuration="41.266158757s" podCreationTimestamp="2025-10-13 05:29:51 +0000 UTC" firstStartedPulling="2025-10-13 05:30:19.384567881 +0000 UTC m=+43.916280687" 
lastFinishedPulling="2025-10-13 05:30:31.519774996 +0000 UTC m=+56.051487801" observedRunningTime="2025-10-13 05:30:32.265589486 +0000 UTC m=+56.797302303" watchObservedRunningTime="2025-10-13 05:30:32.266158757 +0000 UTC m=+56.797871569" Oct 13 05:30:35.961495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491567576.mount: Deactivated successfully. Oct 13 05:30:36.961505 containerd[1712]: time="2025-10-13T05:30:36.961455579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:36.968259 containerd[1712]: time="2025-10-13T05:30:36.967413219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:30:37.006866 containerd[1712]: time="2025-10-13T05:30:37.006833974Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:37.008396 containerd[1712]: time="2025-10-13T05:30:37.008191102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:37.009804 containerd[1712]: time="2025-10-13T05:30:37.009781663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.488457836s" Oct 13 05:30:37.009804 containerd[1712]: time="2025-10-13T05:30:37.009802801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference 
\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:30:37.021923 containerd[1712]: time="2025-10-13T05:30:37.021757564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:30:37.153797 containerd[1712]: time="2025-10-13T05:30:37.153773781Z" level=info msg="CreateContainer within sandbox \"9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:30:37.239781 containerd[1712]: time="2025-10-13T05:30:37.239385343Z" level=info msg="Container bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:37.258814 containerd[1712]: time="2025-10-13T05:30:37.258781860Z" level=info msg="CreateContainer within sandbox \"9d86486d6dd3b9b7bc7244c696ebdb8178318424bdac3b2c7194cbc4518fac15\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\"" Oct 13 05:30:37.259690 containerd[1712]: time="2025-10-13T05:30:37.259666119Z" level=info msg="StartContainer for \"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\"" Oct 13 05:30:37.268287 containerd[1712]: time="2025-10-13T05:30:37.268259678Z" level=info msg="connecting to shim bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157" address="unix:///run/containerd/s/dac1bd0a1263d13cc596c66e0f255af6f3c2e06b7b08ccbd2ede414ec5fbe131" protocol=ttrpc version=3 Oct 13 05:30:37.379052 systemd[1]: Started cri-containerd-bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157.scope - libcontainer container bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157. 
Oct 13 05:30:37.470176 containerd[1712]: time="2025-10-13T05:30:37.470133506Z" level=info msg="StartContainer for \"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" returns successfully" Oct 13 05:30:37.808815 kubelet[3010]: I1013 05:30:37.795865 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-tpgxs" podStartSLOduration=27.797047386 podStartE2EDuration="44.755740963s" podCreationTimestamp="2025-10-13 05:29:53 +0000 UTC" firstStartedPulling="2025-10-13 05:30:20.059565444 +0000 UTC m=+44.591278249" lastFinishedPulling="2025-10-13 05:30:37.018259017 +0000 UTC m=+61.549971826" observedRunningTime="2025-10-13 05:30:37.610282381 +0000 UTC m=+62.141995192" watchObservedRunningTime="2025-10-13 05:30:37.755740963 +0000 UTC m=+62.287453775" Oct 13 05:30:37.892685 containerd[1712]: time="2025-10-13T05:30:37.892654597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" id:\"cbb7337b339637aae38e0289545d076a9dd053404ba60edc7ae7722635954763\" pid:5488 exit_status:1 exited_at:{seconds:1760333437 nanos:855323610}" Oct 13 05:30:38.731643 containerd[1712]: time="2025-10-13T05:30:38.731522821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" id:\"c43038afde090943ba4fd6098622e00131dd1f28b41f523ebc0e38225447c7ac\" pid:5523 exit_status:1 exited_at:{seconds:1760333438 nanos:730850466}" Oct 13 05:30:38.807716 kubelet[3010]: I1013 05:30:38.806634 3010 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:30:38.999289 containerd[1712]: time="2025-10-13T05:30:38.999205910Z" level=info msg="StopContainer for \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" with timeout 30 (s)" Oct 13 05:30:39.004339 containerd[1712]: time="2025-10-13T05:30:39.004224353Z" level=info msg="Stop container 
\"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" with signal terminated" Oct 13 05:30:39.018763 systemd[1]: cri-containerd-f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6.scope: Deactivated successfully. Oct 13 05:30:39.023096 containerd[1712]: time="2025-10-13T05:30:39.023073491Z" level=info msg="received exit event container_id:\"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" id:\"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" pid:5390 exit_status:1 exited_at:{seconds:1760333439 nanos:22719170}" Oct 13 05:30:39.023265 containerd[1712]: time="2025-10-13T05:30:39.023188979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" id:\"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" pid:5390 exit_status:1 exited_at:{seconds:1760333439 nanos:22719170}" Oct 13 05:30:39.088806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6-rootfs.mount: Deactivated successfully. Oct 13 05:30:39.149319 containerd[1712]: time="2025-10-13T05:30:39.149182138Z" level=info msg="StopContainer for \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" returns successfully" Oct 13 05:30:39.157583 containerd[1712]: time="2025-10-13T05:30:39.157545881Z" level=info msg="StopPodSandbox for \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\"" Oct 13 05:30:39.159361 containerd[1712]: time="2025-10-13T05:30:39.158466126Z" level=info msg="Container to stop \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:30:39.164379 systemd[1]: Created slice kubepods-besteffort-podd76ad80c_e1b6_49fb_b305_ab1129d4b6be.slice - libcontainer container kubepods-besteffort-podd76ad80c_e1b6_49fb_b305_ab1129d4b6be.slice. 
Oct 13 05:30:39.191671 systemd[1]: cri-containerd-9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5.scope: Deactivated successfully. Oct 13 05:30:39.198677 containerd[1712]: time="2025-10-13T05:30:39.198480030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" id:\"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" pid:4903 exit_status:137 exited_at:{seconds:1760333439 nanos:197277855}" Oct 13 05:30:39.202670 kubelet[3010]: I1013 05:30:39.202538 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d76ad80c-e1b6-49fb-b305-ab1129d4b6be-calico-apiserver-certs\") pod \"calico-apiserver-c475f7844-7fdqm\" (UID: \"d76ad80c-e1b6-49fb-b305-ab1129d4b6be\") " pod="calico-apiserver/calico-apiserver-c475f7844-7fdqm" Oct 13 05:30:39.202670 kubelet[3010]: I1013 05:30:39.202595 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb874\" (UniqueName: \"kubernetes.io/projected/d76ad80c-e1b6-49fb-b305-ab1129d4b6be-kube-api-access-jb874\") pod \"calico-apiserver-c475f7844-7fdqm\" (UID: \"d76ad80c-e1b6-49fb-b305-ab1129d4b6be\") " pod="calico-apiserver/calico-apiserver-c475f7844-7fdqm" Oct 13 05:30:39.227426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5-rootfs.mount: Deactivated successfully. 
Oct 13 05:30:39.254709 containerd[1712]: time="2025-10-13T05:30:39.254358222Z" level=info msg="shim disconnected" id=9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5 namespace=k8s.io Oct 13 05:30:39.254709 containerd[1712]: time="2025-10-13T05:30:39.254388331Z" level=warning msg="cleaning up after shim disconnected" id=9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5 namespace=k8s.io Oct 13 05:30:39.262909 containerd[1712]: time="2025-10-13T05:30:39.254396068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:30:39.379872 containerd[1712]: time="2025-10-13T05:30:39.379448041Z" level=info msg="received exit event sandbox_id:\"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" exit_status:137 exited_at:{seconds:1760333439 nanos:197277855}" Oct 13 05:30:39.388517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5-shm.mount: Deactivated successfully. Oct 13 05:30:39.474888 containerd[1712]: time="2025-10-13T05:30:39.474857541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c475f7844-7fdqm,Uid:d76ad80c-e1b6-49fb-b305-ab1129d4b6be,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:30:39.603155 kubelet[3010]: I1013 05:30:39.598082 3010 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Oct 13 05:30:39.928381 containerd[1712]: time="2025-10-13T05:30:39.928305626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" id:\"925c328f8876c688b73aecea2a840b3aa750f4fadd84bcfc685cb6bb47e79555\" pid:5638 exit_status:1 exited_at:{seconds:1760333439 nanos:928089843}" Oct 13 05:30:40.073077 systemd-networkd[1601]: cali36f3226cd08: Link DOWN Oct 13 05:30:40.073083 systemd-networkd[1601]: cali36f3226cd08: Lost carrier Oct 13 05:30:40.202560 
containerd[1712]: time="2025-10-13T05:30:40.202298316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:40.204210 containerd[1712]: time="2025-10-13T05:30:40.204175498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:30:40.204853 containerd[1712]: time="2025-10-13T05:30:40.204541453Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:40.205922 containerd[1712]: time="2025-10-13T05:30:40.205681939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:30:40.206137 containerd[1712]: time="2025-10-13T05:30:40.206090303Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.184301207s" Oct 13 05:30:40.206137 containerd[1712]: time="2025-10-13T05:30:40.206112344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:30:40.252057 containerd[1712]: time="2025-10-13T05:30:40.252024921Z" level=info msg="CreateContainer within sandbox \"c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" 
Oct 13 05:30:40.288061 containerd[1712]: time="2025-10-13T05:30:40.287820206Z" level=info msg="Container 1a7cb0f7c213ccc6edbea2d1ffef460cfcd68a23a01626acb6912cc728e18bb8: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:40.292780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521501983.mount: Deactivated successfully. Oct 13 05:30:40.304220 containerd[1712]: time="2025-10-13T05:30:40.304178613Z" level=info msg="CreateContainer within sandbox \"c338b50f434aaba4dd86bde39ae2b6d6d640d6164df7cf0d82e96edf510c8158\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a7cb0f7c213ccc6edbea2d1ffef460cfcd68a23a01626acb6912cc728e18bb8\"" Oct 13 05:30:40.307804 containerd[1712]: time="2025-10-13T05:30:40.307607746Z" level=info msg="StartContainer for \"1a7cb0f7c213ccc6edbea2d1ffef460cfcd68a23a01626acb6912cc728e18bb8\"" Oct 13 05:30:40.312681 containerd[1712]: time="2025-10-13T05:30:40.312647418Z" level=info msg="connecting to shim 1a7cb0f7c213ccc6edbea2d1ffef460cfcd68a23a01626acb6912cc728e18bb8" address="unix:///run/containerd/s/8c9e7f9bb37fe3e636081fe8f4d39f737711559dc980e9b367938531858e4fa8" protocol=ttrpc version=3 Oct 13 05:30:40.339018 systemd[1]: Started cri-containerd-1a7cb0f7c213ccc6edbea2d1ffef460cfcd68a23a01626acb6912cc728e18bb8.scope - libcontainer container 1a7cb0f7c213ccc6edbea2d1ffef460cfcd68a23a01626acb6912cc728e18bb8. Oct 13 05:30:40.375935 containerd[1712]: time="2025-10-13T05:30:40.375833203Z" level=info msg="StartContainer for \"1a7cb0f7c213ccc6edbea2d1ffef460cfcd68a23a01626acb6912cc728e18bb8\" returns successfully" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.067 [INFO][5613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.069 [INFO][5613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" iface="eth0" netns="/var/run/netns/cni-c71f7e16-2154-f6ba-ac3e-549128e22273" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.070 [INFO][5613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" iface="eth0" netns="/var/run/netns/cni-c71f7e16-2154-f6ba-ac3e-549128e22273" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.081 [INFO][5613] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" after=11.439284ms iface="eth0" netns="/var/run/netns/cni-c71f7e16-2154-f6ba-ac3e-549128e22273" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.081 [INFO][5613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.081 [INFO][5613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.460 [INFO][5662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.464 [INFO][5662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.465 [INFO][5662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.720 [INFO][5662] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.720 [INFO][5662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0" Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.723 [INFO][5662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:30:40.729383 containerd[1712]: 2025-10-13 05:30:40.725 [INFO][5613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Oct 13 05:30:40.738704 containerd[1712]: time="2025-10-13T05:30:40.732929793Z" level=info msg="TearDown network for sandbox \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" successfully" Oct 13 05:30:40.738704 containerd[1712]: time="2025-10-13T05:30:40.732952404Z" level=info msg="StopPodSandbox for \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" returns successfully" Oct 13 05:30:40.732881 systemd[1]: run-netns-cni\x2dc71f7e16\x2d2154\x2df6ba\x2dac3e\x2d549128e22273.mount: Deactivated successfully. 
Oct 13 05:30:40.784881 systemd-networkd[1601]: cali9b4821d4baf: Link UP Oct 13 05:30:40.785420 systemd-networkd[1601]: cali9b4821d4baf: Gained carrier Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.060 [INFO][5615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0 calico-apiserver-c475f7844- calico-apiserver d76ad80c-e1b6-49fb-b305-ab1129d4b6be 1109 0 2025-10-13 05:30:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c475f7844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c475f7844-7fdqm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b4821d4baf [] [] }} ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.065 [INFO][5615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.460 [INFO][5661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" HandleID="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Workload="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.464 [INFO][5661] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" HandleID="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Workload="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000313c30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c475f7844-7fdqm", "timestamp":"2025-10-13 05:30:40.460138806 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.464 [INFO][5661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.723 [INFO][5661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.724 [INFO][5661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.735 [INFO][5661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.747 [INFO][5661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.752 [INFO][5661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.754 [INFO][5661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.757 [INFO][5661] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.757 [INFO][5661] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.758 [INFO][5661] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.763 [INFO][5661] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.769 [INFO][5661] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.769 [INFO][5661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" host="localhost" Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.769 [INFO][5661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:30:40.810137 containerd[1712]: 2025-10-13 05:30:40.769 [INFO][5661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" HandleID="k8s-pod-network.fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Workload="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" Oct 13 05:30:40.814335 containerd[1712]: 2025-10-13 05:30:40.774 [INFO][5615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0", GenerateName:"calico-apiserver-c475f7844-", Namespace:"calico-apiserver", SelfLink:"", UID:"d76ad80c-e1b6-49fb-b305-ab1129d4b6be", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c475f7844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c475f7844-7fdqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4821d4baf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:40.814335 containerd[1712]: 2025-10-13 05:30:40.775 [INFO][5615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" Oct 13 05:30:40.814335 containerd[1712]: 2025-10-13 05:30:40.775 [INFO][5615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b4821d4baf ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" Oct 13 05:30:40.814335 containerd[1712]: 2025-10-13 05:30:40.791 [INFO][5615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" Oct 13 05:30:40.814335 containerd[1712]: 2025-10-13 05:30:40.793 [INFO][5615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0", GenerateName:"calico-apiserver-c475f7844-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"d76ad80c-e1b6-49fb-b305-ab1129d4b6be", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 30, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c475f7844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f", Pod:"calico-apiserver-c475f7844-7fdqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b4821d4baf", MAC:"72:bd:f6:c0:6e:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:30:40.814335 containerd[1712]: 2025-10-13 05:30:40.805 [INFO][5615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" Namespace="calico-apiserver" Pod="calico-apiserver-c475f7844-7fdqm" WorkloadEndpoint="localhost-k8s-calico--apiserver--c475f7844--7fdqm-eth0" Oct 13 05:30:40.885218 kubelet[3010]: I1013 05:30:40.885164 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s5gvc" podStartSLOduration=25.673446765 podStartE2EDuration="47.882028336s" podCreationTimestamp="2025-10-13 05:29:53 +0000 UTC" 
firstStartedPulling="2025-10-13 05:30:17.998013685 +0000 UTC m=+42.529726492" lastFinishedPulling="2025-10-13 05:30:40.206595258 +0000 UTC m=+64.738308063" observedRunningTime="2025-10-13 05:30:40.868253795 +0000 UTC m=+65.399966613" watchObservedRunningTime="2025-10-13 05:30:40.882028336 +0000 UTC m=+65.413741148" Oct 13 05:30:40.968420 containerd[1712]: time="2025-10-13T05:30:40.968277628Z" level=info msg="connecting to shim fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f" address="unix:///run/containerd/s/198afb1302c3a0a9dc1f7f613488457b41c44ecc874b4f772a67a65ea94ceb5e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:30:40.976527 kubelet[3010]: I1013 05:30:40.975940 3010 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qljk\" (UniqueName: \"kubernetes.io/projected/2f6929be-4c04-40a6-9980-63b9680f877e-kube-api-access-4qljk\") pod \"2f6929be-4c04-40a6-9980-63b9680f877e\" (UID: \"2f6929be-4c04-40a6-9980-63b9680f877e\") " Oct 13 05:30:40.976527 kubelet[3010]: I1013 05:30:40.975983 3010 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f6929be-4c04-40a6-9980-63b9680f877e-calico-apiserver-certs\") pod \"2f6929be-4c04-40a6-9980-63b9680f877e\" (UID: \"2f6929be-4c04-40a6-9980-63b9680f877e\") " Oct 13 05:30:40.997087 systemd[1]: Started cri-containerd-fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f.scope - libcontainer container fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f. 
Oct 13 05:30:41.011731 systemd-resolved[1363]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:30:41.064000 containerd[1712]: time="2025-10-13T05:30:41.063977985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c475f7844-7fdqm,Uid:d76ad80c-e1b6-49fb-b305-ab1129d4b6be,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f\"" Oct 13 05:30:41.148722 systemd[1]: var-lib-kubelet-pods-2f6929be\x2d4c04\x2d40a6\x2d9980\x2d63b9680f877e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4qljk.mount: Deactivated successfully. Oct 13 05:30:41.161029 systemd[1]: var-lib-kubelet-pods-2f6929be\x2d4c04\x2d40a6\x2d9980\x2d63b9680f877e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Oct 13 05:30:41.166044 kubelet[3010]: I1013 05:30:41.162083 3010 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6929be-4c04-40a6-9980-63b9680f877e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2f6929be-4c04-40a6-9980-63b9680f877e" (UID: "2f6929be-4c04-40a6-9980-63b9680f877e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:30:41.190613 kubelet[3010]: I1013 05:30:41.190578 3010 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6929be-4c04-40a6-9980-63b9680f877e-kube-api-access-4qljk" (OuterVolumeSpecName: "kube-api-access-4qljk") pod "2f6929be-4c04-40a6-9980-63b9680f877e" (UID: "2f6929be-4c04-40a6-9980-63b9680f877e"). InnerVolumeSpecName "kube-api-access-4qljk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:30:41.232159 containerd[1712]: time="2025-10-13T05:30:41.232137533Z" level=info msg="CreateContainer within sandbox \"fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:30:41.236868 kubelet[3010]: I1013 05:30:41.236849 3010 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f6929be-4c04-40a6-9980-63b9680f877e-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Oct 13 05:30:41.236868 kubelet[3010]: I1013 05:30:41.236864 3010 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4qljk\" (UniqueName: \"kubernetes.io/projected/2f6929be-4c04-40a6-9980-63b9680f877e-kube-api-access-4qljk\") on node \"localhost\" DevicePath \"\"" Oct 13 05:30:41.247574 containerd[1712]: time="2025-10-13T05:30:41.247518526Z" level=info msg="Container a1941eee6b9e39f8aec610aec7a81d56616ab366ef10b48479dd70360e8955f9: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:30:41.252340 containerd[1712]: time="2025-10-13T05:30:41.252300803Z" level=info msg="CreateContainer within sandbox \"fa12e9eb204f51e3d3dc0f6a720f567d1c7e62132938018f97645e9c36abdc7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a1941eee6b9e39f8aec610aec7a81d56616ab366ef10b48479dd70360e8955f9\"" Oct 13 05:30:41.252984 containerd[1712]: time="2025-10-13T05:30:41.252972399Z" level=info msg="StartContainer for \"a1941eee6b9e39f8aec610aec7a81d56616ab366ef10b48479dd70360e8955f9\"" Oct 13 05:30:41.254697 containerd[1712]: time="2025-10-13T05:30:41.254650070Z" level=info msg="connecting to shim a1941eee6b9e39f8aec610aec7a81d56616ab366ef10b48479dd70360e8955f9" address="unix:///run/containerd/s/198afb1302c3a0a9dc1f7f613488457b41c44ecc874b4f772a67a65ea94ceb5e" protocol=ttrpc version=3 Oct 13 05:30:41.267365 kubelet[3010]: I1013 05:30:41.267309 3010 
csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:30:41.268594 kubelet[3010]: I1013 05:30:41.268215 3010 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:30:41.282121 systemd[1]: Started cri-containerd-a1941eee6b9e39f8aec610aec7a81d56616ab366ef10b48479dd70360e8955f9.scope - libcontainer container a1941eee6b9e39f8aec610aec7a81d56616ab366ef10b48479dd70360e8955f9. Oct 13 05:30:41.334321 containerd[1712]: time="2025-10-13T05:30:41.334292896Z" level=info msg="StartContainer for \"a1941eee6b9e39f8aec610aec7a81d56616ab366ef10b48479dd70360e8955f9\" returns successfully" Oct 13 05:30:41.876599 systemd[1]: Removed slice kubepods-besteffort-pod2f6929be_4c04_40a6_9980_63b9680f877e.slice - libcontainer container kubepods-besteffort-pod2f6929be_4c04_40a6_9980_63b9680f877e.slice. 
Oct 13 05:30:41.891945 kubelet[3010]: I1013 05:30:41.891569 3010 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c475f7844-7fdqm" podStartSLOduration=3.891554534 podStartE2EDuration="3.891554534s" podCreationTimestamp="2025-10-13 05:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:30:41.869536175 +0000 UTC m=+66.401248991" watchObservedRunningTime="2025-10-13 05:30:41.891554534 +0000 UTC m=+66.423267346" Oct 13 05:30:41.922996 containerd[1712]: time="2025-10-13T05:30:41.922933258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" id:\"25cfa968d1cbeceb5b70a99d8c1c2c4aab4a870e5755b769eab9e9f0c0fb5cd6\" pid:5807 exited_at:{seconds:1760333441 nanos:914870693}" Oct 13 05:30:42.451037 systemd-networkd[1601]: cali9b4821d4baf: Gained IPv6LL Oct 13 05:30:44.028921 kubelet[3010]: I1013 05:30:44.028211 3010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6929be-4c04-40a6-9980-63b9680f877e" path="/var/lib/kubelet/pods/2f6929be-4c04-40a6-9980-63b9680f877e/volumes" Oct 13 05:30:44.299204 containerd[1712]: time="2025-10-13T05:30:44.298469906Z" level=info msg="StopContainer for \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" with timeout 30 (s)" Oct 13 05:30:44.355863 containerd[1712]: time="2025-10-13T05:30:44.355753081Z" level=info msg="Stop container \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" with signal terminated" Oct 13 05:30:44.449370 systemd[1]: cri-containerd-57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa.scope: Deactivated successfully. Oct 13 05:30:44.468133 systemd[1]: cri-containerd-57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa.scope: Consumed 1.465s CPU time, 68.6M memory peak, 14.1M read from disk. 
Oct 13 05:30:44.517630 containerd[1712]: time="2025-10-13T05:30:44.517544584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" id:\"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" pid:5351 exit_status:1 exited_at:{seconds:1760333444 nanos:460730936}" Oct 13 05:30:44.522114 containerd[1712]: time="2025-10-13T05:30:44.520851729Z" level=info msg="received exit event container_id:\"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" id:\"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" pid:5351 exit_status:1 exited_at:{seconds:1760333444 nanos:460730936}" Oct 13 05:30:44.566168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa-rootfs.mount: Deactivated successfully. Oct 13 05:30:44.656770 containerd[1712]: time="2025-10-13T05:30:44.656733192Z" level=info msg="StopContainer for \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" returns successfully" Oct 13 05:30:44.661928 containerd[1712]: time="2025-10-13T05:30:44.661901405Z" level=info msg="StopPodSandbox for \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\"" Oct 13 05:30:44.663008 containerd[1712]: time="2025-10-13T05:30:44.662987831Z" level=info msg="Container to stop \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:30:44.680366 systemd[1]: cri-containerd-26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8.scope: Deactivated successfully. 
Oct 13 05:30:44.682129 containerd[1712]: time="2025-10-13T05:30:44.681792462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" id:\"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" pid:4785 exit_status:137 exited_at:{seconds:1760333444 nanos:681617780}" Oct 13 05:30:44.702258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8-rootfs.mount: Deactivated successfully. Oct 13 05:30:44.703738 containerd[1712]: time="2025-10-13T05:30:44.703711244Z" level=info msg="shim disconnected" id=26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8 namespace=k8s.io Oct 13 05:30:44.705231 containerd[1712]: time="2025-10-13T05:30:44.703837979Z" level=warning msg="cleaning up after shim disconnected" id=26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8 namespace=k8s.io Oct 13 05:30:44.711344 containerd[1712]: time="2025-10-13T05:30:44.703846529Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:30:44.755359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8-shm.mount: Deactivated successfully. 
Oct 13 05:30:44.762885 containerd[1712]: time="2025-10-13T05:30:44.762721173Z" level=info msg="received exit event sandbox_id:\"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" exit_status:137 exited_at:{seconds:1760333444 nanos:681617780}" Oct 13 05:30:44.888411 systemd-networkd[1601]: calibf4f052c74b: Link DOWN Oct 13 05:30:44.888602 systemd-networkd[1601]: calibf4f052c74b: Lost carrier Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.885 [INFO][5903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.886 [INFO][5903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" iface="eth0" netns="/var/run/netns/cni-cb9f9b7f-78e1-c696-981b-7e67a6c03cbb" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.887 [INFO][5903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" iface="eth0" netns="/var/run/netns/cni-cb9f9b7f-78e1-c696-981b-7e67a6c03cbb" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.893 [INFO][5903] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" after=6.810206ms iface="eth0" netns="/var/run/netns/cni-cb9f9b7f-78e1-c696-981b-7e67a6c03cbb" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.893 [INFO][5903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.893 [INFO][5903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.930 [INFO][5913] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.930 [INFO][5913] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.930 [INFO][5913] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.955 [INFO][5913] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.955 [INFO][5913] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0" Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.956 [INFO][5913] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:30:44.963929 containerd[1712]: 2025-10-13 05:30:44.958 [INFO][5903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Oct 13 05:30:44.963781 systemd[1]: run-netns-cni\x2dcb9f9b7f\x2d78e1\x2dc696\x2d981b\x2d7e67a6c03cbb.mount: Deactivated successfully. 
Oct 13 05:30:44.964590 containerd[1712]: time="2025-10-13T05:30:44.964381144Z" level=info msg="TearDown network for sandbox \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" successfully" Oct 13 05:30:44.964590 containerd[1712]: time="2025-10-13T05:30:44.964399746Z" level=info msg="StopPodSandbox for \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" returns successfully" Oct 13 05:30:45.045743 kubelet[3010]: I1013 05:30:45.045644 3010 scope.go:117] "RemoveContainer" containerID="57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa" Oct 13 05:30:45.079152 kubelet[3010]: I1013 05:30:45.079091 3010 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9tt4\" (UniqueName: \"kubernetes.io/projected/e8c46c11-2670-4392-980f-1e6c0dd2267e-kube-api-access-f9tt4\") pod \"e8c46c11-2670-4392-980f-1e6c0dd2267e\" (UID: \"e8c46c11-2670-4392-980f-1e6c0dd2267e\") " Oct 13 05:30:45.079377 kubelet[3010]: I1013 05:30:45.079366 3010 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8c46c11-2670-4392-980f-1e6c0dd2267e-calico-apiserver-certs\") pod \"e8c46c11-2670-4392-980f-1e6c0dd2267e\" (UID: \"e8c46c11-2670-4392-980f-1e6c0dd2267e\") " Oct 13 05:30:45.110124 containerd[1712]: time="2025-10-13T05:30:45.109276928Z" level=info msg="RemoveContainer for \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\"" Oct 13 05:30:45.120843 systemd[1]: var-lib-kubelet-pods-e8c46c11\x2d2670\x2d4392\x2d980f\x2d1e6c0dd2267e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9tt4.mount: Deactivated successfully. Oct 13 05:30:45.122853 systemd[1]: var-lib-kubelet-pods-e8c46c11\x2d2670\x2d4392\x2d980f\x2d1e6c0dd2267e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Oct 13 05:30:45.124168 kubelet[3010]: I1013 05:30:45.124131 3010 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8c46c11-2670-4392-980f-1e6c0dd2267e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e8c46c11-2670-4392-980f-1e6c0dd2267e" (UID: "e8c46c11-2670-4392-980f-1e6c0dd2267e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:30:45.124257 kubelet[3010]: I1013 05:30:45.124239 3010 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c46c11-2670-4392-980f-1e6c0dd2267e-kube-api-access-f9tt4" (OuterVolumeSpecName: "kube-api-access-f9tt4") pod "e8c46c11-2670-4392-980f-1e6c0dd2267e" (UID: "e8c46c11-2670-4392-980f-1e6c0dd2267e"). InnerVolumeSpecName "kube-api-access-f9tt4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:30:45.145099 containerd[1712]: time="2025-10-13T05:30:45.144603903Z" level=info msg="RemoveContainer for \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" returns successfully" Oct 13 05:30:45.145373 kubelet[3010]: I1013 05:30:45.145352 3010 scope.go:117] "RemoveContainer" containerID="57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa" Oct 13 05:30:45.148316 containerd[1712]: time="2025-10-13T05:30:45.148285801Z" level=error msg="ContainerStatus for \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\": not found" Oct 13 05:30:45.155690 kubelet[3010]: E1013 05:30:45.155626 3010 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\": not found" 
containerID="57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa"
Oct 13 05:30:45.172442 kubelet[3010]: I1013 05:30:45.155660 3010 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa"} err="failed to get container status \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"57574050a6c9bfa5284e367db1471518b073243b3ea7f0447a0796af24baf9fa\": not found"
Oct 13 05:30:45.180106 kubelet[3010]: I1013 05:30:45.179964 3010 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8c46c11-2670-4392-980f-1e6c0dd2267e-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Oct 13 05:30:45.180106 kubelet[3010]: I1013 05:30:45.179984 3010 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9tt4\" (UniqueName: \"kubernetes.io/projected/e8c46c11-2670-4392-980f-1e6c0dd2267e-kube-api-access-f9tt4\") on node \"localhost\" DevicePath \"\""
Oct 13 05:30:45.325591 systemd[1]: Removed slice kubepods-besteffort-pode8c46c11_2670_4392_980f_1e6c0dd2267e.slice - libcontainer container kubepods-besteffort-pode8c46c11_2670_4392_980f_1e6c0dd2267e.slice.
Oct 13 05:30:45.325721 systemd[1]: kubepods-besteffort-pode8c46c11_2670_4392_980f_1e6c0dd2267e.slice: Consumed 1.487s CPU time, 69.2M memory peak, 14.1M read from disk.
Oct 13 05:30:45.784041 containerd[1712]: time="2025-10-13T05:30:45.784014679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\" id:\"609156d611eb4f05d6aed888a48c53158672781b074054c3fb616d564bb683b8\" pid:5938 exited_at:{seconds:1760333445 nanos:777840779}"
Oct 13 05:30:45.846174 kubelet[3010]: I1013 05:30:45.846143 3010 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c46c11-2670-4392-980f-1e6c0dd2267e" path="/var/lib/kubelet/pods/e8c46c11-2670-4392-980f-1e6c0dd2267e/volumes"
Oct 13 05:30:52.115778 systemd[1]: Started sshd@7-139.178.70.110:22-139.178.89.65:59810.service - OpenSSH per-connection server daemon (139.178.89.65:59810).
Oct 13 05:30:52.293012 sshd[5962]: Accepted publickey for core from 139.178.89.65 port 59810 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:30:52.296433 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:30:52.309966 systemd-logind[1687]: New session 10 of user core.
Oct 13 05:30:52.316094 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 13 05:30:52.840985 sshd[5968]: Connection closed by 139.178.89.65 port 59810
Oct 13 05:30:52.841438 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Oct 13 05:30:52.850105 systemd[1]: sshd@7-139.178.70.110:22-139.178.89.65:59810.service: Deactivated successfully.
Oct 13 05:30:52.851362 systemd[1]: session-10.scope: Deactivated successfully.
Oct 13 05:30:52.856076 systemd-logind[1687]: Session 10 logged out. Waiting for processes to exit.
Oct 13 05:30:52.858807 systemd-logind[1687]: Removed session 10.
Oct 13 05:30:57.852516 systemd[1]: Started sshd@8-139.178.70.110:22-139.178.89.65:59816.service - OpenSSH per-connection server daemon (139.178.89.65:59816).
Oct 13 05:30:58.009943 sshd[5989]: Accepted publickey for core from 139.178.89.65 port 59816 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:30:58.011547 sshd-session[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:30:58.017750 systemd-logind[1687]: New session 11 of user core.
Oct 13 05:30:58.023223 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 13 05:30:58.440576 containerd[1712]: time="2025-10-13T05:30:58.440521048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed\" id:\"61247f1316f1ffd8e636e645b7e766c59d9f95191db140ac974e1cb0bb190330\" pid:6013 exited_at:{seconds:1760333458 nanos:440250542}"
Oct 13 05:30:58.468003 sshd[5992]: Connection closed by 139.178.89.65 port 59816
Oct 13 05:30:58.468486 sshd-session[5989]: pam_unix(sshd:session): session closed for user core
Oct 13 05:30:58.471056 systemd[1]: sshd@8-139.178.70.110:22-139.178.89.65:59816.service: Deactivated successfully.
Oct 13 05:30:58.472090 systemd[1]: session-11.scope: Deactivated successfully.
Oct 13 05:30:58.473150 systemd-logind[1687]: Session 11 logged out. Waiting for processes to exit.
Oct 13 05:30:58.473651 systemd-logind[1687]: Removed session 11.
Oct 13 05:30:59.071407 containerd[1712]: time="2025-10-13T05:30:59.071327779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed\" id:\"a6303a1da5cba1dfdbba3f7f76baff7dbbea52163b7cc3dafc4a474be0198b83\" pid:6039 exited_at:{seconds:1760333459 nanos:71171820}"
Oct 13 05:31:03.484274 systemd[1]: Started sshd@9-139.178.70.110:22-139.178.89.65:48298.service - OpenSSH per-connection server daemon (139.178.89.65:48298).
Oct 13 05:31:03.619911 sshd[6048]: Accepted publickey for core from 139.178.89.65 port 48298 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:03.622032 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:03.628460 systemd-logind[1687]: New session 12 of user core.
Oct 13 05:31:03.634410 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 13 05:31:03.812736 sshd[6051]: Connection closed by 139.178.89.65 port 48298
Oct 13 05:31:03.815254 sshd-session[6048]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:03.821136 systemd[1]: sshd@9-139.178.70.110:22-139.178.89.65:48298.service: Deactivated successfully.
Oct 13 05:31:03.822267 systemd[1]: session-12.scope: Deactivated successfully.
Oct 13 05:31:03.823263 systemd-logind[1687]: Session 12 logged out. Waiting for processes to exit.
Oct 13 05:31:03.826713 systemd[1]: Started sshd@10-139.178.70.110:22-139.178.89.65:48304.service - OpenSSH per-connection server daemon (139.178.89.65:48304).
Oct 13 05:31:03.828163 systemd-logind[1687]: Removed session 12.
Oct 13 05:31:03.889632 sshd[6064]: Accepted publickey for core from 139.178.89.65 port 48304 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:03.892249 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:03.902762 systemd-logind[1687]: New session 13 of user core.
Oct 13 05:31:03.907009 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 13 05:31:04.095976 sshd[6067]: Connection closed by 139.178.89.65 port 48304
Oct 13 05:31:04.097119 sshd-session[6064]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:04.104366 systemd[1]: sshd@10-139.178.70.110:22-139.178.89.65:48304.service: Deactivated successfully.
Oct 13 05:31:04.105933 systemd[1]: session-13.scope: Deactivated successfully.
Oct 13 05:31:04.106841 systemd-logind[1687]: Session 13 logged out. Waiting for processes to exit.
Oct 13 05:31:04.108749 systemd[1]: Started sshd@11-139.178.70.110:22-139.178.89.65:48310.service - OpenSSH per-connection server daemon (139.178.89.65:48310).
Oct 13 05:31:04.112626 systemd-logind[1687]: Removed session 13.
Oct 13 05:31:04.180735 sshd[6077]: Accepted publickey for core from 139.178.89.65 port 48310 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:04.181390 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:04.186955 systemd-logind[1687]: New session 14 of user core.
Oct 13 05:31:04.191129 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 13 05:31:04.318371 sshd[6080]: Connection closed by 139.178.89.65 port 48310
Oct 13 05:31:04.319029 sshd-session[6077]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:04.321887 systemd[1]: sshd@11-139.178.70.110:22-139.178.89.65:48310.service: Deactivated successfully.
Oct 13 05:31:04.325334 systemd[1]: session-14.scope: Deactivated successfully.
Oct 13 05:31:04.326716 systemd-logind[1687]: Session 14 logged out. Waiting for processes to exit.
Oct 13 05:31:04.327333 systemd-logind[1687]: Removed session 14.
Oct 13 05:31:09.328692 systemd[1]: Started sshd@12-139.178.70.110:22-139.178.89.65:48314.service - OpenSSH per-connection server daemon (139.178.89.65:48314).
Oct 13 05:31:09.634038 sshd[6098]: Accepted publickey for core from 139.178.89.65 port 48314 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:09.636672 sshd-session[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:09.641108 systemd-logind[1687]: New session 15 of user core.
Oct 13 05:31:09.645106 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 13 05:31:10.342857 containerd[1712]: time="2025-10-13T05:31:10.342810397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" id:\"3ba934b7256a0757f1d0ae2a7700c26a997b83bcf803efebe1b1e700daa2bd68\" pid:6113 exited_at:{seconds:1760333470 nanos:278053331}"
Oct 13 05:31:11.040748 sshd[6114]: Connection closed by 139.178.89.65 port 48314
Oct 13 05:31:11.050979 sshd-session[6098]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:11.157003 systemd[1]: sshd@12-139.178.70.110:22-139.178.89.65:48314.service: Deactivated successfully.
Oct 13 05:31:11.158340 systemd[1]: session-15.scope: Deactivated successfully.
Oct 13 05:31:11.159857 systemd-logind[1687]: Session 15 logged out. Waiting for processes to exit.
Oct 13 05:31:11.162701 systemd-logind[1687]: Removed session 15.
Oct 13 05:31:16.070086 systemd[1]: Started sshd@13-139.178.70.110:22-139.178.89.65:48170.service - OpenSSH per-connection server daemon (139.178.89.65:48170).
Oct 13 05:31:16.385668 containerd[1712]: time="2025-10-13T05:31:16.384549377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\" id:\"a32aa0005631933457b65741cba13ba787f4f459c70d9609cb37e875a58df63b\" pid:6153 exited_at:{seconds:1760333476 nanos:347910321}"
Oct 13 05:31:16.425157 sshd[6166]: Accepted publickey for core from 139.178.89.65 port 48170 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:16.427184 sshd-session[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:16.442933 systemd-logind[1687]: New session 16 of user core.
Oct 13 05:31:16.447005 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 13 05:31:17.954113 sshd[6169]: Connection closed by 139.178.89.65 port 48170
Oct 13 05:31:17.968864 sshd-session[6166]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:17.982671 systemd[1]: sshd@13-139.178.70.110:22-139.178.89.65:48170.service: Deactivated successfully.
Oct 13 05:31:17.982791 systemd-logind[1687]: Session 16 logged out. Waiting for processes to exit.
Oct 13 05:31:17.986700 systemd[1]: session-16.scope: Deactivated successfully.
Oct 13 05:31:17.990689 systemd-logind[1687]: Removed session 16.
Oct 13 05:31:22.993459 systemd[1]: Started sshd@14-139.178.70.110:22-139.178.89.65:32940.service - OpenSSH per-connection server daemon (139.178.89.65:32940).
Oct 13 05:31:23.347982 sshd[6182]: Accepted publickey for core from 139.178.89.65 port 32940 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:23.353158 sshd-session[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:23.356249 systemd-logind[1687]: New session 17 of user core.
Oct 13 05:31:23.362985 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 13 05:31:24.400786 sshd[6185]: Connection closed by 139.178.89.65 port 32940
Oct 13 05:31:24.402090 sshd-session[6182]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:24.413786 systemd[1]: Started sshd@15-139.178.70.110:22-139.178.89.65:32954.service - OpenSSH per-connection server daemon (139.178.89.65:32954).
Oct 13 05:31:24.418807 systemd[1]: sshd@14-139.178.70.110:22-139.178.89.65:32940.service: Deactivated successfully.
Oct 13 05:31:24.420069 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 05:31:24.421079 systemd-logind[1687]: Session 17 logged out. Waiting for processes to exit.
Oct 13 05:31:24.422561 systemd-logind[1687]: Removed session 17.
Oct 13 05:31:24.471864 sshd[6195]: Accepted publickey for core from 139.178.89.65 port 32954 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:24.472539 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:24.475432 systemd-logind[1687]: New session 18 of user core.
Oct 13 05:31:24.483983 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 05:31:25.471745 sshd[6201]: Connection closed by 139.178.89.65 port 32954
Oct 13 05:31:25.492948 sshd-session[6195]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:25.500788 systemd[1]: Started sshd@16-139.178.70.110:22-139.178.89.65:32960.service - OpenSSH per-connection server daemon (139.178.89.65:32960).
Oct 13 05:31:25.523083 systemd[1]: sshd@15-139.178.70.110:22-139.178.89.65:32954.service: Deactivated successfully.
Oct 13 05:31:25.527617 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 05:31:25.529450 systemd-logind[1687]: Session 18 logged out. Waiting for processes to exit.
Oct 13 05:31:25.530427 systemd-logind[1687]: Removed session 18.
Oct 13 05:31:25.759541 sshd[6210]: Accepted publickey for core from 139.178.89.65 port 32960 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:25.760476 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:25.766327 systemd-logind[1687]: New session 19 of user core.
Oct 13 05:31:25.773051 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 05:31:27.222027 sshd[6216]: Connection closed by 139.178.89.65 port 32960
Oct 13 05:31:27.227315 sshd-session[6210]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:27.230097 systemd[1]: Started sshd@17-139.178.70.110:22-139.178.89.65:32970.service - OpenSSH per-connection server daemon (139.178.89.65:32970).
Oct 13 05:31:27.231750 systemd[1]: sshd@16-139.178.70.110:22-139.178.89.65:32960.service: Deactivated successfully.
Oct 13 05:31:27.231942 systemd-logind[1687]: Session 19 logged out. Waiting for processes to exit.
Oct 13 05:31:27.233022 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 05:31:27.233882 systemd-logind[1687]: Removed session 19.
Oct 13 05:31:27.428466 sshd[6230]: Accepted publickey for core from 139.178.89.65 port 32970 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:27.434600 sshd-session[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:27.438562 systemd-logind[1687]: New session 20 of user core.
Oct 13 05:31:27.444041 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 05:31:28.982572 containerd[1712]: time="2025-10-13T05:31:28.982531679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbddc81dd4e47139ded19098649085db2de30c787a6992ca56907974685533ed\" id:\"f59a0cf9dacef1ade4aad05dc679f69e1696388a7729c2bb55013a3f6be317a0\" pid:6257 exited_at:{seconds:1760333488 nanos:857423949}"
Oct 13 05:31:31.769263 sshd[6237]: Connection closed by 139.178.89.65 port 32970
Oct 13 05:31:31.788431 sshd-session[6230]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:31.813360 systemd[1]: Started sshd@18-139.178.70.110:22-139.178.89.65:32980.service - OpenSSH per-connection server daemon (139.178.89.65:32980).
Oct 13 05:31:31.814176 systemd[1]: sshd@17-139.178.70.110:22-139.178.89.65:32970.service: Deactivated successfully.
Oct 13 05:31:31.817650 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 05:31:31.818552 systemd[1]: session-20.scope: Consumed 724ms CPU time, 70.9M memory peak.
Oct 13 05:31:31.821501 systemd-logind[1687]: Session 20 logged out. Waiting for processes to exit.
Oct 13 05:31:31.825647 systemd-logind[1687]: Removed session 20.
Oct 13 05:31:32.085448 sshd[6298]: Accepted publickey for core from 139.178.89.65 port 32980 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:32.087514 sshd-session[6298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:32.094256 systemd-logind[1687]: New session 21 of user core.
Oct 13 05:31:32.103141 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 05:31:32.545575 sshd[6304]: Connection closed by 139.178.89.65 port 32980
Oct 13 05:31:32.544495 sshd-session[6298]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:32.548996 systemd[1]: sshd@18-139.178.70.110:22-139.178.89.65:32980.service: Deactivated successfully.
Oct 13 05:31:32.551826 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 05:31:32.552800 systemd-logind[1687]: Session 21 logged out. Waiting for processes to exit.
Oct 13 05:31:32.555360 systemd-logind[1687]: Removed session 21.
Oct 13 05:31:36.421929 kubelet[3010]: I1013 05:31:36.420716 3010 scope.go:117] "RemoveContainer" containerID="f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6"
Oct 13 05:31:36.617677 containerd[1712]: time="2025-10-13T05:31:36.577882085Z" level=info msg="RemoveContainer for \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\""
Oct 13 05:31:36.968738 containerd[1712]: time="2025-10-13T05:31:36.968699634Z" level=info msg="RemoveContainer for \"f4fdd176a64d72ad4b2c9174085e95b48481b5bbab591d59c71c3bc04a7184a6\" returns successfully"
Oct 13 05:31:36.981665 containerd[1712]: time="2025-10-13T05:31:36.981562825Z" level=info msg="StopPodSandbox for \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\""
Oct 13 05:31:37.605088 systemd[1]: Started sshd@19-139.178.70.110:22-139.178.89.65:60406.service - OpenSSH per-connection server daemon (139.178.89.65:60406).
Oct 13 05:31:37.871174 sshd[6341]: Accepted publickey for core from 139.178.89.65 port 60406 ssh2: RSA SHA256:6zDqqjOIzEVL6yYi04udNWcyv9SVeoJ+g8IY0fMDRdw
Oct 13 05:31:37.874357 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 05:31:37.881343 systemd-logind[1687]: New session 22 of user core.
Oct 13 05:31:37.885031 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:37.651 [WARNING][6335] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:37.656 [INFO][6335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:37.656 [INFO][6335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" iface="eth0" netns=""
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:37.656 [INFO][6335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:37.656 [INFO][6335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:38.077 [INFO][6344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:38.098 [INFO][6344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:38.108 [INFO][6344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:38.148 [WARNING][6344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:38.148 [INFO][6344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:38.150 [INFO][6344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 05:31:38.174410 containerd[1712]: 2025-10-13 05:31:38.161 [INFO][6335] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:38.174410 containerd[1712]: time="2025-10-13T05:31:38.174240819Z" level=info msg="TearDown network for sandbox \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" successfully"
Oct 13 05:31:38.174410 containerd[1712]: time="2025-10-13T05:31:38.174262181Z" level=info msg="StopPodSandbox for \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" returns successfully"
Oct 13 05:31:38.352293 containerd[1712]: time="2025-10-13T05:31:38.352001470Z" level=info msg="RemovePodSandbox for \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\""
Oct 13 05:31:38.352293 containerd[1712]: time="2025-10-13T05:31:38.352061612Z" level=info msg="Forcibly stopping sandbox \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\""
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:38.608 [WARNING][6367] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:38.608 [INFO][6367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:38.608 [INFO][6367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" iface="eth0" netns=""
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:38.608 [INFO][6367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:38.608 [INFO][6367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:39.222 [INFO][6374] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:39.233 [INFO][6374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:39.234 [INFO][6374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:39.297 [WARNING][6374] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:39.297 [INFO][6374] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" HandleID="k8s-pod-network.9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--6z8xp-eth0"
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:39.308 [INFO][6374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 05:31:39.340747 containerd[1712]: 2025-10-13 05:31:39.318 [INFO][6367] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5"
Oct 13 05:31:39.637870 containerd[1712]: time="2025-10-13T05:31:39.356947777Z" level=info msg="TearDown network for sandbox \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" successfully"
Oct 13 05:31:39.680879 containerd[1712]: time="2025-10-13T05:31:39.680778980Z" level=info msg="Ensure that sandbox 9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5 in task-service has been cleanup successfully"
Oct 13 05:31:40.641616 containerd[1712]: time="2025-10-13T05:31:40.619776841Z" level=info msg="RemovePodSandbox \"9e2ee5cadee715ec8e13eb7822aa441dbddf3e310c73c5ebe2075bf2413389b5\" returns successfully"
Oct 13 05:31:40.900027 containerd[1712]: time="2025-10-13T05:31:40.898280079Z" level=info msg="StopPodSandbox for \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\""
Oct 13 05:31:42.146786 containerd[1712]: time="2025-10-13T05:31:42.146745267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" id:\"489152ed9b553a4a9be9338fe1822d09bc673a343556961907c9dde0ad2762ca\" pid:6393 exited_at:{seconds:1760333501 nanos:899657813}"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:41.707 [WARNING][6406] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:41.713 [INFO][6406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:41.713 [INFO][6406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" iface="eth0" netns=""
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:41.713 [INFO][6406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:41.713 [INFO][6406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:42.576 [INFO][6417] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:42.585 [INFO][6417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:42.585 [INFO][6417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:42.650 [WARNING][6417] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:42.650 [INFO][6417] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:42.658 [INFO][6417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 05:31:42.720316 containerd[1712]: 2025-10-13 05:31:42.679 [INFO][6406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:42.781708 containerd[1712]: time="2025-10-13T05:31:42.781131345Z" level=info msg="TearDown network for sandbox \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" successfully"
Oct 13 05:31:42.787699 containerd[1712]: time="2025-10-13T05:31:42.787670141Z" level=info msg="StopPodSandbox for \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" returns successfully"
Oct 13 05:31:42.981290 containerd[1712]: time="2025-10-13T05:31:42.980694064Z" level=info msg="RemovePodSandbox for \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\""
Oct 13 05:31:42.990750 containerd[1712]: time="2025-10-13T05:31:42.990708394Z" level=info msg="Forcibly stopping sandbox \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\""
Oct 13 05:31:43.507666 containerd[1712]: time="2025-10-13T05:31:43.507635223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbf0f38afac7fb4683300536edfb13b2d300a58bcb7cc09eac4e0000d2b92157\" id:\"e715c5680533775f06e2030d8ee7ce63a7e85da9b98ef1fa769d53e5141b0969\" pid:6433 exited_at:{seconds:1760333503 nanos:506632160}"
Oct 13 05:31:44.060094 sshd[6350]: Connection closed by 139.178.89.65 port 60406
Oct 13 05:31:44.214186 sshd-session[6341]: pam_unix(sshd:session): session closed for user core
Oct 13 05:31:44.604240 systemd-logind[1687]: Session 22 logged out. Waiting for processes to exit.
Oct 13 05:31:44.607677 systemd[1]: sshd@19-139.178.70.110:22-139.178.89.65:60406.service: Deactivated successfully.
Oct 13 05:31:44.609396 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 05:31:44.609603 systemd[1]: session-22.scope: Consumed 1.183s CPU time, 41.7M memory peak.
Oct 13 05:31:44.659841 systemd-logind[1687]: Removed session 22.
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:43.686 [WARNING][6455] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:43.687 [INFO][6455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:43.687 [INFO][6455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" iface="eth0" netns=""
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:43.687 [INFO][6455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:43.687 [INFO][6455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:46.299 [INFO][6462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:46.303 [INFO][6462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:46.304 [INFO][6462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:46.334 [WARNING][6462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:46.334 [INFO][6462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" HandleID="k8s-pod-network.26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8" Workload="localhost-k8s-calico--apiserver--7b54bbbd77--rpbzl-eth0"
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:46.353 [INFO][6462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Oct 13 05:31:46.413613 containerd[1712]: 2025-10-13 05:31:46.403 [INFO][6455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8"
Oct 13 05:31:46.413613 containerd[1712]: time="2025-10-13T05:31:46.413226019Z" level=info msg="TearDown network for sandbox \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" successfully"
Oct 13 05:31:46.516154 containerd[1712]: time="2025-10-13T05:31:46.515718489Z" level=info msg="Ensure that sandbox 26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8 in task-service has been cleanup successfully"
Oct 13 05:31:46.553272 containerd[1712]: time="2025-10-13T05:31:46.553234526Z" level=info msg="RemovePodSandbox \"26d295d8e53dd09249582d1f52d16d07c50d37fce2f540b23bf1457dc78eabc8\" returns successfully"
Oct 13 05:31:47.015355 containerd[1712]: time="2025-10-13T05:31:47.015314990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf5a52200e6be81e0575cb0f640581ad3a6bede5b855d21f0b290b17348c7318\" id:\"3dbf3be6b8029eaf49a282f70dc5ecd64ca2df724344d2e838fc67aeb8d3085f\" pid:6485 exited_at:{seconds:1760333507 nanos:15025225}"