Sep 9 21:52:29.715896 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 19:55:16 -00 2025 Sep 9 21:52:29.715912 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 21:52:29.715918 kernel: Disabled fast string operations Sep 9 21:52:29.715922 kernel: BIOS-provided physical RAM map: Sep 9 21:52:29.715926 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Sep 9 21:52:29.715930 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Sep 9 21:52:29.715936 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Sep 9 21:52:29.715940 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Sep 9 21:52:29.715944 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Sep 9 21:52:29.715948 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Sep 9 21:52:29.715952 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Sep 9 21:52:29.715956 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Sep 9 21:52:29.715960 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Sep 9 21:52:29.715964 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 9 21:52:29.715970 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Sep 9 21:52:29.715975 kernel: NX (Execute Disable) protection: active Sep 9 21:52:29.715980 kernel: APIC: Static calls initialized Sep 9 21:52:29.715985 kernel: SMBIOS 2.7 present. Sep 9 21:52:29.715989 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Sep 9 21:52:29.715994 kernel: DMI: Memory slots populated: 1/128 Sep 9 21:52:29.716000 kernel: vmware: hypercall mode: 0x00 Sep 9 21:52:29.716004 kernel: Hypervisor detected: VMware Sep 9 21:52:29.716009 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Sep 9 21:52:29.716013 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Sep 9 21:52:29.716018 kernel: vmware: using clock offset of 4637429791 ns Sep 9 21:52:29.716023 kernel: tsc: Detected 3408.000 MHz processor Sep 9 21:52:29.716028 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 21:52:29.716033 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 21:52:29.716038 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Sep 9 21:52:29.716043 kernel: total RAM covered: 3072M Sep 9 21:52:29.716048 kernel: Found optimal setting for mtrr clean up Sep 9 21:52:29.716054 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Sep 9 21:52:29.716059 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Sep 9 21:52:29.716063 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 21:52:29.716068 kernel: Using GB pages for direct mapping Sep 9 21:52:29.716073 kernel: ACPI: Early table checksum verification disabled Sep 9 21:52:29.716077 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Sep 9 21:52:29.716082 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Sep 9 21:52:29.716087 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Sep 9 21:52:29.716093 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Sep 9 21:52:29.716100 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 21:52:29.716105 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 21:52:29.716110 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Sep 9 21:52:29.716115 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Sep 9 21:52:29.716121 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Sep 9 21:52:29.716126 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Sep 9 21:52:29.716131 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Sep 9 21:52:29.716136 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Sep 9 21:52:29.716141 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Sep 9 21:52:29.716146 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Sep 9 21:52:29.716151 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 21:52:29.716156 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 21:52:29.716160 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Sep 9 21:52:29.716165 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Sep 9 21:52:29.716171 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Sep 9 21:52:29.716176 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Sep 9 21:52:29.716181 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Sep 9 21:52:29.716186 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Sep 9 21:52:29.716191 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 9 21:52:29.716196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 9 21:52:29.716201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Sep 9 21:52:29.716206 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Sep 9 21:52:29.716211 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Sep 9 21:52:29.716217 kernel: Zone ranges: Sep 9 21:52:29.716222 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 21:52:29.716227 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Sep 9 21:52:29.716232 kernel: Normal empty Sep 9 21:52:29.716237 kernel: Device empty Sep 9 21:52:29.716242 kernel: Movable zone start for each node Sep 9 21:52:29.716247 kernel: Early memory node ranges Sep 9 21:52:29.716252 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Sep 9 21:52:29.716256 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Sep 9 21:52:29.716261 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Sep 9 21:52:29.716267 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Sep 9 21:52:29.716272 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 21:52:29.716303 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Sep 9 21:52:29.716310 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Sep 9 21:52:29.716315 kernel: ACPI: PM-Timer IO Port: 0x1008 Sep 9 21:52:29.716320 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Sep 9 21:52:29.716325 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 9 21:52:29.716330 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 9 21:52:29.716335 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 9 21:52:29.716342 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 9 21:52:29.716347 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 9 21:52:29.716352 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 9 21:52:29.716358 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 9 21:52:29.716362 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 9 21:52:29.716367 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 9 21:52:29.716372 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 9 21:52:29.716377 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 9 21:52:29.716382 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 9 21:52:29.716387 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 9 21:52:29.716393 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 9 21:52:29.716398 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 9 21:52:29.716402 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 9 21:52:29.716407 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 9 21:52:29.716412 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 9 21:52:29.716417 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 9 21:52:29.716422 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 9 21:52:29.716427 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 9 21:52:29.716431 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 9 21:52:29.716438 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 9 21:52:29.716443 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 9 21:52:29.716447 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 9 21:52:29.716452 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Sep 9 21:52:29.716457 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Sep 9 21:52:29.716462 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 9 21:52:29.716467 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 9 21:52:29.716472 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 9 21:52:29.716477 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 9 21:52:29.716482 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 9 21:52:29.716488 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Sep 9 21:52:29.716493 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 9 21:52:29.716498 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 9 21:52:29.716503 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 9 21:52:29.716508 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 9 21:52:29.716512 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 9 21:52:29.716518 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 9 21:52:29.716527 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Sep 9 21:52:29.716532 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 9 21:52:29.716537 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 9 21:52:29.716543 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 9 21:52:29.716548 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 9 21:52:29.716554 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 9 21:52:29.716559 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 9 21:52:29.716564 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 9 21:52:29.716569 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 9 21:52:29.716593 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Sep 9 21:52:29.716598 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] 
high edge lint[0x1]) Sep 9 21:52:29.716605 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Sep 9 21:52:29.716610 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 9 21:52:29.716631 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 9 21:52:29.716636 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 9 21:52:29.716641 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 9 21:52:29.716647 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 9 21:52:29.716652 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Sep 9 21:52:29.716657 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 9 21:52:29.716662 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 9 21:52:29.716667 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 9 21:52:29.716674 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 9 21:52:29.716679 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 9 21:52:29.716684 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 9 21:52:29.716689 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Sep 9 21:52:29.716694 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 9 21:52:29.716699 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 9 21:52:29.716705 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 9 21:52:29.716710 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 9 21:52:29.716715 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 9 21:52:29.716720 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 9 21:52:29.716726 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 9 21:52:29.716732 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 9 21:52:29.716737 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 9 21:52:29.716742 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Sep 9 21:52:29.716747 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Sep 9 21:52:29.716753 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 9 21:52:29.716758 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 9 21:52:29.716763 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 9 21:52:29.716768 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 9 21:52:29.716773 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 9 21:52:29.716780 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Sep 9 21:52:29.716785 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 9 21:52:29.716790 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 9 21:52:29.716795 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 9 21:52:29.716800 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 9 21:52:29.716805 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 9 21:52:29.716811 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 9 21:52:29.716816 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Sep 9 21:52:29.716821 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 9 21:52:29.716827 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 9 21:52:29.716832 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 9 21:52:29.716837 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 9 21:52:29.716843 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 9 
21:52:29.716848 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 9 21:52:29.716853 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 9 21:52:29.716858 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 9 21:52:29.716863 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 9 21:52:29.716868 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Sep 9 21:52:29.716873 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Sep 9 21:52:29.716880 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 9 21:52:29.716885 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 9 21:52:29.716890 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 9 21:52:29.716895 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 9 21:52:29.716900 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 9 21:52:29.716906 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Sep 9 21:52:29.716911 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 9 21:52:29.716916 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 9 21:52:29.716921 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 9 21:52:29.716926 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 9 21:52:29.716932 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 9 21:52:29.716938 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 9 21:52:29.716943 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Sep 9 21:52:29.716948 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 9 21:52:29.716953 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 9 21:52:29.716959 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 9 21:52:29.716964 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 9 21:52:29.716969 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 9 21:52:29.716974 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 9 21:52:29.716979 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 9 21:52:29.716985 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 9 21:52:29.716991 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 9 21:52:29.716996 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Sep 9 21:52:29.717001 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Sep 9 21:52:29.717006 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 9 21:52:29.717011 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 9 21:52:29.717016 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 9 21:52:29.717022 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 9 21:52:29.717030 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 9 21:52:29.717040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Sep 9 21:52:29.717049 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 21:52:29.717057 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 9 21:52:29.717066 kernel: TSC deadline timer available Sep 9 21:52:29.717074 kernel: CPU topo: Max. logical packages: 128 Sep 9 21:52:29.717083 kernel: CPU topo: Max. logical dies: 128 Sep 9 21:52:29.717091 kernel: CPU topo: Max. dies per package: 1 Sep 9 21:52:29.717100 kernel: CPU topo: Max. threads per core: 1 Sep 9 21:52:29.717109 kernel: CPU topo: Num. cores per package: 1 Sep 9 21:52:29.717118 kernel: CPU topo: Num. 
threads per package: 1 Sep 9 21:52:29.717128 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Sep 9 21:52:29.717137 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 9 21:52:29.717143 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 9 21:52:29.717148 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 21:52:29.717154 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 9 21:52:29.717160 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Sep 9 21:52:29.717165 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Sep 9 21:52:29.717170 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 9 21:52:29.717176 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 9 21:52:29.717182 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 9 21:52:29.717188 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 9 21:52:29.717193 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 9 21:52:29.717198 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 9 21:52:29.717203 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Sep 9 21:52:29.717208 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 9 21:52:29.717213 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Sep 9 21:52:29.717219 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 9 21:52:29.717224 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 9 21:52:29.717231 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 9 21:52:29.717236 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 9 21:52:29.717241 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 9 21:52:29.717246 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 9 21:52:29.717251 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 9 21:52:29.717257 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 21:52:29.717263 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 21:52:29.717291 kernel: random: crng init done Sep 9 21:52:29.717298 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 9 21:52:29.717303 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 9 21:52:29.717309 kernel: printk: log_buf_len min size: 262144 bytes Sep 9 21:52:29.717314 kernel: printk: log_buf_len: 1048576 bytes Sep 9 21:52:29.717319 kernel: printk: early log buf free: 245592(93%) Sep 9 21:52:29.717324 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 21:52:29.717329 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 21:52:29.717334 kernel: Fallback order for Node 0: 0 Sep 9 21:52:29.717340 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Sep 9 21:52:29.717347 kernel: Policy zone: DMA32 Sep 9 21:52:29.717352 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 21:52:29.717358 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 9 21:52:29.717363 kernel: ftrace: allocating 40102 entries in 157 pages Sep 9 21:52:29.717368 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 21:52:29.717374 kernel: Dynamic Preempt: voluntary Sep 9 21:52:29.717379 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 21:52:29.717384 kernel: rcu: RCU event tracing is enabled. Sep 9 21:52:29.717389 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 9 21:52:29.717396 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 21:52:29.717401 kernel: Rude variant of Tasks RCU enabled. Sep 9 21:52:29.717406 kernel: Tracing variant of Tasks RCU enabled. Sep 9 21:52:29.717411 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 21:52:29.717417 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Sep 9 21:52:29.717422 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 21:52:29.717427 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 21:52:29.717433 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 21:52:29.717438 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Sep 9 21:52:29.717444 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Sep 9 21:52:29.717453 kernel: Console: colour VGA+ 80x25 Sep 9 21:52:29.717458 kernel: printk: legacy console [tty0] enabled Sep 9 21:52:29.717463 kernel: printk: legacy console [ttyS0] enabled Sep 9 21:52:29.717469 kernel: ACPI: Core revision 20240827 Sep 9 21:52:29.717474 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Sep 9 21:52:29.717483 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 21:52:29.717488 kernel: x2apic enabled Sep 9 21:52:29.717493 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 21:52:29.717500 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 21:52:29.717505 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 9 21:52:29.717510 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Sep 9 21:52:29.717516 kernel: Disabled fast string operations Sep 9 21:52:29.717521 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 9 21:52:29.717526 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 9 21:52:29.717532 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 21:52:29.717537 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Sep 9 21:52:29.717542 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 9 21:52:29.717548 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 9 21:52:29.717554 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 9 21:52:29.717559 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 21:52:29.717564 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 21:52:29.717573 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 9 21:52:29.717578 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 9 21:52:29.717583 kernel: GDS: Unknown: Dependent on hypervisor status Sep 9 21:52:29.717604 kernel: active return thunk: its_return_thunk Sep 9 21:52:29.717610 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 21:52:29.717616 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 21:52:29.717637 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 21:52:29.717642 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 21:52:29.717647 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 21:52:29.717652 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 21:52:29.717658 kernel: Freeing SMP alternatives memory: 32K Sep 9 21:52:29.717663 kernel: pid_max: default: 131072 minimum: 1024 Sep 9 21:52:29.717668 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 21:52:29.717673 kernel: landlock: Up and running. Sep 9 21:52:29.717680 kernel: SELinux: Initializing. Sep 9 21:52:29.717685 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 21:52:29.717690 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 21:52:29.717696 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 9 21:52:29.717701 kernel: Performance Events: Skylake events, core PMU driver. Sep 9 21:52:29.717706 kernel: core: CPUID marked event: 'cpu cycles' unavailable Sep 9 21:52:29.717711 kernel: core: CPUID marked event: 'instructions' unavailable Sep 9 21:52:29.717717 kernel: core: CPUID marked event: 'bus cycles' unavailable Sep 9 21:52:29.717722 kernel: core: CPUID marked event: 'cache references' unavailable Sep 9 21:52:29.717728 kernel: core: CPUID marked event: 'cache misses' unavailable Sep 9 21:52:29.717733 kernel: core: CPUID marked event: 'branch instructions' unavailable Sep 9 21:52:29.717738 kernel: core: CPUID marked event: 'branch misses' unavailable Sep 9 21:52:29.717743 kernel: ... version: 1 Sep 9 21:52:29.717749 kernel: ... bit width: 48 Sep 9 21:52:29.717754 kernel: ... generic registers: 4 Sep 9 21:52:29.717759 kernel: ... value mask: 0000ffffffffffff Sep 9 21:52:29.717764 kernel: ... max period: 000000007fffffff Sep 9 21:52:29.717770 kernel: ... fixed-purpose events: 0 Sep 9 21:52:29.717776 kernel: ... 
event mask: 000000000000000f Sep 9 21:52:29.717781 kernel: signal: max sigframe size: 1776 Sep 9 21:52:29.717786 kernel: rcu: Hierarchical SRCU implementation. Sep 9 21:52:29.717792 kernel: rcu: Max phase no-delay instances is 400. Sep 9 21:52:29.717797 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Sep 9 21:52:29.717802 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 21:52:29.717807 kernel: smp: Bringing up secondary CPUs ... Sep 9 21:52:29.717813 kernel: smpboot: x86: Booting SMP configuration: Sep 9 21:52:29.717818 kernel: .... node #0, CPUs: #1 Sep 9 21:52:29.717824 kernel: Disabled fast string operations Sep 9 21:52:29.717829 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 21:52:29.717834 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 9 21:52:29.717840 kernel: Memory: 1924232K/2096628K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54092K init, 2876K bss, 161020K reserved, 0K cma-reserved) Sep 9 21:52:29.717845 kernel: devtmpfs: initialized Sep 9 21:52:29.717850 kernel: x86/mm: Memory block size: 128MB Sep 9 21:52:29.717856 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 9 21:52:29.717861 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 21:52:29.717866 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 9 21:52:29.717872 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 21:52:29.717878 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 21:52:29.717883 kernel: audit: initializing netlink subsys (disabled) Sep 9 21:52:29.717888 kernel: audit: type=2000 audit(1757454746.280:1): state=initialized audit_enabled=0 res=1 Sep 9 21:52:29.717893 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 21:52:29.717899 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 21:52:29.717904 kernel: cpuidle: using governor menu Sep 9 21:52:29.717909 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 9 21:52:29.717915 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 21:52:29.717921 kernel: dca service started, version 1.12.1 Sep 9 21:52:29.717933 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Sep 9 21:52:29.717940 kernel: PCI: Using configuration type 1 for base access Sep 9 21:52:29.717946 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 21:52:29.717951 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 21:52:29.717957 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 21:52:29.717962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 21:52:29.717967 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 21:52:29.717973 kernel: ACPI: Added _OSI(Module Device) Sep 9 21:52:29.717979 kernel: ACPI: Added _OSI(Processor Device) Sep 9 21:52:29.717985 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 21:52:29.717990 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 21:52:29.717996 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 9 21:52:29.718001 kernel: ACPI: Interpreter enabled Sep 9 21:52:29.718007 kernel: ACPI: PM: (supports S0 S1 S5) Sep 9 21:52:29.718013 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 21:52:29.718018 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 21:52:29.718031 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 21:52:29.718050 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 9 21:52:29.718056 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 9 21:52:29.718141 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 21:52:29.718212 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 9 21:52:29.718262 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 9 21:52:29.718271 kernel: PCI host bridge to bus 0000:00 Sep 9 21:52:29.718363 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 21:52:29.720296 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Sep 9 21:52:29.720357 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 21:52:29.720404 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 21:52:29.720449 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 9 21:52:29.720492 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 9 21:52:29.720554 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Sep 9 21:52:29.720623 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Sep 9 21:52:29.720675 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 21:52:29.720732 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Sep 9 21:52:29.720786 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Sep 9 21:52:29.720839 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Sep 9 21:52:29.720890 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Sep 9 21:52:29.720940 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Sep 9 21:52:29.720989 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Sep 9 21:52:29.721038 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Sep 9 21:52:29.721092 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Sep 9 21:52:29.721145 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 9 21:52:29.721226 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Sep 9 21:52:29.723480 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint Sep 9 
21:52:29.724320 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Sep 9 21:52:29.724393 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Sep 9 21:52:29.724453 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Sep 9 21:52:29.724506 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Sep 9 21:52:29.724562 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Sep 9 21:52:29.724671 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Sep 9 21:52:29.724723 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Sep 9 21:52:29.724774 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 21:52:29.724830 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Sep 9 21:52:29.724882 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 21:52:29.724933 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 21:52:29.724987 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 21:52:29.725037 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 21:52:29.725096 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.725148 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 21:52:29.725199 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 21:52:29.725270 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 21:52:29.727354 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.727418 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.727471 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 21:52:29.727522 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 21:52:29.727576 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 21:52:29.727627 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 21:52:29.727677 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.727732 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.727786 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 21:52:29.727836 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 21:52:29.727886 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 21:52:29.727936 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 21:52:29.727986 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.728040 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.728093 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 21:52:29.728143 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 21:52:29.728192 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 21:52:29.728241 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.730320 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.730381 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 21:52:29.730436 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 21:52:29.730491 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 21:52:29.730542 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 9 
21:52:29.730603 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.730655 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 21:52:29.730706 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 21:52:29.730756 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 21:52:29.730805 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.730862 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.730914 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 21:52:29.730963 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 21:52:29.731013 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 21:52:29.731063 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.731117 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.731168 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 21:52:29.731221 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 21:52:29.731271 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 21:52:29.731553 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.731612 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.731663 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 21:52:29.731714 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 21:52:29.731765 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 21:52:29.731815 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.731874 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.731925 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 21:52:29.731975 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 21:52:29.732024 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 21:52:29.732074 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 21:52:29.732124 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.732180 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.732232 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 21:52:29.732299 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 21:52:29.732357 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 21:52:29.732408 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 21:52:29.732458 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.732514 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.732568 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 21:52:29.732617 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 21:52:29.732666 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 21:52:29.732717 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.732771 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.732823 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 21:52:29.732873 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 21:52:29.732922 kernel: pci 0000:00:16.4: 
bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 21:52:29.732975 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.733032 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.733084 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 21:52:29.733135 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 21:52:29.733185 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 21:52:29.733245 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.733314 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.733369 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 21:52:29.733419 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 21:52:29.733469 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 21:52:29.733518 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.733576 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.733628 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 21:52:29.733679 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 21:52:29.733746 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 21:52:29.733797 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.733852 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.733903 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 21:52:29.733952 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 21:52:29.734001 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 21:52:29.734050 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 21:52:29.734101 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.734160 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.734211 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 21:52:29.734261 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 21:52:29.736353 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 21:52:29.736414 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 21:52:29.736469 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.736529 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.736583 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 21:52:29.736634 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 21:52:29.736684 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 21:52:29.736734 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 21:52:29.736788 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.736843 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.736894 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 21:52:29.736943 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 21:52:29.736993 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 21:52:29.737042 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.737096 kernel: pci 0000:00:17.4: [15ad:07a0] type 
01 class 0x060400 PCIe Root Port Sep 9 21:52:29.737150 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 21:52:29.737200 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 21:52:29.737249 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 21:52:29.737350 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.737407 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.737458 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 21:52:29.737512 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 9 21:52:29.737564 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 21:52:29.737614 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.737668 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.737719 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 21:52:29.737768 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 21:52:29.737818 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 21:52:29.737867 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.737924 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.737974 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 21:52:29.738024 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 21:52:29.738074 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 21:52:29.738123 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.738179 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.738230 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 21:52:29.738290 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 21:52:29.738347 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 21:52:29.738397 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 21:52:29.738446 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.738503 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.738554 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 21:52:29.738609 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 21:52:29.738662 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 21:52:29.738712 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 21:52:29.738761 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.738815 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.738883 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 21:52:29.738934 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 21:52:29.738984 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 21:52:29.739036 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.739090 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.739141 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 21:52:29.739191 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 21:52:29.739241 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 
64bit pref] Sep 9 21:52:29.739530 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.739592 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.739649 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 21:52:29.739701 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 9 21:52:29.739752 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 21:52:29.739802 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.739858 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.739910 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 21:52:29.739960 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 21:52:29.740019 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 21:52:29.740086 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.740155 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.740222 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 21:52:29.740519 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 21:52:29.740597 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 21:52:29.740652 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.740710 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 21:52:29.740765 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 21:52:29.740815 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 21:52:29.740866 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 21:52:29.740915 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.740972 kernel: pci_bus 0000:01: extended config space not accessible Sep 9 21:52:29.741025 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 21:52:29.741076 kernel: pci_bus 0000:02: extended config space not accessible Sep 9 21:52:29.741087 kernel: acpiphp: Slot [32] registered Sep 9 21:52:29.741093 kernel: acpiphp: Slot [33] registered Sep 9 21:52:29.741099 kernel: acpiphp: Slot [34] registered Sep 9 21:52:29.741105 kernel: acpiphp: Slot [35] registered Sep 9 21:52:29.741112 kernel: acpiphp: Slot [36] registered Sep 9 21:52:29.741120 kernel: acpiphp: Slot [37] registered Sep 9 21:52:29.741126 kernel: acpiphp: Slot [38] registered Sep 9 21:52:29.741132 kernel: acpiphp: Slot [39] registered Sep 9 21:52:29.741137 kernel: acpiphp: Slot [40] registered Sep 9 21:52:29.741144 kernel: acpiphp: Slot [41] registered Sep 9 21:52:29.741150 kernel: acpiphp: Slot [42] registered Sep 9 21:52:29.741156 kernel: acpiphp: Slot [43] registered Sep 9 21:52:29.741162 kernel: acpiphp: Slot [44] registered Sep 9 21:52:29.741168 kernel: acpiphp: Slot [45] registered Sep 9 21:52:29.741173 kernel: acpiphp: Slot [46] registered Sep 9 21:52:29.741179 kernel: acpiphp: Slot [47] registered Sep 9 21:52:29.741185 kernel: acpiphp: Slot [48] registered Sep 9 21:52:29.741191 kernel: acpiphp: Slot [49] registered Sep 9 21:52:29.741196 kernel: acpiphp: Slot [50] registered Sep 9 21:52:29.741203 kernel: acpiphp: Slot [51] registered Sep 9 21:52:29.741209 kernel: acpiphp: Slot [52] registered Sep 9 21:52:29.741215 kernel: acpiphp: Slot [53] registered Sep 9 21:52:29.741220 kernel: acpiphp: Slot [54] registered Sep 9 21:52:29.741226 kernel: acpiphp: Slot [55] registered Sep 
9 21:52:29.741232 kernel: acpiphp: Slot [56] registered Sep 9 21:52:29.741238 kernel: acpiphp: Slot [57] registered Sep 9 21:52:29.741244 kernel: acpiphp: Slot [58] registered Sep 9 21:52:29.741249 kernel: acpiphp: Slot [59] registered Sep 9 21:52:29.741256 kernel: acpiphp: Slot [60] registered Sep 9 21:52:29.741262 kernel: acpiphp: Slot [61] registered Sep 9 21:52:29.741268 kernel: acpiphp: Slot [62] registered Sep 9 21:52:29.741274 kernel: acpiphp: Slot [63] registered Sep 9 21:52:29.741357 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 21:52:29.741411 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 9 21:52:29.741467 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 9 21:52:29.741520 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 9 21:52:29.741578 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 9 21:52:29.741632 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 9 21:52:29.741699 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Sep 9 21:52:29.742360 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Sep 9 21:52:29.742420 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 9 21:52:29.742475 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 9 21:52:29.742538 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 9 21:52:29.742590 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Sep 9 21:52:29.742646 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 21:52:29.742698 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 21:52:29.742753 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 21:52:29.742806 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 21:52:29.742857 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 21:52:29.742909 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 21:52:29.742960 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 21:52:29.743014 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 21:52:29.743072 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Sep 9 21:52:29.743125 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Sep 9 21:52:29.743175 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Sep 9 21:52:29.743226 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Sep 9 21:52:29.743276 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Sep 9 21:52:29.744613 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 9 21:52:29.744670 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 9 21:52:29.744722 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 9 21:52:29.744773 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 9 21:52:29.744825 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 21:52:29.744877 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 21:52:29.744929 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 21:52:29.744981 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 21:52:29.745035 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 21:52:29.745090 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 21:52:29.745142 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 21:52:29.745194 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 21:52:29.745246 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 21:52:29.745340 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 21:52:29.745393 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 21:52:29.745445 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 21:52:29.745499 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 21:52:29.745550 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 21:52:29.745602 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 21:52:29.745655 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 21:52:29.745707 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 21:52:29.745757 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 21:52:29.745808 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 21:52:29.745858 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 21:52:29.745911 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 21:52:29.745962 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 21:52:29.746013 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 21:52:29.746062 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 21:52:29.746071 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 9 21:52:29.746078 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 9 21:52:29.746084 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 9 21:52:29.746092 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 21:52:29.746098 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 9 21:52:29.746104 kernel: iommu: Default domain type: Translated Sep 9 21:52:29.746110 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 21:52:29.746116 kernel: PCI: Using ACPI for IRQ routing Sep 9 21:52:29.746121 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 21:52:29.746127 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 9 21:52:29.746133 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 9 21:52:29.746184 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 9 21:52:29.746236 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 9 21:52:29.747306 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 21:52:29.747319 kernel: vgaarb: loaded Sep 9 21:52:29.747326 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 9 21:52:29.747332 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 9 21:52:29.747338 kernel: clocksource: Switched to clocksource tsc-early Sep 9 21:52:29.747343 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 21:52:29.747350 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 21:52:29.747356 kernel: pnp: PnP ACPI init Sep 9 21:52:29.747419 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 9 21:52:29.747468 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 9 
21:52:29.747514 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 9 21:52:29.747565 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 9 21:52:29.747621 kernel: pnp 00:06: [dma 2] Sep 9 21:52:29.747670 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 9 21:52:29.747718 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 9 21:52:29.747763 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 9 21:52:29.747771 kernel: pnp: PnP ACPI: found 8 devices Sep 9 21:52:29.747777 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 21:52:29.747784 kernel: NET: Registered PF_INET protocol family Sep 9 21:52:29.747790 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 21:52:29.747796 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 9 21:52:29.747801 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 21:52:29.747809 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 21:52:29.747815 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 9 21:52:29.747821 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 9 21:52:29.747827 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 21:52:29.747832 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 21:52:29.747838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 21:52:29.747844 kernel: NET: Registered PF_XDP protocol family Sep 9 21:52:29.747895 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 9 21:52:29.747948 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 9 21:52:29.748001 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 9 21:52:29.748052 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 9 21:52:29.748103 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 9 21:52:29.748153 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 9 21:52:29.748203 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 9 21:52:29.748253 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 9 21:52:29.748316 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 9 21:52:29.748370 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 9 21:52:29.748424 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 9 21:52:29.748475 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 9 21:52:29.748526 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 9 21:52:29.748591 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 9 21:52:29.748641 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 9 21:52:29.748689 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 9 21:52:29.748738 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 
1000 Sep 9 21:52:29.748790 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 9 21:52:29.748838 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 9 21:52:29.748892 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 9 21:52:29.748954 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 9 21:52:29.749003 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 9 21:52:29.749054 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 9 21:52:29.749104 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Sep 9 21:52:29.749154 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Sep 9 21:52:29.749205 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.749253 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.750336 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.750414 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.750469 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.750520 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.750573 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.750659 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.750712 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.750761 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.750811 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.750861 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.750929 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.750994 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.751053 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.751103 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.751156 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.751205 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.751255 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.752565 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.752620 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.752671 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.752720 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.752769 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.752821 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.752870 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.752919 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't 
assign; no space Sep 9 21:52:29.752970 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753019 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753067 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753116 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753169 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753218 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753266 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753344 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753393 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753443 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753491 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753539 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753590 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753657 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753721 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753786 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753835 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753884 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.753933 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.753983 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754037 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754110 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754161 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754210 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754269 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754343 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754409 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754470 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754530 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754596 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754669 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754730 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754791 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754844 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.754901 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.754959 kernel: pci 
0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755014 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755070 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755128 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755183 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755240 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755412 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755465 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755514 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755563 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755612 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755660 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755711 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755759 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755807 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755856 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.755904 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.755954 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.756021 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.756090 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.756144 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 21:52:29.756194 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 9 21:52:29.756246 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 21:52:29.756309 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 9 21:52:29.756360 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 21:52:29.756409 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 21:52:29.756458 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 21:52:29.756510 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Sep 9 21:52:29.756560 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 21:52:29.756613 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 21:52:29.756666 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 21:52:29.756715 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 21:52:29.756767 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 21:52:29.756817 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 21:52:29.756866 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 21:52:29.756914 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 21:52:29.756965 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 21:52:29.757014 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 21:52:29.757065 kernel: pci 
0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 21:52:29.757114 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 21:52:29.757164 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 21:52:29.757217 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 21:52:29.757266 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 21:52:29.759754 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 21:52:29.759819 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 21:52:29.759873 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 21:52:29.759928 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 21:52:29.759979 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 21:52:29.760029 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 21:52:29.760080 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 21:52:29.760129 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 21:52:29.760179 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 21:52:29.760228 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 21:52:29.760277 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 21:52:29.760356 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 21:52:29.760412 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Sep 9 21:52:29.760464 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 21:52:29.760513 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 21:52:29.760562 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 21:52:29.760612 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 21:52:29.760663 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 21:52:29.760712 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 21:52:29.760764 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 21:52:29.760813 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 21:52:29.760864 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 21:52:29.760913 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 21:52:29.760963 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 21:52:29.761012 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 21:52:29.761062 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 21:52:29.761111 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 21:52:29.761162 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 21:52:29.761215 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 21:52:29.761264 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 21:52:29.761336 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 21:52:29.761388 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 21:52:29.761437 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 21:52:29.761486 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 21:52:29.761536 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 21:52:29.761588 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 
21:52:29.761637 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 21:52:29.761688 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 21:52:29.761737 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 21:52:29.761786 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 21:52:29.761837 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 21:52:29.761889 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 21:52:29.761939 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 21:52:29.761992 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 21:52:29.762042 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 21:52:29.762092 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 21:52:29.762140 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 21:52:29.762190 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 21:52:29.762240 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 21:52:29.762671 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 21:52:29.762732 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 21:52:29.762785 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 21:52:29.762837 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 21:52:29.762905 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 21:52:29.763119 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 21:52:29.763179 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 21:52:29.763230 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 21:52:29.763294 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 21:52:29.763362 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 21:52:29.763412 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 9 21:52:29.763603 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 21:52:29.763657 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 21:52:29.763707 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 21:52:29.763757 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 21:52:29.763807 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 21:52:29.763859 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 21:52:29.763909 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 21:52:29.763962 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 21:52:29.764012 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 21:52:29.764061 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 21:52:29.764110 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 21:52:29.764160 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 21:52:29.764209 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 21:52:29.764277 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 21:52:29.764411 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 21:52:29.764474 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 21:52:29.764536 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 
21:52:29.764612 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 21:52:29.764904 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 21:52:29.764969 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 21:52:29.765033 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 21:52:29.765095 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 21:52:29.765153 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 9 21:52:29.765212 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 21:52:29.765274 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 21:52:29.765501 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 21:52:29.765552 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 21:52:29.765624 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 21:52:29.765689 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 21:52:29.765739 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 21:52:29.765946 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 21:52:29.766002 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 21:52:29.766053 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 21:52:29.766103 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 9 21:52:29.766164 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 9 21:52:29.766210 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 9 21:52:29.766253 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 9 21:52:29.766329 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 9 21:52:29.766383 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 9 21:52:29.766430 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 9 21:52:29.766475 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 21:52:29.766522 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 9 21:52:29.766567 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 9 21:52:29.766612 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 9 21:52:29.766656 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 9 21:52:29.766700 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 9 21:52:29.766753 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 9 21:52:29.766799 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 9 21:52:29.766843 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 21:52:29.766892 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 9 21:52:29.766937 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 9 21:52:29.766982 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 21:52:29.767033 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 9 21:52:29.767079 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 9 21:52:29.767123 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 21:52:29.767172 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 9 21:52:29.767217 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 21:52:29.767271 kernel: 
pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 9 21:52:29.769040 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 21:52:29.769098 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 9 21:52:29.769145 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 21:52:29.769195 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 9 21:52:29.769241 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 21:52:29.769314 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 9 21:52:29.769364 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 21:52:29.769416 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 9 21:52:29.769462 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 9 21:52:29.769509 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 21:52:29.769561 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Sep 9 21:52:29.769606 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 9 21:52:29.769651 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 21:52:29.769702 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 9 21:52:29.769747 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 9 21:52:29.769792 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 21:52:29.769847 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 9 21:52:29.769893 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 21:52:29.769944 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 9 21:52:29.769991 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 21:52:29.770043 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 9 21:52:29.770090 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 21:52:29.770139 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 9 21:52:29.770184 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 21:52:29.770233 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 9 21:52:29.770296 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 21:52:29.770352 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 9 21:52:29.770398 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 9 21:52:29.770443 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 21:52:29.770495 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 9 21:52:29.770541 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 9 21:52:29.770585 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 21:52:29.770636 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Sep 9 21:52:29.770682 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 9 21:52:29.770727 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 21:52:29.770776 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 9 21:52:29.770823 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 21:52:29.770875 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 9 21:52:29.770922 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 
21:52:29.770972 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 9 21:52:29.771017 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 21:52:29.771068 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 9 21:52:29.771113 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 21:52:29.771161 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 9 21:52:29.771205 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 21:52:29.771257 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 9 21:52:29.772335 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 9 21:52:29.772389 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 21:52:29.772441 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 9 21:52:29.772487 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 9 21:52:29.772533 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 21:52:29.772582 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 9 21:52:29.772631 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 21:52:29.772680 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 9 21:52:29.772726 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 21:52:29.772776 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 9 21:52:29.772823 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 21:52:29.772874 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 9 21:52:29.772922 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 21:52:29.772971 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 9 21:52:29.773017 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 21:52:29.773065 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 9 21:52:29.773111 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 21:52:29.773166 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 9 21:52:29.773177 kernel: PCI: CLS 32 bytes, default 64 Sep 9 21:52:29.773184 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 9 21:52:29.773190 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 9 21:52:29.773197 kernel: clocksource: Switched to clocksource tsc Sep 9 21:52:29.773202 kernel: Initialise system trusted keyrings Sep 9 21:52:29.773209 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 21:52:29.773215 kernel: Key type asymmetric registered Sep 9 21:52:29.773220 kernel: Asymmetric key parser 'x509' registered Sep 9 21:52:29.773226 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 21:52:29.773233 kernel: io scheduler mq-deadline registered Sep 9 21:52:29.773239 kernel: io scheduler kyber registered Sep 9 21:52:29.773245 kernel: io scheduler bfq registered Sep 9 21:52:29.774144 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 9 21:52:29.774205 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.774261 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 9 21:52:29.774342 kernel: pcieport 
0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.774397 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 9 21:52:29.774448 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.774499 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 9 21:52:29.774548 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.774631 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 9 21:52:29.774698 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.774772 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 9 21:52:29.774824 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.774877 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 9 21:52:29.774927 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.774995 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 9 21:52:29.775048 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775097 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 9 21:52:29.775148 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775198 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 9 21:52:29.775251 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775351 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 9 21:52:29.775402 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775452 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 9 21:52:29.775510 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775562 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 9 21:52:29.775611 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775663 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 9 21:52:29.775713 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775762 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 9 21:52:29.775814 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.775891 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 9 21:52:29.775962 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.776020 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 9 21:52:29.776081 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.776135 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 9 21:52:29.776185 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.776236 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 9 21:52:29.776651 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.776714 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 9 21:52:29.776767 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.776819 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 9 21:52:29.776874 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.776925 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 9 21:52:29.776976 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.777035 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 9 21:52:29.777112 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.777187 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 9 21:52:29.777269 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.777383 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 9 21:52:29.777475 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.777562 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 9 21:52:29.777647 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.777734 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 9 21:52:29.777821 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.777907 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 9 21:52:29.777994 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.778086 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 9 21:52:29.778171 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.778252 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 9 21:52:29.778325 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.778380 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 9 21:52:29.778444 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.778530 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 9 21:52:29.778621 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 21:52:29.778640 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 21:52:29.778653 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 21:52:29.778665 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 21:52:29.778672 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 9 21:52:29.778679 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 21:52:29.778685 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 21:52:29.778761 kernel: rtc_cmos 00:01: registered as rtc0 Sep 9 21:52:29.778823 kernel: rtc_cmos 00:01: setting system clock to 2025-09-09T21:52:29 UTC (1757454749) Sep 9 21:52:29.778834 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 21:52:29.778878 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 9 21:52:29.778887 kernel: intel_pstate: CPU model not supported Sep 9 21:52:29.778894 kernel: NET: Registered PF_INET6 protocol family Sep 9 21:52:29.778900 kernel: Segment Routing with IPv6 Sep 9 21:52:29.778907 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 21:52:29.778913 kernel: NET: Registered PF_PACKET protocol family Sep 9 21:52:29.778921 kernel: Key type dns_resolver registered Sep 9 21:52:29.778928 kernel: IPI shorthand broadcast: enabled Sep 9 21:52:29.778935 kernel: sched_clock: Marking stable (2622002379, 173412423)->(2810335754, -14920952) Sep 9 21:52:29.778941 kernel: registered taskstats version 1 Sep 9 21:52:29.778947 kernel: Loading compiled-in X.509 certificates Sep 9 21:52:29.778954 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 003b39862f2a560eb5545d7d88a07fc5bdfce075' Sep 9 21:52:29.778961 kernel: Demotion targets for Node 0: null Sep 9 21:52:29.778967 kernel: Key type .fscrypt registered Sep 9 21:52:29.778974 kernel: Key type fscrypt-provisioning registered Sep 9 21:52:29.778981 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 21:52:29.778987 kernel: ima: Allocated hash algorithm: sha1 Sep 9 21:52:29.778993 kernel: ima: No architecture policies found Sep 9 21:52:29.778999 kernel: clk: Disabling unused clocks Sep 9 21:52:29.779006 kernel: Warning: unable to open an initial console. Sep 9 21:52:29.779012 kernel: Freeing unused kernel image (initmem) memory: 54092K Sep 9 21:52:29.779018 kernel: Write protecting the kernel read-only data: 24576k Sep 9 21:52:29.779024 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 21:52:29.779030 kernel: Run /init as init process Sep 9 21:52:29.779038 kernel: with arguments: Sep 9 21:52:29.779044 kernel: /init Sep 9 21:52:29.779050 kernel: with environment: Sep 9 21:52:29.779056 kernel: HOME=/ Sep 9 21:52:29.779062 kernel: TERM=linux Sep 9 21:52:29.779068 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 21:52:29.779075 systemd[1]: Successfully made /usr/ read-only. 
Sep 9 21:52:29.779084 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 21:52:29.779092 systemd[1]: Detected virtualization vmware. Sep 9 21:52:29.779098 systemd[1]: Detected architecture x86-64. Sep 9 21:52:29.779104 systemd[1]: Running in initrd. Sep 9 21:52:29.779111 systemd[1]: No hostname configured, using default hostname. Sep 9 21:52:29.779117 systemd[1]: Hostname set to . Sep 9 21:52:29.779123 systemd[1]: Initializing machine ID from random generator. Sep 9 21:52:29.779130 systemd[1]: Queued start job for default target initrd.target. Sep 9 21:52:29.779136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:52:29.779144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:52:29.779152 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 21:52:29.779158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 21:52:29.779165 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 21:52:29.779172 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 21:52:29.779180 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 21:52:29.779187 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 21:52:29.779195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:52:29.779201 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:52:29.779207 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:52:29.779214 systemd[1]: Reached target slices.target - Slice Units. Sep 9 21:52:29.779221 systemd[1]: Reached target swap.target - Swaps. Sep 9 21:52:29.779227 systemd[1]: Reached target timers.target - Timer Units. Sep 9 21:52:29.779234 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 21:52:29.779240 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 21:52:29.779248 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 21:52:29.779254 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 21:52:29.779261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:52:29.779267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 21:52:29.779274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:52:29.779302 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:52:29.779312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 21:52:29.779319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 21:52:29.779325 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 9 21:52:29.779334 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 21:52:29.779341 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 21:52:29.779348 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 21:52:29.779355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 21:52:29.779362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:29.779368 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 21:52:29.779376 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:52:29.779383 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 21:52:29.779389 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 21:52:29.779412 systemd-journald[244]: Collecting audit messages is disabled. Sep 9 21:52:29.779431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 21:52:29.779438 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 21:52:29.779445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:29.779451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 21:52:29.779458 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 21:52:29.779465 kernel: Bridge firewalling registered Sep 9 21:52:29.779473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:52:29.779480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 21:52:29.779486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:52:29.779494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 21:52:29.779501 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 21:52:29.779508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:52:29.779515 systemd-journald[244]: Journal started Sep 9 21:52:29.779532 systemd-journald[244]: Runtime Journal (/run/log/journal/b93fc08519b540d7af1dc65642216f99) is 4.8M, max 38.8M, 34M free. Sep 9 21:52:29.719267 systemd-modules-load[245]: Inserted module 'overlay' Sep 9 21:52:29.751072 systemd-modules-load[245]: Inserted module 'br_netfilter' Sep 9 21:52:29.784363 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 21:52:29.785044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 21:52:29.791486 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 21:52:29.794947 systemd-tmpfiles[276]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Sep 9 21:52:29.796959 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:52:29.798400 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 21:52:29.825695 systemd-resolved[310]: Positive Trust Anchors: Sep 9 21:52:29.825702 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:52:29.825724 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:52:29.827506 systemd-resolved[310]: Defaulting to hostname 'linux'. Sep 9 21:52:29.828121 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:52:29.828495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:52:29.846302 kernel: SCSI subsystem initialized Sep 9 21:52:29.863297 kernel: Loading iSCSI transport class v2.0-870. Sep 9 21:52:29.871296 kernel: iscsi: registered transport (tcp) Sep 9 21:52:29.893298 kernel: iscsi: registered transport (qla4xxx) Sep 9 21:52:29.893334 kernel: QLogic iSCSI HBA Driver Sep 9 21:52:29.903370 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 21:52:29.913911 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:52:29.914943 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 21:52:29.937159 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 21:52:29.937981 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 21:52:29.980312 kernel: raid6: avx2x4 gen() 46829 MB/s Sep 9 21:52:29.995296 kernel: raid6: avx2x2 gen() 52807 MB/s Sep 9 21:52:30.012496 kernel: raid6: avx2x1 gen() 44581 MB/s Sep 9 21:52:30.012514 kernel: raid6: using algorithm avx2x2 gen() 52807 MB/s Sep 9 21:52:30.030515 kernel: raid6: .... xor() 31291 MB/s, rmw enabled Sep 9 21:52:30.030554 kernel: raid6: using avx2x2 recovery algorithm Sep 9 21:52:30.044296 kernel: xor: automatically using best checksumming function avx Sep 9 21:52:30.149305 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 21:52:30.152417 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 21:52:30.153463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:52:30.173162 systemd-udevd[494]: Using default interface naming scheme 'v255'. Sep 9 21:52:30.177008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:52:30.178044 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 21:52:30.196174 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Sep 9 21:52:30.209666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 21:52:30.210456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 21:52:30.291169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 9 21:52:30.292448 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 21:52:30.352295 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 9 21:52:30.369450 kernel: vmw_pvscsi: using 64bit dma Sep 9 21:52:30.369487 kernel: vmw_pvscsi: max_id: 16 Sep 9 21:52:30.369495 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 9 21:52:30.375050 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 9 21:52:30.375078 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 9 21:52:30.375086 kernel: vmw_pvscsi: using MSI-X Sep 9 21:52:30.378299 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Sep 9 21:52:30.383124 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 9 21:52:30.383244 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 9 21:52:30.383336 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 9 21:52:30.390315 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 9 21:52:30.393303 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 9 21:52:30.396128 (udev-worker)[540]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 9 21:52:30.400313 kernel: libata version 3.00 loaded. Sep 9 21:52:30.402307 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 9 21:52:30.403290 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 21:52:30.405295 kernel: scsi host1: ata_piix Sep 9 21:52:30.404999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:52:30.405075 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:30.405257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:30.405955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:52:30.411294 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 9 21:52:30.411386 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 21:52:30.414129 kernel: scsi host2: ata_piix Sep 9 21:52:30.414221 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Sep 9 21:52:30.414234 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Sep 9 21:52:30.419702 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 9 21:52:30.419806 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 9 21:52:30.419873 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 9 21:52:30.420559 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 9 21:52:30.420640 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 9 21:52:30.432301 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 21:52:30.433294 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 9 21:52:30.437134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:30.579308 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 9 21:52:30.583324 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 9 21:52:30.589792 kernel: AES CTR mode by8 optimization enabled Sep 9 21:52:30.622830 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 9 21:52:30.622989 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 21:52:30.635353 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 21:52:30.670506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. 
Sep 9 21:52:30.675967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 9 21:52:30.680382 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 9 21:52:30.680536 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 9 21:52:30.686035 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 9 21:52:30.686732 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 21:52:30.723302 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 21:52:30.735294 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 21:52:30.847441 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 21:52:30.848006 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 21:52:30.848279 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:52:30.848537 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 21:52:30.849216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 21:52:30.862987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 21:52:31.805335 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 21:52:31.806671 disk-uuid[646]: The operation has completed successfully. Sep 9 21:52:32.185785 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 21:52:32.186092 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 21:52:32.187200 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 21:52:32.200274 sh[675]: Success Sep 9 21:52:32.217616 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 21:52:32.217659 kernel: device-mapper: uevent: version 1.0.3 Sep 9 21:52:32.218825 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 21:52:32.226300 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 9 21:52:32.267515 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 21:52:32.270333 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 21:52:32.278512 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 21:52:32.290298 kernel: BTRFS: device fsid f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (687) Sep 9 21:52:32.292600 kernel: BTRFS info (device dm-0): first mount of filesystem f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 Sep 9 21:52:32.292622 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:32.299359 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 9 21:52:32.299386 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 21:52:32.299394 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 21:52:32.302194 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 21:52:32.302559 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 21:52:32.303157 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 9 21:52:32.304339 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Sep 9 21:52:32.401297 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (710) Sep 9 21:52:32.406920 kernel: BTRFS info (device sda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:32.406950 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:32.430319 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 21:52:32.430359 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 21:52:32.433314 kernel: BTRFS info (device sda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:32.433952 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 21:52:32.434663 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 21:52:32.586846 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 9 21:52:32.588390 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 21:52:32.654128 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 21:52:32.655494 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 21:52:32.680900 systemd-networkd[872]: lo: Link UP Sep 9 21:52:32.681126 systemd-networkd[872]: lo: Gained carrier Sep 9 21:52:32.681831 systemd-networkd[872]: Enumeration completed Sep 9 21:52:32.682059 systemd-networkd[872]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 9 21:52:32.682460 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 21:52:32.682706 systemd[1]: Reached target network.target - Network. Sep 9 21:52:32.684917 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 9 21:52:32.686352 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 9 21:52:32.685174 systemd-networkd[872]: ens192: Link UP Sep 9 21:52:32.685176 systemd-networkd[872]: ens192: Gained carrier Sep 9 21:52:32.806916 ignition[730]: Ignition 2.22.0 Sep 9 21:52:32.807218 ignition[730]: Stage: fetch-offline Sep 9 21:52:32.807246 ignition[730]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:32.807253 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 21:52:32.807338 ignition[730]: parsed url from cmdline: "" Sep 9 21:52:32.807340 ignition[730]: no config URL provided Sep 9 21:52:32.807345 ignition[730]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 21:52:32.807350 ignition[730]: no config at "/usr/lib/ignition/user.ign" Sep 9 21:52:32.808326 ignition[730]: config successfully fetched Sep 9 21:52:32.808347 ignition[730]: parsing config with SHA512: 945e7b369eb61f8085f2ef5aebd015a25a5499dd35e5bc40006ca05b618679e216b09d8acc78962a6bd642bea1884c8e6a4d7bd433292ee79c434978854c9a57 Sep 9 21:52:32.813019 unknown[730]: fetched base config from "system" Sep 9 21:52:32.813030 unknown[730]: fetched user config from "vmware" Sep 9 21:52:32.814166 ignition[730]: fetch-offline: fetch-offline passed Sep 9 21:52:32.814373 ignition[730]: Ignition finished successfully Sep 9 21:52:32.815851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 21:52:32.816240 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 21:52:32.816841 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 9 21:52:32.835036 ignition[882]: Ignition 2.22.0 Sep 9 21:52:32.835302 ignition[882]: Stage: kargs Sep 9 21:52:32.835481 ignition[882]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:32.835606 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 21:52:32.836050 ignition[882]: kargs: kargs passed Sep 9 21:52:32.836073 ignition[882]: Ignition finished successfully Sep 9 21:52:32.837680 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 21:52:32.838332 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 21:52:32.854679 ignition[889]: Ignition 2.22.0 Sep 9 21:52:32.854950 ignition[889]: Stage: disks Sep 9 21:52:32.855134 ignition[889]: no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:32.855265 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 21:52:32.855997 ignition[889]: disks: disks passed Sep 9 21:52:32.856132 ignition[889]: Ignition finished successfully Sep 9 21:52:32.857148 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 21:52:32.857378 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 21:52:32.857481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 21:52:32.857671 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 21:52:32.857867 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:52:32.858037 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:52:32.858713 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 21:52:33.460981 systemd-fsck[897]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 9 21:52:33.474891 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 21:52:33.476335 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 21:52:33.757267 kernel: EXT4-fs (sda9): mounted filesystem b54acc07-9600-49db-baed-d5fd6f41a1a5 r/w with ordered data mode. Quota mode: none. Sep 9 21:52:33.756776 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 21:52:33.757112 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 21:52:33.758150 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:52:33.759324 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 21:52:33.759702 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 21:52:33.759905 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 21:52:33.760094 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 21:52:33.768111 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 21:52:33.768947 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 9 21:52:33.776303 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (905) Sep 9 21:52:33.778784 kernel: BTRFS info (device sda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:33.778817 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:33.783301 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 21:52:33.783335 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 21:52:33.784243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 21:52:33.806879 initrd-setup-root[929]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 21:52:33.809266 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory Sep 9 21:52:33.811502 initrd-setup-root[943]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 21:52:33.813508 initrd-setup-root[950]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 21:52:33.869879 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 21:52:33.870725 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 21:52:33.871346 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 21:52:33.885959 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 21:52:33.888304 kernel: BTRFS info (device sda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:33.911136 ignition[1018]: INFO : Ignition 2.22.0 Sep 9 21:52:33.911136 ignition[1018]: INFO : Stage: mount Sep 9 21:52:33.911807 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:33.911807 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 21:52:33.912098 ignition[1018]: INFO : mount: mount passed Sep 9 21:52:33.912098 ignition[1018]: INFO : Ignition finished successfully Sep 9 21:52:33.912735 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 21:52:33.914339 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 21:52:33.923421 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 21:52:33.924521 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 21:52:34.058306 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1028) Sep 9 21:52:34.072795 kernel: BTRFS info (device sda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 21:52:34.072844 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 21:52:34.124322 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 21:52:34.124381 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 21:52:34.125906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 21:52:34.153537 ignition[1045]: INFO : Ignition 2.22.0 Sep 9 21:52:34.153537 ignition[1045]: INFO : Stage: files Sep 9 21:52:34.153932 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:34.153932 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 21:52:34.154184 ignition[1045]: DEBUG : files: compiled without relabeling support, skipping Sep 9 21:52:34.166249 ignition[1045]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 21:52:34.166249 ignition[1045]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 21:52:34.184738 ignition[1045]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 21:52:34.185073 ignition[1045]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 21:52:34.185392 unknown[1045]: wrote ssh authorized keys file for user: core Sep 9 21:52:34.185679 ignition[1045]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 21:52:34.224363 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 21:52:34.224761 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 21:52:34.266102 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 21:52:34.628380 systemd-networkd[872]: ens192: Gained IPv6LL Sep 9 21:52:34.662831 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 21:52:34.663097 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 21:52:34.663097 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 21:52:34.892321 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 21:52:35.041511 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 21:52:35.041777 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 21:52:35.041777 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 21:52:35.041777 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 21:52:35.041777 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 21:52:35.041777 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 21:52:35.041777 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 21:52:35.042650 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 21:52:35.042650 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Sep 9 21:52:35.054414 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 21:52:35.054647 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 21:52:35.054647 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:35.060582 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:35.060582 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:35.061019 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 21:52:35.545279 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 21:52:37.393629 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 21:52:37.394151 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 9 21:52:37.394835 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 9 21:52:37.395014 ignition[1045]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Sep 9 21:52:37.395156 ignition[1045]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 21:52:37.395613 ignition[1045]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 21:52:37.395613 ignition[1045]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Sep 9 21:52:37.395613 ignition[1045]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Sep 9 21:52:37.396050 ignition[1045]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 21:52:37.396050 ignition[1045]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 21:52:37.396050 ignition[1045]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Sep 9 21:52:37.396050 ignition[1045]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 21:52:37.558350 ignition[1045]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 21:52:37.560493 ignition[1045]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 21:52:37.560659 ignition[1045]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 21:52:37.560659 ignition[1045]: INFO : files: op(13): [started] setting preset to enabled for 
"prepare-helm.service" Sep 9 21:52:37.560659 ignition[1045]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 21:52:37.561076 ignition[1045]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 21:52:37.562173 ignition[1045]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 21:52:37.562173 ignition[1045]: INFO : files: files passed Sep 9 21:52:37.562173 ignition[1045]: INFO : Ignition finished successfully Sep 9 21:52:37.563322 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 21:52:37.564608 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 21:52:37.565346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 21:52:37.577502 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:52:37.577502 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:52:37.578765 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 21:52:37.579676 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 21:52:37.580179 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 21:52:37.581076 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 21:52:37.581635 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 21:52:37.581680 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 21:52:37.616924 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 21:52:37.617041 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 21:52:37.617492 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 21:52:37.617705 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 21:52:37.617973 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 21:52:37.618611 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 21:52:37.635970 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 21:52:37.637067 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 21:52:37.648805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:52:37.649155 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:52:37.649530 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 21:52:37.649852 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 21:52:37.650075 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 21:52:37.650541 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 21:52:37.650851 systemd[1]: Stopped target basic.target - Basic System. Sep 9 21:52:37.651116 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 21:52:37.651374 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 9 21:52:37.651523 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 21:52:37.651766 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 21:52:37.651947 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 21:52:37.652131 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 21:52:37.652337 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 21:52:37.652532 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 21:52:37.652707 systemd[1]: Stopped target swap.target - Swaps. Sep 9 21:52:37.652847 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 21:52:37.652935 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 21:52:37.653231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:52:37.653425 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:52:37.653583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 21:52:37.653644 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:52:37.653792 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 21:52:37.653871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 21:52:37.654177 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 21:52:37.654262 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 21:52:37.654489 systemd[1]: Stopped target paths.target - Path Units. Sep 9 21:52:37.654730 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 21:52:37.659310 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:52:37.659519 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 21:52:37.659704 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 21:52:37.659902 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 21:52:37.659972 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 21:52:37.660182 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 21:52:37.660225 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 21:52:37.660442 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 21:52:37.660529 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 21:52:37.660733 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 21:52:37.660812 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 21:52:37.661564 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 21:52:37.661665 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 21:52:37.661751 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:52:37.663380 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 21:52:37.663498 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 21:52:37.663596 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:52:37.663855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 21:52:37.663913 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 9 21:52:37.666858 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 21:52:37.671424 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 21:52:37.682682 ignition[1102]: INFO : Ignition 2.22.0 Sep 9 21:52:37.682980 ignition[1102]: INFO : Stage: umount Sep 9 21:52:37.683171 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 21:52:37.683317 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 21:52:37.684021 ignition[1102]: INFO : umount: umount passed Sep 9 21:52:37.684166 ignition[1102]: INFO : Ignition finished successfully Sep 9 21:52:37.685064 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 21:52:37.685276 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 21:52:37.685657 systemd[1]: Stopped target network.target - Network. Sep 9 21:52:37.685998 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 21:52:37.686031 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 21:52:37.686150 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 21:52:37.686173 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 21:52:37.686400 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 21:52:37.686425 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 21:52:37.686772 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 21:52:37.686794 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 21:52:37.687328 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 21:52:37.688002 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 21:52:37.692218 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 21:52:37.692628 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 21:52:37.694448 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 21:52:37.694712 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 21:52:37.694920 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 21:52:37.695799 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 21:52:37.696502 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 21:52:37.696839 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 21:52:37.696984 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:52:37.697633 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 21:52:37.697893 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 21:52:37.697930 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 21:52:37.698355 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 9 21:52:37.698377 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 9 21:52:37.698492 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 21:52:37.698513 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:52:37.699224 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 21:52:37.699560 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 9 21:52:37.699863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 21:52:37.700000 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:52:37.700365 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:52:37.702180 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 21:52:37.702216 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:52:37.703509 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 21:52:37.710471 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 21:52:37.711593 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:52:37.712080 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 21:52:37.712115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 21:52:37.712639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 21:52:37.712770 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:52:37.713009 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 21:52:37.713034 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 21:52:37.713443 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 21:52:37.713469 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 21:52:37.713892 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 21:52:37.713916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 21:52:37.714785 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 21:52:37.715036 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 21:52:37.715070 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:52:37.715551 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 21:52:37.715576 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:52:37.715868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 21:52:37.715891 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:52:37.716898 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 21:52:37.716928 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 21:52:37.716951 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 21:52:37.717110 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 21:52:37.720357 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 21:52:37.720576 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 21:52:37.720625 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 21:52:37.721048 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 21:52:37.721094 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 21:52:37.724070 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 21:52:37.724125 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Sep 9 21:52:37.724485 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 21:52:37.725006 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 21:52:37.738510 systemd[1]: Switching root. Sep 9 21:52:37.790056 systemd-journald[244]: Journal stopped Sep 9 21:52:39.098195 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Sep 9 21:52:39.098224 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 21:52:39.098233 kernel: SELinux: policy capability open_perms=1 Sep 9 21:52:39.098239 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 21:52:39.098244 kernel: SELinux: policy capability always_check_network=0 Sep 9 21:52:39.098251 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 21:52:39.098257 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 21:52:39.098263 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 21:52:39.098269 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 21:52:39.098275 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 21:52:39.100005 kernel: audit: type=1403 audit(1757454758.192:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 21:52:39.100023 systemd[1]: Successfully loaded SELinux policy in 40.021ms. Sep 9 21:52:39.100035 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.681ms. Sep 9 21:52:39.100043 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 21:52:39.100050 systemd[1]: Detected virtualization vmware. Sep 9 21:52:39.100056 systemd[1]: Detected architecture x86-64. Sep 9 21:52:39.100064 systemd[1]: Detected first boot. Sep 9 21:52:39.100071 systemd[1]: Initializing machine ID from random generator. Sep 9 21:52:39.100078 zram_generator::config[1145]: No configuration found. Sep 9 21:52:39.100172 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Sep 9 21:52:39.100185 kernel: Guest personality initialized and is active Sep 9 21:52:39.100191 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 21:52:39.100197 kernel: Initialized host personality Sep 9 21:52:39.100205 kernel: NET: Registered PF_VSOCK protocol family Sep 9 21:52:39.100212 systemd[1]: Populated /etc with preset unit settings. Sep 9 21:52:39.100220 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 21:52:39.100228 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Sep 9 21:52:39.100235 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 21:52:39.100241 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 21:52:39.100248 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 21:52:39.100256 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 21:52:39.100263 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 21:52:39.100270 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Sep 9 21:52:39.100277 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 21:52:39.100292 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 21:52:39.100299 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 21:52:39.100306 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 21:52:39.100314 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 21:52:39.100321 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 21:52:39.100328 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 21:52:39.100336 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 21:52:39.100343 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 21:52:39.100353 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 21:52:39.100360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 21:52:39.100367 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 21:52:39.100375 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 21:52:39.100382 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 21:52:39.100389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 21:52:39.100395 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 21:52:39.100402 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 21:52:39.100409 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 21:52:39.100416 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 21:52:39.100423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 21:52:39.100430 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 21:52:39.100437 systemd[1]: Reached target slices.target - Slice Units. Sep 9 21:52:39.100444 systemd[1]: Reached target swap.target - Swaps. Sep 9 21:52:39.100451 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 21:52:39.100458 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 21:52:39.100466 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 21:52:39.100473 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 21:52:39.100480 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 21:52:39.100487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 21:52:39.100493 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 21:52:39.100501 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 21:52:39.100508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 21:52:39.100515 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 21:52:39.100523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 9 21:52:39.100531 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 21:52:39.100537 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 21:52:39.100544 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 21:52:39.100551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 21:52:39.100558 systemd[1]: Reached target machines.target - Containers. Sep 9 21:52:39.100565 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 21:52:39.100577 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Sep 9 21:52:39.100585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 21:52:39.100592 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 21:52:39.100599 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 21:52:39.100606 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 21:52:39.100613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 21:52:39.100620 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 21:52:39.100627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 21:52:39.100634 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 21:52:39.100642 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 21:52:39.100649 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 21:52:39.100656 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 21:52:39.100664 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 21:52:39.100671 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:52:39.100678 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 21:52:39.100685 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 21:52:39.100692 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 21:52:39.100699 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 21:52:39.100707 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 21:52:39.100714 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 21:52:39.100721 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 21:52:39.100728 systemd[1]: Stopped verity-setup.service. Sep 9 21:52:39.100735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:39.100742 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 21:52:39.100749 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 21:52:39.100756 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 9 21:52:39.100764 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 21:52:39.100771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 21:52:39.100777 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 21:52:39.100784 kernel: fuse: init (API version 7.41) Sep 9 21:52:39.100791 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 21:52:39.100798 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 21:52:39.100805 kernel: loop: module loaded Sep 9 21:52:39.100812 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 21:52:39.100818 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 21:52:39.100826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 21:52:39.100833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 21:52:39.100840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 21:52:39.100847 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 21:52:39.100854 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 21:52:39.100861 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 21:52:39.100868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 21:52:39.100874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 21:52:39.100882 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 21:52:39.100890 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 21:52:39.100899 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 21:52:39.100907 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 21:52:39.100915 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 21:52:39.100922 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 21:52:39.100929 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 21:52:39.100937 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 21:52:39.100944 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 21:52:39.100952 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 21:52:39.100959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:52:39.100966 kernel: ACPI: bus type drm_connector registered Sep 9 21:52:39.100973 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 21:52:39.100980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 21:52:39.100988 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 21:52:39.100996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 21:52:39.101003 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:52:39.101010 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Sep 9 21:52:39.101018 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 21:52:39.101025 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 21:52:39.101032 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 21:52:39.101039 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 21:52:39.101048 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 21:52:39.101057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 21:52:39.101064 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 21:52:39.101087 systemd-journald[1235]: Collecting audit messages is disabled. Sep 9 21:52:39.101106 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 21:52:39.101114 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 21:52:39.101122 kernel: loop0: detected capacity change from 0 to 221472 Sep 9 21:52:39.101129 systemd-journald[1235]: Journal started Sep 9 21:52:39.101145 systemd-journald[1235]: Runtime Journal (/run/log/journal/f6ef342dceda4beeaadf43140b100c28) is 4.8M, max 38.8M, 34M free. Sep 9 21:52:39.110039 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 21:52:38.857486 systemd[1]: Queued start job for default target multi-user.target. Sep 9 21:52:38.870342 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 9 21:52:38.870579 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 21:52:39.110605 jq[1215]: true Sep 9 21:52:39.111086 jq[1248]: true Sep 9 21:52:39.111350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 21:52:39.136464 systemd-journald[1235]: Time spent on flushing to /var/log/journal/f6ef342dceda4beeaadf43140b100c28 is 17.520ms for 1771 entries. Sep 9 21:52:39.136464 systemd-journald[1235]: System Journal (/var/log/journal/f6ef342dceda4beeaadf43140b100c28) is 8M, max 584.8M, 576.8M free. Sep 9 21:52:39.559752 systemd-journald[1235]: Received client request to flush runtime journal. Sep 9 21:52:39.559802 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 21:52:39.559823 kernel: loop1: detected capacity change from 0 to 2960 Sep 9 21:52:39.147324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 21:52:39.148146 ignition[1271]: Ignition 2.22.0 Sep 9 21:52:39.226612 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Sep 9 21:52:39.148799 ignition[1271]: deleting config from guestinfo properties Sep 9 21:52:39.227167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:52:39.219583 ignition[1271]: Successfully deleted config Sep 9 21:52:39.450377 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 21:52:39.452171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 21:52:39.512604 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Sep 9 21:52:39.512616 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Sep 9 21:52:39.515343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 21:52:39.560820 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Sep 9 21:52:39.579309 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 21:52:39.606878 kernel: loop2: detected capacity change from 0 to 110984 Sep 9 21:52:39.660308 kernel: loop3: detected capacity change from 0 to 128016 Sep 9 21:52:39.709348 kernel: loop4: detected capacity change from 0 to 221472 Sep 9 21:52:39.740311 kernel: loop5: detected capacity change from 0 to 2960 Sep 9 21:52:39.763648 kernel: loop6: detected capacity change from 0 to 110984 Sep 9 21:52:39.778644 kernel: loop7: detected capacity change from 0 to 128016 Sep 9 21:52:39.802801 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Sep 9 21:52:39.803242 (sd-merge)[1320]: Merged extensions into '/usr'. Sep 9 21:52:39.807336 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 21:52:39.807346 systemd[1]: Reloading... Sep 9 21:52:39.858306 zram_generator::config[1348]: No configuration found. Sep 9 21:52:39.950206 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 21:52:39.995499 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 21:52:39.995734 systemd[1]: Reloading finished in 188 ms. Sep 9 21:52:40.015002 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 21:52:40.015319 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 21:52:40.021075 systemd[1]: Starting ensure-sysext.service... Sep 9 21:52:40.023342 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 21:52:40.024419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 21:52:40.037563 systemd[1]: Reload requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Sep 9 21:52:40.037572 systemd[1]: Reloading... Sep 9 21:52:40.041166 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 21:52:40.041382 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 21:52:40.041573 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 21:52:40.041853 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 21:52:40.042519 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 21:52:40.042735 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 9 21:52:40.042816 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 9 21:52:40.044755 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 21:52:40.044830 systemd-tmpfiles[1403]: Skipping /boot Sep 9 21:52:40.049104 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 21:52:40.049160 systemd-tmpfiles[1403]: Skipping /boot Sep 9 21:52:40.059963 ldconfig[1264]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 21:52:40.064784 systemd-udevd[1404]: Using default interface naming scheme 'v255'. 
Sep 9 21:52:40.102469 zram_generator::config[1459]: No configuration found. Sep 9 21:52:40.178472 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 21:52:40.241661 systemd[1]: Reloading finished in 203 ms. Sep 9 21:52:40.243314 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 21:52:40.248945 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 21:52:40.249431 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 21:52:40.251305 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 21:52:40.257768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 21:52:40.265308 kernel: ACPI: button: Power Button [PWRF] Sep 9 21:52:40.266731 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 21:52:40.268878 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 21:52:40.270626 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 21:52:40.273723 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 21:52:40.275030 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 21:52:40.280802 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 21:52:40.283391 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 21:52:40.288171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:40.290334 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 21:52:40.292869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 21:52:40.295322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 21:52:40.295469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:52:40.295530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:52:40.296809 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 21:52:40.296910 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:40.299949 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:40.300042 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:52:40.300097 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:52:40.300152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 9 21:52:40.301824 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:40.312475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 21:52:40.313395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 21:52:40.313471 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 21:52:40.313575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 21:52:40.320370 systemd[1]: Finished ensure-sysext.service. Sep 9 21:52:40.322348 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 21:52:40.324545 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 21:52:40.342432 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 21:52:40.342716 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 21:52:40.342829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 21:52:40.344588 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 21:52:40.350728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 21:52:40.352788 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 21:52:40.353114 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 21:52:40.353343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 21:52:40.353444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 21:52:40.355852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 21:52:40.355900 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 21:52:40.355921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 21:52:40.360191 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 21:52:40.360320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 21:52:40.363149 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 21:52:40.368010 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 21:52:40.379778 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:52:40.380372 augenrules[1573]: No rules Sep 9 21:52:40.380310 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:52:40.421880 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 9 21:52:40.424405 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 9 21:52:40.437398 systemd-networkd[1527]: lo: Link UP Sep 9 21:52:40.437690 systemd-networkd[1527]: lo: Gained carrier Sep 9 21:52:40.439060 systemd-networkd[1527]: Enumeration completed Sep 9 21:52:40.439745 systemd-networkd[1527]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Sep 9 21:52:40.440336 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 21:52:40.443388 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 9 21:52:40.443510 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 9 21:52:40.444467 systemd-networkd[1527]: ens192: Link UP Sep 9 21:52:40.444772 systemd-networkd[1527]: ens192: Gained carrier Sep 9 21:52:40.445351 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 21:52:40.449383 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 21:52:40.463413 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 21:52:40.471791 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 21:52:40.471957 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 21:52:40.483063 systemd-resolved[1528]: Positive Trust Anchors: Sep 9 21:52:40.483071 systemd-resolved[1528]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 21:52:40.483094 systemd-resolved[1528]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 21:52:40.484389 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 21:52:40.487308 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Sep 9 21:52:40.488718 systemd-resolved[1528]: Defaulting to hostname 'linux'. Sep 9 21:52:40.490230 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:52:40.490384 systemd[1]: Reached target network.target - Network. Sep 9 21:52:40.490469 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:52:40.490575 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:52:40.490718 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 21:52:40.490837 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 21:52:40.490941 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 21:52:40.491114 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 21:52:40.491259 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 21:52:40.491377 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
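systemd-networkd configures ens192 from /etc/systemd/network/00-vmware.network; the file's contents are not shown in this log, but a DHCP-based .network file matching that interface would look roughly like the following sketch (an assumption, not the actual shipped file):

    [Match]
    Name=ens192

    [Network]
    DHCP=yes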
Sep 9 21:52:40.491478 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 21:52:40.491499 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:52:40.491585 systemd[1]: Reached target timers.target - Timer Units. Sep 9 21:52:40.492740 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 21:52:40.494078 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 21:52:40.495923 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 21:52:40.496605 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 21:52:40.496718 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 21:52:40.500509 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 21:52:40.500855 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 21:52:40.501685 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 21:52:40.502775 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:52:40.503104 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:52:40.503326 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:52:40.503342 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:52:40.504455 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 21:52:40.506403 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 21:52:40.507176 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 21:52:40.509618 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 21:52:40.511362 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 21:52:40.511470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 21:52:40.517821 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 21:52:40.519169 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 21:52:40.522386 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 21:52:40.523544 jq[1595]: false Sep 9 21:52:40.524152 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 21:52:40.526001 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 21:52:40.535310 extend-filesystems[1596]: Found /dev/sda6 Sep 9 21:52:40.538296 extend-filesystems[1596]: Found /dev/sda9 Sep 9 21:52:40.539487 extend-filesystems[1596]: Checking size of /dev/sda9 Sep 9 21:52:40.540162 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 21:52:40.540484 oslogin_cache_refresh[1597]: Refreshing passwd entry cache Sep 9 21:52:40.541459 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Refreshing passwd entry cache Sep 9 21:52:40.540757 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 21:52:40.541217 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Sep 9 21:52:40.542452 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 21:52:40.546347 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 21:52:40.546666 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Failure getting users, quitting Sep 9 21:52:40.546666 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 21:52:40.546666 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Refreshing group entry cache Sep 9 21:52:40.546558 oslogin_cache_refresh[1597]: Failure getting users, quitting Sep 9 21:52:40.546569 oslogin_cache_refresh[1597]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 21:52:40.546616 oslogin_cache_refresh[1597]: Refreshing group entry cache Sep 9 21:52:40.553878 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Failure getting groups, quitting Sep 9 21:52:40.553878 google_oslogin_nss_cache[1597]: oslogin_cache_refresh[1597]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 21:52:40.553353 oslogin_cache_refresh[1597]: Failure getting groups, quitting Sep 9 21:52:40.553360 oslogin_cache_refresh[1597]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 21:52:40.555377 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Sep 9 21:52:40.557876 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 21:52:40.558138 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 21:52:40.558256 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 21:52:40.558417 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 21:52:40.558528 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 21:52:40.566609 jq[1616]: true Sep 9 21:52:40.566990 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 21:52:40.567151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 21:52:40.567432 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 21:52:40.567901 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 21:52:40.568686 extend-filesystems[1596]: Old size kept for /dev/sda9 Sep 9 21:52:40.573085 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 21:52:40.574324 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 21:52:40.582303 update_engine[1613]: I20250909 21:52:40.577265 1613 main.cc:92] Flatcar Update Engine starting Sep 9 21:52:40.590290 jq[1631]: true Sep 9 21:52:40.592357 (ntainerd)[1643]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 21:52:40.594211 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Sep 9 21:52:40.600646 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Sep 9 21:52:40.610432 tar[1628]: linux-amd64/helm Sep 9 21:52:40.628625 dbus-daemon[1593]: [system] SELinux support is enabled Sep 9 21:52:40.628737 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 9 21:52:40.630299 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 21:52:40.630317 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 21:52:40.630454 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 21:52:40.630471 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 21:54:13.487715 systemd-timesyncd[1542]: Contacted time server 45.83.234.123:123 (0.flatcar.pool.ntp.org). Sep 9 21:54:13.487773 systemd-timesyncd[1542]: Initial clock synchronization to Tue 2025-09-09 21:54:13.487564 UTC. Sep 9 21:54:13.487794 systemd-resolved[1528]: Clock change detected. Flushing caches. Sep 9 21:54:13.497922 update_engine[1613]: I20250909 21:54:13.497822 1613 update_check_scheduler.cc:74] Next update check in 11m45s Sep 9 21:54:13.498134 systemd[1]: Started update-engine.service - Update Engine. Sep 9 21:54:13.506923 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 21:54:13.507078 bash[1669]: Updated "/home/core/.ssh/authorized_keys" Sep 9 21:54:13.508620 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 21:54:13.509045 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 21:54:13.541517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 21:54:13.547370 (udev-worker)[1457]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 9 21:54:13.606433 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 21:54:13.606717 systemd-logind[1612]: New seat seat0. Sep 9 21:54:13.607911 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 21:54:13.630669 systemd-logind[1612]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 21:54:13.642824 sshd_keygen[1625]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 21:54:13.680468 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Sep 9 21:54:13.688071 unknown[1646]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Sep 9 21:54:13.691437 unknown[1646]: Core dump limit set to -1 Sep 9 21:54:13.726121 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 21:54:13.728533 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 21:54:13.740127 containerd[1643]: time="2025-09-09T21:54:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 21:54:13.741794 locksmithd[1670]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 21:54:13.763662 containerd[1643]: time="2025-09-09T21:54:13.763628411Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 21:54:13.769752 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 21:54:13.770057 systemd[1]: Finished issuegen.service - Generate /run/issue. 
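locksmithd starts with strategy="reboot" above, meaning the host reboots as soon as update_engine finishes applying an update. On Flatcar this is normally tuned through the update configuration file; a sketch, assuming the usual /etc/flatcar/update.conf location and keys:

    GROUP=stable
    REBOOT_STRATEGY=etcd-lock    # alternatives include "reboot" (the current setting) and "off"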
Sep 9 21:54:13.772527 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 21:54:13.791655 containerd[1643]: time="2025-09-09T21:54:13.791625815Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.382µs" Sep 9 21:54:13.791655 containerd[1643]: time="2025-09-09T21:54:13.791649136Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 21:54:13.791726 containerd[1643]: time="2025-09-09T21:54:13.791661026Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 21:54:13.791763 containerd[1643]: time="2025-09-09T21:54:13.791748078Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 21:54:13.791763 containerd[1643]: time="2025-09-09T21:54:13.791761254Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 21:54:13.791797 containerd[1643]: time="2025-09-09T21:54:13.791775448Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:54:13.791816 containerd[1643]: time="2025-09-09T21:54:13.791807329Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:54:13.791816 containerd[1643]: time="2025-09-09T21:54:13.791814292Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:54:13.791929 containerd[1643]: time="2025-09-09T21:54:13.791914924Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:54:13.791929 containerd[1643]: time="2025-09-09T21:54:13.791926042Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:54:13.791959 containerd[1643]: time="2025-09-09T21:54:13.791933106Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:54:13.791959 containerd[1643]: time="2025-09-09T21:54:13.791937639Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 21:54:13.791987 containerd[1643]: time="2025-09-09T21:54:13.791979748Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 21:54:13.792098 containerd[1643]: time="2025-09-09T21:54:13.792083853Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 21:54:13.792117 containerd[1643]: time="2025-09-09T21:54:13.792103069Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 21:54:13.792117 containerd[1643]: time="2025-09-09T21:54:13.792108991Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 21:54:13.792151 containerd[1643]: time="2025-09-09T21:54:13.792124470Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups 
type=io.containerd.monitor.task.v1 Sep 9 21:54:13.792273 containerd[1643]: time="2025-09-09T21:54:13.792256473Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 21:54:13.792320 containerd[1643]: time="2025-09-09T21:54:13.792293634Z" level=info msg="metadata content store policy set" policy=shared Sep 9 21:54:13.803071 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 21:54:13.813714 containerd[1643]: time="2025-09-09T21:54:13.813675704Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 21:54:13.813714 containerd[1643]: time="2025-09-09T21:54:13.813725595Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 21:54:13.813714 containerd[1643]: time="2025-09-09T21:54:13.813736150Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813744434Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813755239Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813761626Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813775292Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813783578Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813803506Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813811586Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813819526Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 21:54:13.813910 containerd[1643]: time="2025-09-09T21:54:13.813827194Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813926201Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813938936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813947791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813953754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813961332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813968080Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813973997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813979024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813985072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813990896Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.813996353Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.814037719Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 21:54:13.814048 containerd[1643]: time="2025-09-09T21:54:13.814045588Z" level=info msg="Start snapshots syncer" Sep 9 21:54:13.814320 containerd[1643]: time="2025-09-09T21:54:13.814064400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 21:54:13.814320 containerd[1643]: time="2025-09-09T21:54:13.814201451Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 21:54:13.814682 containerd[1643]: time="2025-09-09T21:54:13.814229647Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 21:54:13.814682 containerd[1643]: time="2025-09-09T21:54:13.814270058Z" 
level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 21:54:13.814682 containerd[1643]: time="2025-09-09T21:54:13.814320977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 21:54:13.814682 containerd[1643]: time="2025-09-09T21:54:13.814339366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 21:54:13.814682 containerd[1643]: time="2025-09-09T21:54:13.814346078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 21:54:13.814682 containerd[1643]: time="2025-09-09T21:54:13.814352305Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 21:54:13.816584 containerd[1643]: time="2025-09-09T21:54:13.816560912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 21:54:13.816584 containerd[1643]: time="2025-09-09T21:54:13.816577905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 21:54:13.816626 containerd[1643]: time="2025-09-09T21:54:13.816585768Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 21:54:13.816626 containerd[1643]: time="2025-09-09T21:54:13.816602830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 21:54:13.816626 containerd[1643]: time="2025-09-09T21:54:13.816610341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 21:54:13.816626 containerd[1643]: time="2025-09-09T21:54:13.816616194Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816657694Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816672122Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816677783Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816683076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816687299Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816692221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816697778Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816707861Z" level=info msg="runtime interface created" Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816710960Z" level=info msg="created NRI interface" Sep 9 21:54:13.816741 containerd[1643]: 
time="2025-09-09T21:54:13.816723906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 21:54:13.816741 containerd[1643]: time="2025-09-09T21:54:13.816732885Z" level=info msg="Connect containerd service" Sep 9 21:54:13.816978 containerd[1643]: time="2025-09-09T21:54:13.816750814Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 21:54:13.817447 containerd[1643]: time="2025-09-09T21:54:13.817168024Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:54:13.818448 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 21:54:13.830769 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 21:54:13.831015 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 21:54:13.844230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922131148Z" level=info msg="Start subscribing containerd event" Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922164412Z" level=info msg="Start recovering state" Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922197056Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922228619Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922235644Z" level=info msg="Start event monitor" Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922244587Z" level=info msg="Start cni network conf syncer for default" Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922250652Z" level=info msg="Start streaming server" Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922255878Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922259849Z" level=info msg="runtime interface starting up..." Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922263068Z" level=info msg="starting plugins..." Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922270146Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 21:54:13.922402 containerd[1643]: time="2025-09-09T21:54:13.922335569Z" level=info msg="containerd successfully booted in 0.182511s" Sep 9 21:54:13.922419 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 21:54:13.981087 tar[1628]: linux-amd64/LICENSE Sep 9 21:54:13.981087 tar[1628]: linux-amd64/README.md Sep 9 21:54:13.996158 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 21:54:14.652496 systemd-networkd[1527]: ens192: Gained IPv6LL Sep 9 21:54:14.654490 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 21:54:14.654985 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 21:54:14.656077 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Sep 9 21:54:14.657509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:54:14.662542 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Sep 9 21:54:14.687026 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 21:54:14.697021 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 21:54:14.697172 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 9 21:54:14.697783 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 21:54:15.612195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:54:15.612755 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 21:54:15.613241 systemd[1]: Startup finished in 2.656s (kernel) + 8.611s (initrd) + 4.602s (userspace) = 15.870s. Sep 9 21:54:15.619072 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:54:15.645587 login[1712]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 21:54:15.647590 login[1713]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 21:54:15.654594 systemd-logind[1612]: New session 1 of user core. Sep 9 21:54:15.655334 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 21:54:15.656460 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 21:54:15.659660 systemd-logind[1612]: New session 2 of user core. Sep 9 21:54:15.689014 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 21:54:15.690576 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 21:54:15.699770 (systemd)[1819]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 21:54:15.701191 systemd-logind[1612]: New session c1 of user core. Sep 9 21:54:15.783392 systemd[1819]: Queued start job for default target default.target. Sep 9 21:54:15.789411 systemd[1819]: Created slice app.slice - User Application Slice. Sep 9 21:54:15.789428 systemd[1819]: Reached target paths.target - Paths. Sep 9 21:54:15.789597 systemd[1819]: Reached target timers.target - Timers. Sep 9 21:54:15.791405 systemd[1819]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 21:54:15.797201 systemd[1819]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 21:54:15.797229 systemd[1819]: Reached target sockets.target - Sockets. Sep 9 21:54:15.797346 systemd[1819]: Reached target basic.target - Basic System. Sep 9 21:54:15.797407 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 21:54:15.798153 systemd[1819]: Reached target default.target - Main User Target. Sep 9 21:54:15.798178 systemd[1819]: Startup finished in 93ms. Sep 9 21:54:15.798895 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 21:54:15.800725 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 21:54:16.156920 kubelet[1812]: E0909 21:54:16.156882 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:54:16.158694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:54:16.158810 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
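The kubelet exit above is not a crash in the kubelet binary: /var/lib/kubelet/config.yaml does not exist yet (it is normally written by kubeadm during init or join), so the service fails and systemd keeps scheduling restarts, as the climbing restart counter in the later entries shows. A minimal sketch of the file the kubelet is looking for, assuming the standard kubelet.config.k8s.io/v1beta1 schema (values are illustrative):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    authentication:
      anonymous:
        enabled: false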
Sep 9 21:54:16.159073 systemd[1]: kubelet.service: Consumed 660ms CPU time, 263.3M memory peak. Sep 9 21:54:26.409381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 21:54:26.410816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:54:26.694061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:54:26.697578 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:54:26.751129 kubelet[1862]: E0909 21:54:26.751097 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:54:26.753835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:54:26.753928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:54:26.754273 systemd[1]: kubelet.service: Consumed 108ms CPU time, 110.9M memory peak. Sep 9 21:54:37.004413 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 21:54:37.005748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:54:37.347995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:54:37.357582 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:54:37.396877 kubelet[1877]: E0909 21:54:37.396845 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:54:37.398403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:54:37.398543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:54:37.398913 systemd[1]: kubelet.service: Consumed 108ms CPU time, 110.6M memory peak. Sep 9 21:54:43.895953 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 21:54:43.898654 systemd[1]: Started sshd@0-139.178.70.109:22-139.178.89.65:53580.service - OpenSSH per-connection server daemon (139.178.89.65:53580). Sep 9 21:54:43.980422 sshd[1885]: Accepted publickey for core from 139.178.89.65 port 53580 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:54:43.981287 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:54:43.984583 systemd-logind[1612]: New session 3 of user core. Sep 9 21:54:43.994482 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 21:54:44.048572 systemd[1]: Started sshd@1-139.178.70.109:22-139.178.89.65:53586.service - OpenSSH per-connection server daemon (139.178.89.65:53586). Sep 9 21:54:44.093449 sshd[1891]: Accepted publickey for core from 139.178.89.65 port 53586 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:54:44.094321 sshd-session[1891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:54:44.097392 systemd-logind[1612]: New session 4 of user core. 
Sep 9 21:54:44.106489 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 21:54:44.154352 sshd[1894]: Connection closed by 139.178.89.65 port 53586 Sep 9 21:54:44.154783 sshd-session[1891]: pam_unix(sshd:session): session closed for user core Sep 9 21:54:44.165896 systemd[1]: sshd@1-139.178.70.109:22-139.178.89.65:53586.service: Deactivated successfully. Sep 9 21:54:44.166929 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 21:54:44.167837 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit. Sep 9 21:54:44.169126 systemd-logind[1612]: Removed session 4. Sep 9 21:54:44.170219 systemd[1]: Started sshd@2-139.178.70.109:22-139.178.89.65:53602.service - OpenSSH per-connection server daemon (139.178.89.65:53602). Sep 9 21:54:44.210292 sshd[1900]: Accepted publickey for core from 139.178.89.65 port 53602 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:54:44.211080 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:54:44.214094 systemd-logind[1612]: New session 5 of user core. Sep 9 21:54:44.223452 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 21:54:44.269991 sshd[1903]: Connection closed by 139.178.89.65 port 53602 Sep 9 21:54:44.270982 sshd-session[1900]: pam_unix(sshd:session): session closed for user core Sep 9 21:54:44.280684 systemd[1]: sshd@2-139.178.70.109:22-139.178.89.65:53602.service: Deactivated successfully. Sep 9 21:54:44.281586 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 21:54:44.282215 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit. Sep 9 21:54:44.283623 systemd[1]: Started sshd@3-139.178.70.109:22-139.178.89.65:53614.service - OpenSSH per-connection server daemon (139.178.89.65:53614). Sep 9 21:54:44.284251 systemd-logind[1612]: Removed session 5. Sep 9 21:54:44.322958 sshd[1909]: Accepted publickey for core from 139.178.89.65 port 53614 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:54:44.323694 sshd-session[1909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:54:44.326980 systemd-logind[1612]: New session 6 of user core. Sep 9 21:54:44.332452 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 21:54:44.379551 sshd[1912]: Connection closed by 139.178.89.65 port 53614 Sep 9 21:54:44.379825 sshd-session[1909]: pam_unix(sshd:session): session closed for user core Sep 9 21:54:44.390417 systemd[1]: sshd@3-139.178.70.109:22-139.178.89.65:53614.service: Deactivated successfully. Sep 9 21:54:44.391398 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 21:54:44.391866 systemd-logind[1612]: Session 6 logged out. Waiting for processes to exit. Sep 9 21:54:44.393221 systemd[1]: Started sshd@4-139.178.70.109:22-139.178.89.65:53624.service - OpenSSH per-connection server daemon (139.178.89.65:53624). Sep 9 21:54:44.393917 systemd-logind[1612]: Removed session 6. Sep 9 21:54:44.430653 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 53624 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:54:44.430978 sshd-session[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:54:44.433835 systemd-logind[1612]: New session 7 of user core. Sep 9 21:54:44.443499 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 21:54:44.555510 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 21:54:44.555666 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:54:44.570505 sudo[1922]: pam_unix(sudo:session): session closed for user root Sep 9 21:54:44.571223 sshd[1921]: Connection closed by 139.178.89.65 port 53624 Sep 9 21:54:44.571515 sshd-session[1918]: pam_unix(sshd:session): session closed for user core Sep 9 21:54:44.583479 systemd[1]: sshd@4-139.178.70.109:22-139.178.89.65:53624.service: Deactivated successfully. Sep 9 21:54:44.584410 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 21:54:44.584863 systemd-logind[1612]: Session 7 logged out. Waiting for processes to exit. Sep 9 21:54:44.586274 systemd[1]: Started sshd@5-139.178.70.109:22-139.178.89.65:53632.service - OpenSSH per-connection server daemon (139.178.89.65:53632). Sep 9 21:54:44.587557 systemd-logind[1612]: Removed session 7. Sep 9 21:54:44.629772 sshd[1928]: Accepted publickey for core from 139.178.89.65 port 53632 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:54:44.630503 sshd-session[1928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:54:44.633824 systemd-logind[1612]: New session 8 of user core. Sep 9 21:54:44.639453 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 21:54:44.686488 sudo[1933]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 21:54:44.686638 sudo[1933]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:54:44.694553 sudo[1933]: pam_unix(sudo:session): session closed for user root Sep 9 21:54:44.697420 sudo[1932]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 21:54:44.697563 sudo[1932]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:54:44.703419 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 21:54:44.725586 augenrules[1955]: No rules Sep 9 21:54:44.726427 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 21:54:44.726589 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 21:54:44.727070 sudo[1932]: pam_unix(sudo:session): session closed for user root Sep 9 21:54:44.727939 sshd[1931]: Connection closed by 139.178.89.65 port 53632 Sep 9 21:54:44.728178 sshd-session[1928]: pam_unix(sshd:session): session closed for user core Sep 9 21:54:44.734354 systemd[1]: sshd@5-139.178.70.109:22-139.178.89.65:53632.service: Deactivated successfully. Sep 9 21:54:44.735717 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 21:54:44.736394 systemd-logind[1612]: Session 8 logged out. Waiting for processes to exit. Sep 9 21:54:44.738205 systemd[1]: Started sshd@6-139.178.70.109:22-139.178.89.65:53646.service - OpenSSH per-connection server daemon (139.178.89.65:53646). Sep 9 21:54:44.739791 systemd-logind[1612]: Removed session 8. Sep 9 21:54:44.774501 sshd[1964]: Accepted publickey for core from 139.178.89.65 port 53646 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:54:44.775235 sshd-session[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:54:44.777719 systemd-logind[1612]: New session 9 of user core. Sep 9 21:54:44.787447 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 9 21:54:44.834348 sudo[1968]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 21:54:44.834748 sudo[1968]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 21:54:45.374477 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 21:54:45.387727 (dockerd)[1985]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 21:54:45.730731 dockerd[1985]: time="2025-09-09T21:54:45.730555714Z" level=info msg="Starting up" Sep 9 21:54:45.731260 dockerd[1985]: time="2025-09-09T21:54:45.731246891Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 21:54:45.737554 dockerd[1985]: time="2025-09-09T21:54:45.737509520Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 21:54:45.793880 dockerd[1985]: time="2025-09-09T21:54:45.793858007Z" level=info msg="Loading containers: start." Sep 9 21:54:45.802381 kernel: Initializing XFRM netlink socket Sep 9 21:54:46.049941 systemd-networkd[1527]: docker0: Link UP Sep 9 21:54:46.061516 dockerd[1985]: time="2025-09-09T21:54:46.061425239Z" level=info msg="Loading containers: done." Sep 9 21:54:46.070994 dockerd[1985]: time="2025-09-09T21:54:46.070959298Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 21:54:46.071090 dockerd[1985]: time="2025-09-09T21:54:46.071020956Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 21:54:46.071090 dockerd[1985]: time="2025-09-09T21:54:46.071081629Z" level=info msg="Initializing buildkit" Sep 9 21:54:46.082020 dockerd[1985]: time="2025-09-09T21:54:46.081991082Z" level=info msg="Completed buildkit initialization" Sep 9 21:54:46.086643 dockerd[1985]: time="2025-09-09T21:54:46.086612623Z" level=info msg="Daemon has completed initialization" Sep 9 21:54:46.087107 dockerd[1985]: time="2025-09-09T21:54:46.087025093Z" level=info msg="API listen on /run/docker.sock" Sep 9 21:54:46.086779 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 21:54:47.510060 containerd[1643]: time="2025-09-09T21:54:47.510032907Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 21:54:47.544396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 21:54:47.546461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:54:47.804197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
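The dockerd warning about overlay2 ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") only affects how image layers are diffed during builds, not running containers. Two quick ways to confirm the kernel option on a host like this, assuming the kernel exposes its config and the overlay module is loaded:

    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz
    cat /sys/module/overlay/parameters/redirect_dir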
Sep 9 21:54:47.811648 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:54:47.851955 kubelet[2202]: E0909 21:54:47.851922 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:54:47.853282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:54:47.853447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:54:47.853804 systemd[1]: kubelet.service: Consumed 108ms CPU time, 108.7M memory peak. Sep 9 21:54:48.322166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432991793.mount: Deactivated successfully. Sep 9 21:54:49.354469 containerd[1643]: time="2025-09-09T21:54:49.354435967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:49.356960 containerd[1643]: time="2025-09-09T21:54:49.356942817Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 9 21:54:49.359472 containerd[1643]: time="2025-09-09T21:54:49.359454754Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:49.363859 containerd[1643]: time="2025-09-09T21:54:49.363836493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:49.364233 containerd[1643]: time="2025-09-09T21:54:49.364139129Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 1.854061962s" Sep 9 21:54:49.364233 containerd[1643]: time="2025-09-09T21:54:49.364158219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 21:54:49.364620 containerd[1643]: time="2025-09-09T21:54:49.364607538Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 21:54:50.587379 containerd[1643]: time="2025-09-09T21:54:50.587331576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:50.594107 containerd[1643]: time="2025-09-09T21:54:50.594058684Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 9 21:54:50.606215 containerd[1643]: time="2025-09-09T21:54:50.606183734Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:50.612245 containerd[1643]: time="2025-09-09T21:54:50.612210444Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:50.612856 containerd[1643]: time="2025-09-09T21:54:50.612625659Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.248002431s" Sep 9 21:54:50.612856 containerd[1643]: time="2025-09-09T21:54:50.612647609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 21:54:50.613015 containerd[1643]: time="2025-09-09T21:54:50.613003310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 21:54:51.887035 containerd[1643]: time="2025-09-09T21:54:51.886569646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:51.887530 containerd[1643]: time="2025-09-09T21:54:51.887519663Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 9 21:54:51.887807 containerd[1643]: time="2025-09-09T21:54:51.887796495Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:51.889557 containerd[1643]: time="2025-09-09T21:54:51.889545671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:51.890198 containerd[1643]: time="2025-09-09T21:54:51.889931898Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.276912705s" Sep 9 21:54:51.890384 containerd[1643]: time="2025-09-09T21:54:51.890374862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 21:54:51.890709 containerd[1643]: time="2025-09-09T21:54:51.890676438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 21:54:53.291151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939104814.mount: Deactivated successfully. 
Sep 9 21:54:53.591617 containerd[1643]: time="2025-09-09T21:54:53.591588846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:53.591990 containerd[1643]: time="2025-09-09T21:54:53.591969783Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 9 21:54:53.592187 containerd[1643]: time="2025-09-09T21:54:53.592175582Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:53.593048 containerd[1643]: time="2025-09-09T21:54:53.593036428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:53.593472 containerd[1643]: time="2025-09-09T21:54:53.593402928Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 1.702625576s" Sep 9 21:54:53.593472 containerd[1643]: time="2025-09-09T21:54:53.593421563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 21:54:53.593663 containerd[1643]: time="2025-09-09T21:54:53.593642737Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 21:54:54.404864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729640192.mount: Deactivated successfully. 
Sep 9 21:54:55.427821 containerd[1643]: time="2025-09-09T21:54:55.427795708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:55.428393 containerd[1643]: time="2025-09-09T21:54:55.428375475Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 21:54:55.429159 containerd[1643]: time="2025-09-09T21:54:55.429143118Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:55.430456 containerd[1643]: time="2025-09-09T21:54:55.430437443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:55.431110 containerd[1643]: time="2025-09-09T21:54:55.431091200Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.837432374s" Sep 9 21:54:55.431140 containerd[1643]: time="2025-09-09T21:54:55.431112691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 21:54:55.431508 containerd[1643]: time="2025-09-09T21:54:55.431354666Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 21:54:55.941057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898943098.mount: Deactivated successfully. 
Sep 9 21:54:55.943921 containerd[1643]: time="2025-09-09T21:54:55.943858783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:54:55.944649 containerd[1643]: time="2025-09-09T21:54:55.944160895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 21:54:55.944649 containerd[1643]: time="2025-09-09T21:54:55.944610533Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:54:55.945862 containerd[1643]: time="2025-09-09T21:54:55.945845633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 21:54:55.946417 containerd[1643]: time="2025-09-09T21:54:55.946397678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 515.017488ms" Sep 9 21:54:55.946459 containerd[1643]: time="2025-09-09T21:54:55.946420835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 21:54:55.947042 containerd[1643]: time="2025-09-09T21:54:55.947023504Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 21:54:56.457729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535002588.mount: Deactivated successfully. Sep 9 21:54:58.044624 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 21:54:58.046106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:54:58.467343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:54:58.472538 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 21:54:58.632381 kubelet[2396]: E0909 21:54:58.632104 2396 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 21:54:58.634545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 21:54:58.634686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 21:54:58.635159 systemd[1]: kubelet.service: Consumed 107ms CPU time, 107.6M memory peak. Sep 9 21:54:58.752016 update_engine[1613]: I20250909 21:54:58.751622 1613 update_attempter.cc:509] Updating boot flags... 
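The restart loop above (restart counter at 4, exit on a missing /var/lib/kubelet/config.yaml) is the expected state of a kubeadm-style node before kubeadm init or kubeadm join has run: kubeadm is what generates that file, and until it exists the kubelet exits and systemd reschedules it. Purely as an illustration of what eventually lands there, not the exact content kubeadm writes, a minimal KubeletConfiguration looks like this (the clusterDNS address is the assumed kubeadm default):

  cat <<'EOF' > /var/lib/kubelet/config.yaml    # illustrative sketch only
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd                 # matches the CRI-reported driver logged later
  staticPodPath: /etc/kubernetes/manifests
  rotateCertificates: true
  clusterDomain: cluster.local
  clusterDNS:
  - 10.96.0.10                          # assumed default; depends on the service CIDR
  EOF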
Sep 9 21:54:59.468905 containerd[1643]: time="2025-09-09T21:54:59.468857120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:59.480412 containerd[1643]: time="2025-09-09T21:54:59.480378482Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 21:54:59.563581 containerd[1643]: time="2025-09-09T21:54:59.563531757Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:59.566370 containerd[1643]: time="2025-09-09T21:54:59.566243555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:54:59.566646 containerd[1643]: time="2025-09-09T21:54:59.566631548Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.61957847s" Sep 9 21:54:59.566711 containerd[1643]: time="2025-09-09T21:54:59.566699204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 21:55:01.763927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:55:01.764230 systemd[1]: kubelet.service: Consumed 107ms CPU time, 107.6M memory peak. Sep 9 21:55:01.766070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:55:01.786105 systemd[1]: Reload requested from client PID 2455 ('systemctl') (unit session-9.scope)... Sep 9 21:55:01.786122 systemd[1]: Reloading... Sep 9 21:55:01.867383 zram_generator::config[2497]: No configuration found. Sep 9 21:55:01.944429 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 21:55:02.012365 systemd[1]: Reloading finished in 226 ms. Sep 9 21:55:02.056642 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 21:55:02.056713 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 21:55:02.056978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:55:02.058541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:55:02.617864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:55:02.623543 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:55:02.672109 kubelet[2565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:55:02.672310 kubelet[2565]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
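The restarted kubelet (PID 2565) logs deprecation warnings for several command-line flags here and in the entries that follow: --container-runtime-endpoint and --volume-plugin-dir are expected to move into the KubeletConfiguration file, and --pod-infra-container-image will eventually be dropped in favour of CRI-provided sandbox-image information. Those flags, and the KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS variables the unit reports as unset, normally come from a systemd drop-in plus environment files. A hedged sketch of that common pattern follows; exact paths and contents differ between distributions and may differ on Flatcar, so check the unit on the host itself:

  # shape of kubeadm's stock drop-in, e.g. kubelet.service.d/10-kubeadm.conf (illustrative):
  # [Service]
  # Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
  # EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env   # supplies $KUBELET_KUBEADM_ARGS
  # EnvironmentFile=-/etc/default/kubelet                 # supplies $KUBELET_EXTRA_ARGS (unset here, hence the warning)
  # ExecStart=
  # ExecStart=/usr/bin/kubelet $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

  # what this host actually runs:
  systemctl cat kubelet.service
  journalctl -u kubelet.service -n 50 --no-pager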
Sep 9 21:55:02.672340 kubelet[2565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:55:02.672436 kubelet[2565]: I0909 21:55:02.672415 2565 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 21:55:03.158658 kubelet[2565]: I0909 21:55:03.158627 2565 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 21:55:03.158658 kubelet[2565]: I0909 21:55:03.158652 2565 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 21:55:03.160371 kubelet[2565]: I0909 21:55:03.159049 2565 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 21:55:03.268957 kubelet[2565]: I0909 21:55:03.268933 2565 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 21:55:03.274320 kubelet[2565]: E0909 21:55:03.274304 2565 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:03.299525 kubelet[2565]: I0909 21:55:03.299434 2565 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 21:55:03.304005 kubelet[2565]: I0909 21:55:03.303976 2565 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 21:55:03.306203 kubelet[2565]: I0909 21:55:03.306180 2565 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 21:55:03.306324 kubelet[2565]: I0909 21:55:03.306297 2565 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 21:55:03.306480 kubelet[2565]: I0909 21:55:03.306323 2565 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 21:55:03.306571 kubelet[2565]: I0909 21:55:03.306486 2565 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 21:55:03.306571 kubelet[2565]: I0909 21:55:03.306493 2565 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 21:55:03.307121 kubelet[2565]: I0909 21:55:03.307105 2565 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:55:03.310069 kubelet[2565]: I0909 21:55:03.310049 2565 kubelet.go:408] "Attempting to sync node with API server" Sep 9 21:55:03.310069 kubelet[2565]: I0909 21:55:03.310068 2565 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 21:55:03.311539 kubelet[2565]: I0909 21:55:03.311399 2565 kubelet.go:314] "Adding apiserver pod source" Sep 9 21:55:03.311539 kubelet[2565]: I0909 21:55:03.311418 2565 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 21:55:03.314137 kubelet[2565]: W0909 21:55:03.314106 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:03.314174 kubelet[2565]: E0909 21:55:03.314142 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:03.315284 kubelet[2565]: W0909 21:55:03.315258 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:03.315320 kubelet[2565]: E0909 21:55:03.315290 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:03.315411 kubelet[2565]: I0909 21:55:03.315371 2565 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 21:55:03.318381 kubelet[2565]: I0909 21:55:03.318164 2565 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 21:55:03.319809 kubelet[2565]: W0909 21:55:03.318662 2565 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 21:55:03.320012 kubelet[2565]: I0909 21:55:03.319999 2565 server.go:1274] "Started kubelet" Sep 9 21:55:03.320051 kubelet[2565]: I0909 21:55:03.320035 2565 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 21:55:03.326213 kubelet[2565]: I0909 21:55:03.326189 2565 server.go:449] "Adding debug handlers to kubelet server" Sep 9 21:55:03.327914 kubelet[2565]: I0909 21:55:03.327824 2565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 21:55:03.327914 kubelet[2565]: I0909 21:55:03.327873 2565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 21:55:03.327999 kubelet[2565]: I0909 21:55:03.327987 2565 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 21:55:03.333198 kubelet[2565]: E0909 21:55:03.328912 2565 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.109:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863bbec55220d39 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:55:03.319596345 +0000 UTC m=+0.693822316,LastTimestamp:2025-09-09 21:55:03.319596345 +0000 UTC m=+0.693822316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 21:55:03.333451 kubelet[2565]: I0909 21:55:03.333440 2565 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 21:55:03.335402 kubelet[2565]: E0909 21:55:03.334785 2565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" 
not found" Sep 9 21:55:03.335402 kubelet[2565]: I0909 21:55:03.334822 2565 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 21:55:03.335402 kubelet[2565]: I0909 21:55:03.334953 2565 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 21:55:03.336572 kubelet[2565]: I0909 21:55:03.336560 2565 reconciler.go:26] "Reconciler: start to sync state" Sep 9 21:55:03.336779 kubelet[2565]: W0909 21:55:03.336757 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:03.336814 kubelet[2565]: E0909 21:55:03.336786 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:03.336834 kubelet[2565]: E0909 21:55:03.336819 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="200ms" Sep 9 21:55:03.339041 kubelet[2565]: E0909 21:55:03.339026 2565 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 21:55:03.339373 kubelet[2565]: I0909 21:55:03.339232 2565 factory.go:221] Registration of the containerd container factory successfully Sep 9 21:55:03.339373 kubelet[2565]: I0909 21:55:03.339241 2565 factory.go:221] Registration of the systemd container factory successfully Sep 9 21:55:03.339373 kubelet[2565]: I0909 21:55:03.339283 2565 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 21:55:03.343557 kubelet[2565]: I0909 21:55:03.343532 2565 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 21:55:03.344300 kubelet[2565]: I0909 21:55:03.344282 2565 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 21:55:03.344370 kubelet[2565]: I0909 21:55:03.344348 2565 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 21:55:03.344572 kubelet[2565]: I0909 21:55:03.344410 2565 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 21:55:03.344572 kubelet[2565]: E0909 21:55:03.344437 2565 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:55:03.348654 kubelet[2565]: W0909 21:55:03.348622 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:03.348818 kubelet[2565]: E0909 21:55:03.348801 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:03.379426 kubelet[2565]: I0909 21:55:03.379409 2565 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 21:55:03.379426 kubelet[2565]: I0909 21:55:03.379420 2565 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 21:55:03.379426 kubelet[2565]: I0909 21:55:03.379430 2565 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:55:03.385679 kubelet[2565]: I0909 21:55:03.385663 2565 policy_none.go:49] "None policy: Start" Sep 9 21:55:03.385972 kubelet[2565]: I0909 21:55:03.385957 2565 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 21:55:03.385972 kubelet[2565]: I0909 21:55:03.385971 2565 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:55:03.411714 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 21:55:03.420950 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 21:55:03.423958 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 21:55:03.434251 kubelet[2565]: I0909 21:55:03.434122 2565 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 21:55:03.434553 kubelet[2565]: I0909 21:55:03.434328 2565 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:55:03.434553 kubelet[2565]: I0909 21:55:03.434339 2565 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:55:03.434553 kubelet[2565]: I0909 21:55:03.434506 2565 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:55:03.436176 kubelet[2565]: E0909 21:55:03.436141 2565 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 21:55:03.452488 systemd[1]: Created slice kubepods-burstable-podf5cad38b758a68e8bf569e7e99679b73.slice - libcontainer container kubepods-burstable-podf5cad38b758a68e8bf569e7e99679b73.slice. Sep 9 21:55:03.460245 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. 
Sep 9 21:55:03.472610 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 9 21:55:03.535824 kubelet[2565]: I0909 21:55:03.535796 2565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:55:03.536135 kubelet[2565]: E0909 21:55:03.536123 2565 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Sep 9 21:55:03.537220 kubelet[2565]: I0909 21:55:03.537207 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5cad38b758a68e8bf569e7e99679b73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5cad38b758a68e8bf569e7e99679b73\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:03.537278 kubelet[2565]: I0909 21:55:03.537270 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5cad38b758a68e8bf569e7e99679b73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f5cad38b758a68e8bf569e7e99679b73\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:03.537321 kubelet[2565]: I0909 21:55:03.537315 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:03.537372 kubelet[2565]: I0909 21:55:03.537361 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:03.537426 kubelet[2565]: I0909 21:55:03.537419 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5cad38b758a68e8bf569e7e99679b73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5cad38b758a68e8bf569e7e99679b73\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:03.537469 kubelet[2565]: I0909 21:55:03.537462 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:03.537508 kubelet[2565]: I0909 21:55:03.537502 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:03.537545 kubelet[2565]: I0909 21:55:03.537540 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:03.537599 kubelet[2565]: I0909 21:55:03.537574 2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:55:03.537599 kubelet[2565]: E0909 21:55:03.537287 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="400ms" Sep 9 21:55:03.737999 kubelet[2565]: I0909 21:55:03.737931 2565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:55:03.738544 kubelet[2565]: E0909 21:55:03.738520 2565 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Sep 9 21:55:03.760638 containerd[1643]: time="2025-09-09T21:55:03.760331626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5cad38b758a68e8bf569e7e99679b73,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:03.776448 containerd[1643]: time="2025-09-09T21:55:03.775816770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:03.786305 containerd[1643]: time="2025-09-09T21:55:03.786285683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:03.900747 containerd[1643]: time="2025-09-09T21:55:03.900717056Z" level=info msg="connecting to shim f9742fb9c670976f119111d59d0e3a63697e9d6d3cd0b84c069e4d280309549e" address="unix:///run/containerd/s/2dd33eaddec09468bf29dd6c717b2e5d6c8a8a19e86584073a92af1a60cdbc11" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:03.926108 containerd[1643]: time="2025-09-09T21:55:03.926084155Z" level=info msg="connecting to shim c18b08e7cfe4da28f3d0b6fb5fc59e81988a85b38d07277c7eed787c80b5361c" address="unix:///run/containerd/s/0a3ce6d2c15c8e48239536f124307a5bb0c3d366a0f6aa08064d8c660cba3236" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:03.944788 kubelet[2565]: E0909 21:55:03.944748 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="800ms" Sep 9 21:55:04.094538 systemd[1]: Started cri-containerd-c18b08e7cfe4da28f3d0b6fb5fc59e81988a85b38d07277c7eed787c80b5361c.scope - libcontainer container c18b08e7cfe4da28f3d0b6fb5fc59e81988a85b38d07277c7eed787c80b5361c. Sep 9 21:55:04.096835 systemd[1]: Started cri-containerd-f9742fb9c670976f119111d59d0e3a63697e9d6d3cd0b84c069e4d280309549e.scope - libcontainer container f9742fb9c670976f119111d59d0e3a63697e9d6d3cd0b84c069e4d280309549e. 
Sep 9 21:55:04.114131 containerd[1643]: time="2025-09-09T21:55:04.114096964Z" level=info msg="connecting to shim ece5b97178e6551a087d91c138b8de56a680e6d6f629ea0db9cffbed01253028" address="unix:///run/containerd/s/f0f1bc09b6e8f0d142d1eb598cb97c1777ad87e4f3ab56130ed4d7a026c81fbd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:04.137479 systemd[1]: Started cri-containerd-ece5b97178e6551a087d91c138b8de56a680e6d6f629ea0db9cffbed01253028.scope - libcontainer container ece5b97178e6551a087d91c138b8de56a680e6d6f629ea0db9cffbed01253028. Sep 9 21:55:04.142143 kubelet[2565]: I0909 21:55:04.139969 2565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:55:04.142143 kubelet[2565]: E0909 21:55:04.140218 2565 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Sep 9 21:55:04.205272 containerd[1643]: time="2025-09-09T21:55:04.205175647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5cad38b758a68e8bf569e7e99679b73,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9742fb9c670976f119111d59d0e3a63697e9d6d3cd0b84c069e4d280309549e\"" Sep 9 21:55:04.206680 containerd[1643]: time="2025-09-09T21:55:04.206637681Z" level=info msg="CreateContainer within sandbox \"f9742fb9c670976f119111d59d0e3a63697e9d6d3cd0b84c069e4d280309549e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 21:55:04.224828 containerd[1643]: time="2025-09-09T21:55:04.224804417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c18b08e7cfe4da28f3d0b6fb5fc59e81988a85b38d07277c7eed787c80b5361c\"" Sep 9 21:55:04.226731 containerd[1643]: time="2025-09-09T21:55:04.226713672Z" level=info msg="CreateContainer within sandbox \"c18b08e7cfe4da28f3d0b6fb5fc59e81988a85b38d07277c7eed787c80b5361c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 21:55:04.231652 containerd[1643]: time="2025-09-09T21:55:04.231625150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece5b97178e6551a087d91c138b8de56a680e6d6f629ea0db9cffbed01253028\"" Sep 9 21:55:04.233381 containerd[1643]: time="2025-09-09T21:55:04.233033644Z" level=info msg="CreateContainer within sandbox \"ece5b97178e6551a087d91c138b8de56a680e6d6f629ea0db9cffbed01253028\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 21:55:04.313679 kubelet[2565]: W0909 21:55:04.308572 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:04.313789 kubelet[2565]: E0909 21:55:04.313687 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:04.343277 containerd[1643]: time="2025-09-09T21:55:04.343036627Z" level=info msg="Container 
a8e5d463ae522db5d4622a6677f2d209c066369b47e30a75d473f111dda95c4f: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:04.345544 containerd[1643]: time="2025-09-09T21:55:04.345384724Z" level=info msg="Container fb1a94190125a7a57562fbdd301522ceeba95abe6c957ac6732a3b63ef8705be: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:04.346922 containerd[1643]: time="2025-09-09T21:55:04.346522685Z" level=info msg="Container 7f4b74d6ffa4ec5ea0dc078adb428ddaa32f5857fa3d2e3ed02ba72bbf4b783d: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:04.353246 containerd[1643]: time="2025-09-09T21:55:04.353162186Z" level=info msg="CreateContainer within sandbox \"f9742fb9c670976f119111d59d0e3a63697e9d6d3cd0b84c069e4d280309549e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb1a94190125a7a57562fbdd301522ceeba95abe6c957ac6732a3b63ef8705be\"" Sep 9 21:55:04.354823 containerd[1643]: time="2025-09-09T21:55:04.354781117Z" level=info msg="CreateContainer within sandbox \"c18b08e7cfe4da28f3d0b6fb5fc59e81988a85b38d07277c7eed787c80b5361c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a8e5d463ae522db5d4622a6677f2d209c066369b47e30a75d473f111dda95c4f\"" Sep 9 21:55:04.354998 containerd[1643]: time="2025-09-09T21:55:04.354974526Z" level=info msg="StartContainer for \"fb1a94190125a7a57562fbdd301522ceeba95abe6c957ac6732a3b63ef8705be\"" Sep 9 21:55:04.355580 containerd[1643]: time="2025-09-09T21:55:04.355559814Z" level=info msg="StartContainer for \"a8e5d463ae522db5d4622a6677f2d209c066369b47e30a75d473f111dda95c4f\"" Sep 9 21:55:04.358138 containerd[1643]: time="2025-09-09T21:55:04.358111631Z" level=info msg="connecting to shim a8e5d463ae522db5d4622a6677f2d209c066369b47e30a75d473f111dda95c4f" address="unix:///run/containerd/s/0a3ce6d2c15c8e48239536f124307a5bb0c3d366a0f6aa08064d8c660cba3236" protocol=ttrpc version=3 Sep 9 21:55:04.358559 containerd[1643]: time="2025-09-09T21:55:04.358467189Z" level=info msg="connecting to shim fb1a94190125a7a57562fbdd301522ceeba95abe6c957ac6732a3b63ef8705be" address="unix:///run/containerd/s/2dd33eaddec09468bf29dd6c717b2e5d6c8a8a19e86584073a92af1a60cdbc11" protocol=ttrpc version=3 Sep 9 21:55:04.359925 containerd[1643]: time="2025-09-09T21:55:04.359862673Z" level=info msg="CreateContainer within sandbox \"ece5b97178e6551a087d91c138b8de56a680e6d6f629ea0db9cffbed01253028\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7f4b74d6ffa4ec5ea0dc078adb428ddaa32f5857fa3d2e3ed02ba72bbf4b783d\"" Sep 9 21:55:04.361329 containerd[1643]: time="2025-09-09T21:55:04.361302724Z" level=info msg="StartContainer for \"7f4b74d6ffa4ec5ea0dc078adb428ddaa32f5857fa3d2e3ed02ba72bbf4b783d\"" Sep 9 21:55:04.362915 containerd[1643]: time="2025-09-09T21:55:04.362894775Z" level=info msg="connecting to shim 7f4b74d6ffa4ec5ea0dc078adb428ddaa32f5857fa3d2e3ed02ba72bbf4b783d" address="unix:///run/containerd/s/f0f1bc09b6e8f0d142d1eb598cb97c1777ad87e4f3ab56130ed4d7a026c81fbd" protocol=ttrpc version=3 Sep 9 21:55:04.372295 kubelet[2565]: W0909 21:55:04.372223 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:04.372295 kubelet[2565]: E0909 21:55:04.372276 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list 
*v1.Service: Get \"https://139.178.70.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:04.383505 systemd[1]: Started cri-containerd-a8e5d463ae522db5d4622a6677f2d209c066369b47e30a75d473f111dda95c4f.scope - libcontainer container a8e5d463ae522db5d4622a6677f2d209c066369b47e30a75d473f111dda95c4f. Sep 9 21:55:04.387627 systemd[1]: Started cri-containerd-7f4b74d6ffa4ec5ea0dc078adb428ddaa32f5857fa3d2e3ed02ba72bbf4b783d.scope - libcontainer container 7f4b74d6ffa4ec5ea0dc078adb428ddaa32f5857fa3d2e3ed02ba72bbf4b783d. Sep 9 21:55:04.389882 systemd[1]: Started cri-containerd-fb1a94190125a7a57562fbdd301522ceeba95abe6c957ac6732a3b63ef8705be.scope - libcontainer container fb1a94190125a7a57562fbdd301522ceeba95abe6c957ac6732a3b63ef8705be. Sep 9 21:55:04.442871 containerd[1643]: time="2025-09-09T21:55:04.442849414Z" level=info msg="StartContainer for \"a8e5d463ae522db5d4622a6677f2d209c066369b47e30a75d473f111dda95c4f\" returns successfully" Sep 9 21:55:04.443054 containerd[1643]: time="2025-09-09T21:55:04.443044388Z" level=info msg="StartContainer for \"fb1a94190125a7a57562fbdd301522ceeba95abe6c957ac6732a3b63ef8705be\" returns successfully" Sep 9 21:55:04.468393 containerd[1643]: time="2025-09-09T21:55:04.468353480Z" level=info msg="StartContainer for \"7f4b74d6ffa4ec5ea0dc078adb428ddaa32f5857fa3d2e3ed02ba72bbf4b783d\" returns successfully" Sep 9 21:55:04.694584 kubelet[2565]: W0909 21:55:04.694498 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:04.694584 kubelet[2565]: E0909 21:55:04.694551 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:04.745291 kubelet[2565]: E0909 21:55:04.745251 2565 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.109:6443: connect: connection refused" interval="1.6s" Sep 9 21:55:04.930504 kubelet[2565]: W0909 21:55:04.930461 2565 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.109:6443: connect: connection refused Sep 9 21:55:04.930504 kubelet[2565]: E0909 21:55:04.930507 2565 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.109:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:55:04.941582 kubelet[2565]: I0909 21:55:04.941563 2565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:55:04.941788 kubelet[2565]: E0909 21:55:04.941761 2565 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.109:6443/api/v1/nodes\": 
dial tcp 139.178.70.109:6443: connect: connection refused" node="localhost" Sep 9 21:55:06.192144 kubelet[2565]: E0909 21:55:06.192113 2565 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 21:55:06.348040 kubelet[2565]: E0909 21:55:06.348007 2565 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 21:55:06.543731 kubelet[2565]: I0909 21:55:06.543615 2565 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:55:06.552386 kubelet[2565]: I0909 21:55:06.552340 2565 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 21:55:06.552835 kubelet[2565]: E0909 21:55:06.552820 2565 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 21:55:06.559687 kubelet[2565]: E0909 21:55:06.559658 2565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:55:06.660350 kubelet[2565]: E0909 21:55:06.660321 2565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:55:06.760956 kubelet[2565]: E0909 21:55:06.760924 2565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:55:06.861506 kubelet[2565]: E0909 21:55:06.861466 2565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:55:06.962405 kubelet[2565]: E0909 21:55:06.962353 2565 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:55:07.316375 kubelet[2565]: I0909 21:55:07.316201 2565 apiserver.go:52] "Watching apiserver" Sep 9 21:55:07.335497 kubelet[2565]: I0909 21:55:07.335461 2565 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 21:55:07.808148 systemd[1]: Reload requested from client PID 2836 ('systemctl') (unit session-9.scope)... Sep 9 21:55:07.808160 systemd[1]: Reloading... Sep 9 21:55:07.869372 zram_generator::config[2883]: No configuration found. Sep 9 21:55:07.948000 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 21:55:08.025111 systemd[1]: Reloading finished in 216 ms. Sep 9 21:55:08.048945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:55:08.062106 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 21:55:08.062271 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:55:08.062306 systemd[1]: kubelet.service: Consumed 628ms CPU time, 130.1M memory peak. Sep 9 21:55:08.063533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:55:08.289439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:55:08.302680 (kubelet)[2947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:55:08.500281 kubelet[2947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:55:08.500281 kubelet[2947]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 21:55:08.500281 kubelet[2947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:55:08.500539 kubelet[2947]: I0909 21:55:08.500279 2947 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 21:55:08.504428 kubelet[2947]: I0909 21:55:08.504410 2947 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 21:55:08.504428 kubelet[2947]: I0909 21:55:08.504424 2947 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 21:55:08.504555 kubelet[2947]: I0909 21:55:08.504545 2947 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 21:55:08.505447 kubelet[2947]: I0909 21:55:08.505429 2947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 21:55:08.514516 kubelet[2947]: I0909 21:55:08.514443 2947 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 21:55:08.518376 kubelet[2947]: I0909 21:55:08.517991 2947 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 21:55:08.519738 kubelet[2947]: I0909 21:55:08.519726 2947 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 21:55:08.519801 kubelet[2947]: I0909 21:55:08.519780 2947 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 21:55:08.519882 kubelet[2947]: I0909 21:55:08.519850 2947 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 21:55:08.519995 kubelet[2947]: I0909 21:55:08.519881 2947 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 21:55:08.520050 kubelet[2947]: I0909 21:55:08.520001 2947 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 21:55:08.520050 kubelet[2947]: I0909 21:55:08.520008 2947 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 21:55:08.520050 kubelet[2947]: I0909 21:55:08.520035 2947 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:55:08.520101 kubelet[2947]: I0909 21:55:08.520098 2947 kubelet.go:408] "Attempting to sync node with API server" Sep 9 21:55:08.520122 kubelet[2947]: I0909 21:55:08.520106 2947 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 21:55:08.520394 kubelet[2947]: I0909 21:55:08.520123 2947 kubelet.go:314] "Adding apiserver pod source" Sep 9 21:55:08.520394 kubelet[2947]: I0909 21:55:08.520128 2947 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 21:55:08.523450 kubelet[2947]: I0909 21:55:08.523434 2947 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 21:55:08.523707 kubelet[2947]: I0909 21:55:08.523697 2947 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 21:55:08.523972 kubelet[2947]: I0909 21:55:08.523957 2947 server.go:1274] "Started kubelet" Sep 9 21:55:08.531060 kubelet[2947]: I0909 21:55:08.531036 2947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 21:55:08.533274 kubelet[2947]: I0909 21:55:08.533263 2947 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 21:55:08.533836 kubelet[2947]: I0909 21:55:08.533818 2947 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 21:55:08.535213 kubelet[2947]: I0909 21:55:08.535204 2947 server.go:449] "Adding debug handlers to kubelet server" Sep 9 21:55:08.535924 kubelet[2947]: I0909 21:55:08.535909 2947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 21:55:08.536595 kubelet[2947]: I0909 21:55:08.536586 2947 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 21:55:08.537738 kubelet[2947]: I0909 21:55:08.537644 2947 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 21:55:08.538398 kubelet[2947]: I0909 21:55:08.538389 2947 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 21:55:08.544159 kubelet[2947]: I0909 21:55:08.544143 2947 reconciler.go:26] "Reconciler: start to sync state" Sep 9 21:55:08.545480 kubelet[2947]: I0909 21:55:08.545464 2947 factory.go:221] Registration of the systemd container factory successfully Sep 9 21:55:08.545530 kubelet[2947]: I0909 21:55:08.545510 2947 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 21:55:08.545735 kubelet[2947]: I0909 21:55:08.545720 2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 21:55:08.546582 kubelet[2947]: I0909 21:55:08.546573 2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 21:55:08.546630 kubelet[2947]: I0909 21:55:08.546625 2947 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 21:55:08.546669 kubelet[2947]: I0909 21:55:08.546665 2947 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 21:55:08.546727 kubelet[2947]: E0909 21:55:08.546719 2947 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:55:08.547587 kubelet[2947]: E0909 21:55:08.547573 2947 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 21:55:08.548509 kubelet[2947]: I0909 21:55:08.548436 2947 factory.go:221] Registration of the containerd container factory successfully Sep 9 21:55:08.584570 kubelet[2947]: I0909 21:55:08.584553 2947 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 21:55:08.584570 kubelet[2947]: I0909 21:55:08.584563 2947 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 21:55:08.584570 kubelet[2947]: I0909 21:55:08.584574 2947 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:55:08.584686 kubelet[2947]: I0909 21:55:08.584661 2947 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 21:55:08.584686 kubelet[2947]: I0909 21:55:08.584667 2947 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 21:55:08.584686 kubelet[2947]: I0909 21:55:08.584679 2947 policy_none.go:49] "None policy: Start" Sep 9 21:55:08.585040 kubelet[2947]: I0909 21:55:08.585032 2947 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 21:55:08.585124 kubelet[2947]: I0909 21:55:08.585118 2947 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:55:08.585240 kubelet[2947]: I0909 21:55:08.585234 2947 state_mem.go:75] "Updated machine memory state" Sep 9 21:55:08.587703 kubelet[2947]: I0909 21:55:08.587693 2947 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 21:55:08.587969 kubelet[2947]: I0909 21:55:08.587928 2947 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:55:08.588435 kubelet[2947]: I0909 21:55:08.588417 2947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:55:08.588842 kubelet[2947]: I0909 21:55:08.588771 2947 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:55:08.651844 kubelet[2947]: E0909 21:55:08.651804 2947 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:08.693773 kubelet[2947]: I0909 21:55:08.693755 2947 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:55:08.697028 kubelet[2947]: I0909 21:55:08.696964 2947 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 21:55:08.697253 kubelet[2947]: I0909 21:55:08.697204 2947 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 21:55:08.745463 kubelet[2947]: I0909 21:55:08.745436 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:55:08.745463 kubelet[2947]: I0909 21:55:08.745459 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5cad38b758a68e8bf569e7e99679b73-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f5cad38b758a68e8bf569e7e99679b73\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:08.745572 kubelet[2947]: I0909 21:55:08.745471 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:08.745572 kubelet[2947]: I0909 21:55:08.745480 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:08.745572 kubelet[2947]: I0909 21:55:08.745489 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5cad38b758a68e8bf569e7e99679b73-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5cad38b758a68e8bf569e7e99679b73\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:08.745572 kubelet[2947]: I0909 21:55:08.745496 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5cad38b758a68e8bf569e7e99679b73-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5cad38b758a68e8bf569e7e99679b73\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:08.745572 kubelet[2947]: I0909 21:55:08.745511 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:08.745663 kubelet[2947]: I0909 21:55:08.745522 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:08.745663 kubelet[2947]: I0909 21:55:08.745534 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:55:08.815600 sudo[2980]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 21:55:08.815993 sudo[2980]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 21:55:09.078987 sudo[2980]: pam_unix(sudo:session): session closed for user root Sep 9 21:55:09.521264 kubelet[2947]: I0909 21:55:09.521185 2947 apiserver.go:52] "Watching apiserver" Sep 9 21:55:09.541145 kubelet[2947]: I0909 21:55:09.541121 2947 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 21:55:09.577055 kubelet[2947]: E0909 21:55:09.576905 2947 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 21:55:09.595883 kubelet[2947]: I0909 21:55:09.595690 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.595679843 podStartE2EDuration="1.595679843s" podCreationTimestamp="2025-09-09 21:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:09.595175476 +0000 UTC m=+1.135284594" watchObservedRunningTime="2025-09-09 21:55:09.595679843 +0000 UTC m=+1.135788958" Sep 9 21:55:09.615375 kubelet[2947]: I0909 21:55:09.614671 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.614658302 podStartE2EDuration="1.614658302s" podCreationTimestamp="2025-09-09 21:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:09.61464315 +0000 UTC m=+1.154752264" watchObservedRunningTime="2025-09-09 21:55:09.614658302 +0000 UTC m=+1.154767416" Sep 9 21:55:09.615558 kubelet[2947]: I0909 21:55:09.615487 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.615481069 podStartE2EDuration="2.615481069s" podCreationTimestamp="2025-09-09 21:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:09.604228398 +0000 UTC m=+1.144337528" watchObservedRunningTime="2025-09-09 21:55:09.615481069 +0000 UTC m=+1.155590193" Sep 9 21:55:10.653045 sudo[1968]: pam_unix(sudo:session): session closed for user root Sep 9 21:55:10.653801 sshd[1967]: Connection closed by 139.178.89.65 port 53646 Sep 9 21:55:10.655231 sshd-session[1964]: pam_unix(sshd:session): session closed for user core Sep 9 21:55:10.657453 systemd[1]: sshd@6-139.178.70.109:22-139.178.89.65:53646.service: Deactivated successfully. Sep 9 21:55:10.658629 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 21:55:10.658788 systemd[1]: session-9.scope: Consumed 3.205s CPU time, 211.5M memory peak. Sep 9 21:55:10.659627 systemd-logind[1612]: Session 9 logged out. Waiting for processes to exit. Sep 9 21:55:10.660541 systemd-logind[1612]: Removed session 9. Sep 9 21:55:13.998756 kubelet[2947]: I0909 21:55:13.998733 2947 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 21:55:13.999215 containerd[1643]: time="2025-09-09T21:55:13.999193770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 21:55:13.999410 kubelet[2947]: I0909 21:55:13.999286 2947 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 21:55:14.615531 systemd[1]: Created slice kubepods-burstable-pod30bcb0a8_f191_4488_ad28_1508eb2dab0e.slice - libcontainer container kubepods-burstable-pod30bcb0a8_f191_4488_ad28_1508eb2dab0e.slice. Sep 9 21:55:14.623436 systemd[1]: Created slice kubepods-besteffort-podf03bb879_a698_45d7_a103_86a83b5c0772.slice - libcontainer container kubepods-besteffort-podf03bb879_a698_45d7_a103_86a83b5c0772.slice. 
Sep 9 21:55:14.680974 kubelet[2947]: I0909 21:55:14.680944 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-cgroup\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.680974 kubelet[2947]: I0909 21:55:14.680972 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30bcb0a8-f191-4488-ad28-1508eb2dab0e-clustermesh-secrets\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681107 kubelet[2947]: I0909 21:55:14.680990 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hostproc\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681107 kubelet[2947]: I0909 21:55:14.681007 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zfr\" (UniqueName: \"kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-kube-api-access-48zfr\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681107 kubelet[2947]: I0909 21:55:14.681018 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f03bb879-a698-45d7-a103-86a83b5c0772-kube-proxy\") pod \"kube-proxy-q8trq\" (UID: \"f03bb879-a698-45d7-a103-86a83b5c0772\") " pod="kube-system/kube-proxy-q8trq" Sep 9 21:55:14.681107 kubelet[2947]: I0909 21:55:14.681027 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-lib-modules\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681107 kubelet[2947]: I0909 21:55:14.681035 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-net\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681213 kubelet[2947]: I0909 21:55:14.681045 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmzf\" (UniqueName: \"kubernetes.io/projected/f03bb879-a698-45d7-a103-86a83b5c0772-kube-api-access-6vmzf\") pod \"kube-proxy-q8trq\" (UID: \"f03bb879-a698-45d7-a103-86a83b5c0772\") " pod="kube-system/kube-proxy-q8trq" Sep 9 21:55:14.681213 kubelet[2947]: I0909 21:55:14.681059 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-etc-cni-netd\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681213 kubelet[2947]: I0909 21:55:14.681074 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-config-path\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681213 kubelet[2947]: I0909 21:55:14.681085 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-xtables-lock\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681213 kubelet[2947]: I0909 21:55:14.681093 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-run\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681213 kubelet[2947]: I0909 21:55:14.681103 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-bpf-maps\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681350 kubelet[2947]: I0909 21:55:14.681118 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f03bb879-a698-45d7-a103-86a83b5c0772-xtables-lock\") pod \"kube-proxy-q8trq\" (UID: \"f03bb879-a698-45d7-a103-86a83b5c0772\") " pod="kube-system/kube-proxy-q8trq" Sep 9 21:55:14.681350 kubelet[2947]: I0909 21:55:14.681127 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f03bb879-a698-45d7-a103-86a83b5c0772-lib-modules\") pod \"kube-proxy-q8trq\" (UID: \"f03bb879-a698-45d7-a103-86a83b5c0772\") " pod="kube-system/kube-proxy-q8trq" Sep 9 21:55:14.681350 kubelet[2947]: I0909 21:55:14.681135 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-kernel\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681350 kubelet[2947]: I0909 21:55:14.681143 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hubble-tls\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.681350 kubelet[2947]: I0909 21:55:14.681153 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cni-path\") pod \"cilium-hhzvm\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " pod="kube-system/cilium-hhzvm" Sep 9 21:55:14.919933 containerd[1643]: time="2025-09-09T21:55:14.919599593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhzvm,Uid:30bcb0a8-f191-4488-ad28-1508eb2dab0e,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:14.931443 containerd[1643]: time="2025-09-09T21:55:14.931415196Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-q8trq,Uid:f03bb879-a698-45d7-a103-86a83b5c0772,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:14.955825 containerd[1643]: time="2025-09-09T21:55:14.955403237Z" level=info msg="connecting to shim 77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3" address="unix:///run/containerd/s/efbacc1e05862b92e5b431de3c98b66b9d8c2f6132e0c5b621588f29c153e7d1" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:14.955825 containerd[1643]: time="2025-09-09T21:55:14.955670870Z" level=info msg="connecting to shim e1d6c97511ed57f1548fcdf3761f9a5ef5db4c9ece3d18a80148e4267fb833f8" address="unix:///run/containerd/s/1c3da94255618302a08154de80951323060fcd96ccbdd352b8c7ed32c54f5d4f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:14.970657 systemd[1]: Started cri-containerd-e1d6c97511ed57f1548fcdf3761f9a5ef5db4c9ece3d18a80148e4267fb833f8.scope - libcontainer container e1d6c97511ed57f1548fcdf3761f9a5ef5db4c9ece3d18a80148e4267fb833f8. Sep 9 21:55:14.981096 systemd[1]: Started cri-containerd-77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3.scope - libcontainer container 77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3. Sep 9 21:55:15.010454 containerd[1643]: time="2025-09-09T21:55:15.010414861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhzvm,Uid:30bcb0a8-f191-4488-ad28-1508eb2dab0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\"" Sep 9 21:55:15.012215 containerd[1643]: time="2025-09-09T21:55:15.012193262Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 21:55:15.014919 containerd[1643]: time="2025-09-09T21:55:15.014690602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q8trq,Uid:f03bb879-a698-45d7-a103-86a83b5c0772,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1d6c97511ed57f1548fcdf3761f9a5ef5db4c9ece3d18a80148e4267fb833f8\"" Sep 9 21:55:15.017286 containerd[1643]: time="2025-09-09T21:55:15.017263786Z" level=info msg="CreateContainer within sandbox \"e1d6c97511ed57f1548fcdf3761f9a5ef5db4c9ece3d18a80148e4267fb833f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 21:55:15.045963 containerd[1643]: time="2025-09-09T21:55:15.045932357Z" level=info msg="Container c348a212d8fe88f7e268c12872b98531ba1f14ffe604ecfcbcd94bf7d0c96d0e: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:15.061174 containerd[1643]: time="2025-09-09T21:55:15.061107614Z" level=info msg="CreateContainer within sandbox \"e1d6c97511ed57f1548fcdf3761f9a5ef5db4c9ece3d18a80148e4267fb833f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c348a212d8fe88f7e268c12872b98531ba1f14ffe604ecfcbcd94bf7d0c96d0e\"" Sep 9 21:55:15.062048 containerd[1643]: time="2025-09-09T21:55:15.061690133Z" level=info msg="StartContainer for \"c348a212d8fe88f7e268c12872b98531ba1f14ffe604ecfcbcd94bf7d0c96d0e\"" Sep 9 21:55:15.063772 containerd[1643]: time="2025-09-09T21:55:15.063728781Z" level=info msg="connecting to shim c348a212d8fe88f7e268c12872b98531ba1f14ffe604ecfcbcd94bf7d0c96d0e" address="unix:///run/containerd/s/1c3da94255618302a08154de80951323060fcd96ccbdd352b8c7ed32c54f5d4f" protocol=ttrpc version=3 Sep 9 21:55:15.085558 systemd[1]: Started cri-containerd-c348a212d8fe88f7e268c12872b98531ba1f14ffe604ecfcbcd94bf7d0c96d0e.scope - libcontainer container c348a212d8fe88f7e268c12872b98531ba1f14ffe604ecfcbcd94bf7d0c96d0e. 
Sep 9 21:55:15.111307 systemd[1]: Created slice kubepods-besteffort-pod0f74fba8_c222_45be_918a_3b6ee4165248.slice - libcontainer container kubepods-besteffort-pod0f74fba8_c222_45be_918a_3b6ee4165248.slice. Sep 9 21:55:15.137649 containerd[1643]: time="2025-09-09T21:55:15.137598620Z" level=info msg="StartContainer for \"c348a212d8fe88f7e268c12872b98531ba1f14ffe604ecfcbcd94bf7d0c96d0e\" returns successfully" Sep 9 21:55:15.183610 kubelet[2947]: I0909 21:55:15.183541 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f74fba8-c222-45be-918a-3b6ee4165248-cilium-config-path\") pod \"cilium-operator-5d85765b45-x8npl\" (UID: \"0f74fba8-c222-45be-918a-3b6ee4165248\") " pod="kube-system/cilium-operator-5d85765b45-x8npl" Sep 9 21:55:15.184224 kubelet[2947]: I0909 21:55:15.184155 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c7s6\" (UniqueName: \"kubernetes.io/projected/0f74fba8-c222-45be-918a-3b6ee4165248-kube-api-access-7c7s6\") pod \"cilium-operator-5d85765b45-x8npl\" (UID: \"0f74fba8-c222-45be-918a-3b6ee4165248\") " pod="kube-system/cilium-operator-5d85765b45-x8npl" Sep 9 21:55:15.415195 containerd[1643]: time="2025-09-09T21:55:15.415075637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x8npl,Uid:0f74fba8-c222-45be-918a-3b6ee4165248,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:15.428319 containerd[1643]: time="2025-09-09T21:55:15.428240762Z" level=info msg="connecting to shim a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10" address="unix:///run/containerd/s/f555ea9bdb475f39000aa7c80649d85c111b2a32ef983d5f1f6e4e95d97a4b21" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:15.448688 systemd[1]: Started cri-containerd-a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10.scope - libcontainer container a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10. Sep 9 21:55:15.501277 containerd[1643]: time="2025-09-09T21:55:15.501245508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x8npl,Uid:0f74fba8-c222-45be-918a-3b6ee4165248,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\"" Sep 9 21:55:15.591989 kubelet[2947]: I0909 21:55:15.591939 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q8trq" podStartSLOduration=1.591926795 podStartE2EDuration="1.591926795s" podCreationTimestamp="2025-09-09 21:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:15.591143057 +0000 UTC m=+7.131252193" watchObservedRunningTime="2025-09-09 21:55:15.591926795 +0000 UTC m=+7.132035914" Sep 9 21:55:18.821058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421970375.mount: Deactivated successfully. 
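The tmpmount unit logged above (var-lib-containerd-tmpmounts-containerd\x2dmount1421970375.mount) uses systemd's path escaping: "/" becomes "-" and a literal "-" is written as \x2d, which `systemd-escape --unescape --path` reverses. A small sketch of the decoding, assuming only the standard library (the helper name is made up):

```python
# Illustrative decoder for systemd mount-unit names as seen above
# (equivalent in spirit to `systemd-escape --unescape --path`):
# "/" was turned into "-", and a literal "-" into the \x2d escape.
import re

def unescape_unit_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    path = "/" + name.replace("-", "/")           # undo '/' -> '-'
    return re.sub(r"\\x([0-9a-fA-F]{2})",         # undo \xHH escapes (e.g. \x2d -> '-')
                  lambda m: chr(int(m.group(1), 16)), path)

print(unescape_unit_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1421970375.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1421970375
```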
Sep 9 21:55:21.302589 containerd[1643]: time="2025-09-09T21:55:21.302443665Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:55:21.315396 containerd[1643]: time="2025-09-09T21:55:21.314947689Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 21:55:21.394009 containerd[1643]: time="2025-09-09T21:55:21.393477797Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:55:21.399676 containerd[1643]: time="2025-09-09T21:55:21.397770398Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.385557791s" Sep 9 21:55:21.399676 containerd[1643]: time="2025-09-09T21:55:21.397789549Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 21:55:21.399676 containerd[1643]: time="2025-09-09T21:55:21.398870246Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 21:55:21.399866 containerd[1643]: time="2025-09-09T21:55:21.399845047Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 21:55:21.481974 containerd[1643]: time="2025-09-09T21:55:21.481944095Z" level=info msg="Container b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:21.489844 containerd[1643]: time="2025-09-09T21:55:21.489818054Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\"" Sep 9 21:55:21.490376 containerd[1643]: time="2025-09-09T21:55:21.490294661Z" level=info msg="StartContainer for \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\"" Sep 9 21:55:21.490947 containerd[1643]: time="2025-09-09T21:55:21.490924367Z" level=info msg="connecting to shim b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657" address="unix:///run/containerd/s/efbacc1e05862b92e5b431de3c98b66b9d8c2f6132e0c5b621588f29c153e7d1" protocol=ttrpc version=3 Sep 9 21:55:21.513526 systemd[1]: Started cri-containerd-b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657.scope - libcontainer container b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657. 
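The pull duration containerd reports here (6.385557791s for the cilium image) lines up with the gap between the PullImage entry logged at 21:55:15.012 further up and this Pulled entry at 21:55:21.397. A quick, illustrative check of that subtraction on the two journal timestamps (truncated to microseconds):

```python
# Quick check (illustrative): the reported pull duration roughly matches the gap
# between containerd's PullImage and Pulled log entries for the cilium image.
from datetime import datetime

started = datetime.fromisoformat("2025-09-09T21:55:15.012193+00:00")  # PullImage logged
pulled = datetime.fromisoformat("2025-09-09T21:55:21.397770+00:00")   # "Pulled ... in 6.385557791s"
print((pulled - started).total_seconds())  # ~6.385577s
```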
Sep 9 21:55:21.534240 containerd[1643]: time="2025-09-09T21:55:21.534174552Z" level=info msg="StartContainer for \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" returns successfully" Sep 9 21:55:21.544125 systemd[1]: cri-containerd-b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657.scope: Deactivated successfully. Sep 9 21:55:21.575004 containerd[1643]: time="2025-09-09T21:55:21.574923158Z" level=info msg="received exit event container_id:\"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" id:\"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" pid:3358 exited_at:{seconds:1757454921 nanos:544960870}" Sep 9 21:55:21.579382 containerd[1643]: time="2025-09-09T21:55:21.579314476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" id:\"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" pid:3358 exited_at:{seconds:1757454921 nanos:544960870}" Sep 9 21:55:21.606650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657-rootfs.mount: Deactivated successfully. Sep 9 21:55:22.600039 containerd[1643]: time="2025-09-09T21:55:22.599403882Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 21:55:22.623045 containerd[1643]: time="2025-09-09T21:55:22.622559157Z" level=info msg="Container ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:22.622749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913252733.mount: Deactivated successfully. Sep 9 21:55:22.635492 containerd[1643]: time="2025-09-09T21:55:22.635442446Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\"" Sep 9 21:55:22.638529 containerd[1643]: time="2025-09-09T21:55:22.636870726Z" level=info msg="StartContainer for \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\"" Sep 9 21:55:22.638529 containerd[1643]: time="2025-09-09T21:55:22.637334462Z" level=info msg="connecting to shim ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3" address="unix:///run/containerd/s/efbacc1e05862b92e5b431de3c98b66b9d8c2f6132e0c5b621588f29c153e7d1" protocol=ttrpc version=3 Sep 9 21:55:22.662494 systemd[1]: Started cri-containerd-ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3.scope - libcontainer container ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3. Sep 9 21:55:22.682306 containerd[1643]: time="2025-09-09T21:55:22.682278033Z" level=info msg="StartContainer for \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" returns successfully" Sep 9 21:55:22.690181 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 21:55:22.690919 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:55:22.691146 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 21:55:22.693839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
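The exit events above carry exited_at as Unix epoch seconds plus nanoseconds (seconds:1757454921 nanos:544960870); converted, that is 21:55:21.544 UTC on the same day, consistent with the surrounding journal timestamps. A one-line sketch of the conversion:

```python
# Convert the exited_at {seconds, nanos} pair from the exit event above
# into a UTC timestamp (illustrative sketch, standard library only).
from datetime import datetime, timezone

seconds, nanos = 1757454921, 544960870
exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())  # 2025-09-09T21:55:21.544961+00:00
```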
Sep 9 21:55:22.695457 systemd[1]: cri-containerd-ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3.scope: Deactivated successfully. Sep 9 21:55:22.695937 containerd[1643]: time="2025-09-09T21:55:22.695898915Z" level=info msg="received exit event container_id:\"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" id:\"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" pid:3402 exited_at:{seconds:1757454922 nanos:695324758}" Sep 9 21:55:22.696177 containerd[1643]: time="2025-09-09T21:55:22.696149571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" id:\"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" pid:3402 exited_at:{seconds:1757454922 nanos:695324758}" Sep 9 21:55:22.741065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 21:55:23.545801 containerd[1643]: time="2025-09-09T21:55:23.545772503Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:55:23.546394 containerd[1643]: time="2025-09-09T21:55:23.546378139Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 21:55:23.546709 containerd[1643]: time="2025-09-09T21:55:23.546693672Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 21:55:23.547715 containerd[1643]: time="2025-09-09T21:55:23.547699704Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.14880541s" Sep 9 21:55:23.547743 containerd[1643]: time="2025-09-09T21:55:23.547717992Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 21:55:23.549156 containerd[1643]: time="2025-09-09T21:55:23.549024832Z" level=info msg="CreateContainer within sandbox \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 21:55:23.553148 containerd[1643]: time="2025-09-09T21:55:23.552846317Z" level=info msg="Container c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:23.570146 containerd[1643]: time="2025-09-09T21:55:23.570094884Z" level=info msg="CreateContainer within sandbox \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\"" Sep 9 21:55:23.570584 containerd[1643]: time="2025-09-09T21:55:23.570560454Z" level=info msg="StartContainer for \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\"" Sep 9 21:55:23.571267 containerd[1643]: 
time="2025-09-09T21:55:23.571250649Z" level=info msg="connecting to shim c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e" address="unix:///run/containerd/s/f555ea9bdb475f39000aa7c80649d85c111b2a32ef983d5f1f6e4e95d97a4b21" protocol=ttrpc version=3 Sep 9 21:55:23.584452 systemd[1]: Started cri-containerd-c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e.scope - libcontainer container c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e. Sep 9 21:55:23.616406 containerd[1643]: time="2025-09-09T21:55:23.616364926Z" level=info msg="StartContainer for \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" returns successfully" Sep 9 21:55:23.618980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3-rootfs.mount: Deactivated successfully. Sep 9 21:55:23.628375 containerd[1643]: time="2025-09-09T21:55:23.628297346Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 21:55:23.678997 containerd[1643]: time="2025-09-09T21:55:23.678497694Z" level=info msg="Container 43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:23.679088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339842908.mount: Deactivated successfully. Sep 9 21:55:23.690273 containerd[1643]: time="2025-09-09T21:55:23.690121668Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\"" Sep 9 21:55:23.690711 containerd[1643]: time="2025-09-09T21:55:23.690686411Z" level=info msg="StartContainer for \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\"" Sep 9 21:55:23.693424 containerd[1643]: time="2025-09-09T21:55:23.693400649Z" level=info msg="connecting to shim 43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c" address="unix:///run/containerd/s/efbacc1e05862b92e5b431de3c98b66b9d8c2f6132e0c5b621588f29c153e7d1" protocol=ttrpc version=3 Sep 9 21:55:23.715107 systemd[1]: Started cri-containerd-43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c.scope - libcontainer container 43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c. Sep 9 21:55:23.762040 containerd[1643]: time="2025-09-09T21:55:23.762005976Z" level=info msg="StartContainer for \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" returns successfully" Sep 9 21:55:23.774698 systemd[1]: cri-containerd-43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c.scope: Deactivated successfully. Sep 9 21:55:23.774903 systemd[1]: cri-containerd-43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c.scope: Consumed 17ms CPU time, 4.5M memory peak, 1.3M read from disk. 
Sep 9 21:55:23.775815 containerd[1643]: time="2025-09-09T21:55:23.775778407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" id:\"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" pid:3497 exited_at:{seconds:1757454923 nanos:775517643}" Sep 9 21:55:23.776519 containerd[1643]: time="2025-09-09T21:55:23.776439489Z" level=info msg="received exit event container_id:\"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" id:\"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" pid:3497 exited_at:{seconds:1757454923 nanos:775517643}" Sep 9 21:55:24.617937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c-rootfs.mount: Deactivated successfully. Sep 9 21:55:24.648375 containerd[1643]: time="2025-09-09T21:55:24.646783378Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 21:55:24.658239 containerd[1643]: time="2025-09-09T21:55:24.654970772Z" level=info msg="Container 5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:24.656616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2965449071.mount: Deactivated successfully. Sep 9 21:55:24.658840 containerd[1643]: time="2025-09-09T21:55:24.658704359Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\"" Sep 9 21:55:24.665203 kubelet[2947]: I0909 21:55:24.665167 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-x8npl" podStartSLOduration=1.619003524 podStartE2EDuration="9.665155814s" podCreationTimestamp="2025-09-09 21:55:15 +0000 UTC" firstStartedPulling="2025-09-09 21:55:15.501967431 +0000 UTC m=+7.042076546" lastFinishedPulling="2025-09-09 21:55:23.548119722 +0000 UTC m=+15.088228836" observedRunningTime="2025-09-09 21:55:24.638969234 +0000 UTC m=+16.179078357" watchObservedRunningTime="2025-09-09 21:55:24.665155814 +0000 UTC m=+16.205264931" Sep 9 21:55:24.669417 containerd[1643]: time="2025-09-09T21:55:24.669396233Z" level=info msg="StartContainer for \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\"" Sep 9 21:55:24.669898 containerd[1643]: time="2025-09-09T21:55:24.669881174Z" level=info msg="connecting to shim 5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7" address="unix:///run/containerd/s/efbacc1e05862b92e5b431de3c98b66b9d8c2f6132e0c5b621588f29c153e7d1" protocol=ttrpc version=3 Sep 9 21:55:24.686569 systemd[1]: Started cri-containerd-5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7.scope - libcontainer container 5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7. Sep 9 21:55:24.705286 systemd[1]: cri-containerd-5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7.scope: Deactivated successfully. 
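For cilium-operator-5d85765b45-x8npl the kubelet reports podStartE2EDuration=9.665155814s but podStartSLOduration=1.619003524s; the gap is the image-pull window (firstStartedPulling to lastFinishedPulling) from the same entry, so the SLO figure appears to exclude pull time. A worked check in Python, with the logged timestamps truncated to microseconds:

```python
# Worked check (illustrative): for cilium-operator-5d85765b45-x8npl the reported
# podStartSLOduration appears to be podStartE2EDuration minus the image-pull
# window (firstStartedPulling .. lastFinishedPulling), values taken from the entry above.
from datetime import datetime

first_pull = datetime.fromisoformat("2025-09-09T21:55:15.501967+00:00")
last_pull = datetime.fromisoformat("2025-09-09T21:55:23.548119+00:00")

e2e = 9.665155814
pull = (last_pull - first_pull).total_seconds()  # ~8.046152s
print(e2e - pull)  # ~1.619004s, matching the logged podStartSLOduration=1.619003524
```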
Sep 9 21:55:24.707135 containerd[1643]: time="2025-09-09T21:55:24.707118402Z" level=info msg="received exit event container_id:\"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" id:\"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" pid:3537 exited_at:{seconds:1757454924 nanos:706657457}" Sep 9 21:55:24.709197 containerd[1643]: time="2025-09-09T21:55:24.708986470Z" level=info msg="StartContainer for \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" returns successfully" Sep 9 21:55:24.710133 containerd[1643]: time="2025-09-09T21:55:24.710106356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" id:\"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" pid:3537 exited_at:{seconds:1757454924 nanos:706657457}" Sep 9 21:55:25.617924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7-rootfs.mount: Deactivated successfully. Sep 9 21:55:25.650666 containerd[1643]: time="2025-09-09T21:55:25.649873769Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 21:55:25.660735 containerd[1643]: time="2025-09-09T21:55:25.659235536Z" level=info msg="Container 629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:25.664084 containerd[1643]: time="2025-09-09T21:55:25.664068617Z" level=info msg="CreateContainer within sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\"" Sep 9 21:55:25.664466 containerd[1643]: time="2025-09-09T21:55:25.664454080Z" level=info msg="StartContainer for \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\"" Sep 9 21:55:25.666847 containerd[1643]: time="2025-09-09T21:55:25.666810236Z" level=info msg="connecting to shim 629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d" address="unix:///run/containerd/s/efbacc1e05862b92e5b431de3c98b66b9d8c2f6132e0c5b621588f29c153e7d1" protocol=ttrpc version=3 Sep 9 21:55:25.686446 systemd[1]: Started cri-containerd-629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d.scope - libcontainer container 629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d. Sep 9 21:55:25.711260 containerd[1643]: time="2025-09-09T21:55:25.711232445Z" level=info msg="StartContainer for \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" returns successfully" Sep 9 21:55:25.794386 containerd[1643]: time="2025-09-09T21:55:25.794247086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" id:\"80e8be0ebfb48433cabb3e2be89386cea1ad2243b3f629d1fad49d393eb551e8\" pid:3607 exited_at:{seconds:1757454925 nanos:793879598}" Sep 9 21:55:25.812939 kubelet[2947]: I0909 21:55:25.812917 2947 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 21:55:25.838777 systemd[1]: Created slice kubepods-burstable-pod600021b3_4915_46d8_8463_428ebc146a76.slice - libcontainer container kubepods-burstable-pod600021b3_4915_46d8_8463_428ebc146a76.slice. 
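At this point all of cilium-hhzvm's init containers have run in sequence inside the same sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, each created, started and its scope deactivated before the next, followed by the long-lived cilium-agent container. A small, illustrative way to pull that order out of a saved copy of this journal (the filename is hypothetical):

```python
# Illustrative: recover the container start order inside the cilium-hhzvm sandbox
# from a saved copy of this journal ("journal.txt" is a hypothetical filename).
# The sandbox id is the one returned by RunPodSandbox for cilium-hhzvm further up.
import re

SANDBOX = "77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3"

with open("journal.txt") as f:
    text = f.read()

pattern = (r"CreateContainer within sandbox\W+" + SANDBOX +
           r"\W+for container &ContainerMetadata\{Name:([\w-]+),")
print(re.findall(pattern, text))
# ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#  'clean-cilium-state', 'cilium-agent']
```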
Sep 9 21:55:25.844653 systemd[1]: Created slice kubepods-burstable-pode95cd207_b702_4ee9_a6b6_8cb7a0ddf8eb.slice - libcontainer container kubepods-burstable-pode95cd207_b702_4ee9_a6b6_8cb7a0ddf8eb.slice. Sep 9 21:55:25.859350 kubelet[2947]: I0909 21:55:25.859324 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6crdn\" (UniqueName: \"kubernetes.io/projected/600021b3-4915-46d8-8463-428ebc146a76-kube-api-access-6crdn\") pod \"coredns-7c65d6cfc9-fhsmd\" (UID: \"600021b3-4915-46d8-8463-428ebc146a76\") " pod="kube-system/coredns-7c65d6cfc9-fhsmd" Sep 9 21:55:25.859440 kubelet[2947]: I0909 21:55:25.859377 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/600021b3-4915-46d8-8463-428ebc146a76-config-volume\") pod \"coredns-7c65d6cfc9-fhsmd\" (UID: \"600021b3-4915-46d8-8463-428ebc146a76\") " pod="kube-system/coredns-7c65d6cfc9-fhsmd" Sep 9 21:55:25.859440 kubelet[2947]: I0909 21:55:25.859392 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnks\" (UniqueName: \"kubernetes.io/projected/e95cd207-b702-4ee9-a6b6-8cb7a0ddf8eb-kube-api-access-hwnks\") pod \"coredns-7c65d6cfc9-w49xb\" (UID: \"e95cd207-b702-4ee9-a6b6-8cb7a0ddf8eb\") " pod="kube-system/coredns-7c65d6cfc9-w49xb" Sep 9 21:55:25.859440 kubelet[2947]: I0909 21:55:25.859404 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e95cd207-b702-4ee9-a6b6-8cb7a0ddf8eb-config-volume\") pod \"coredns-7c65d6cfc9-w49xb\" (UID: \"e95cd207-b702-4ee9-a6b6-8cb7a0ddf8eb\") " pod="kube-system/coredns-7c65d6cfc9-w49xb" Sep 9 21:55:26.143010 containerd[1643]: time="2025-09-09T21:55:26.142980385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fhsmd,Uid:600021b3-4915-46d8-8463-428ebc146a76,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:26.147592 containerd[1643]: time="2025-09-09T21:55:26.147576203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w49xb,Uid:e95cd207-b702-4ee9-a6b6-8cb7a0ddf8eb,Namespace:kube-system,Attempt:0,}" Sep 9 21:55:27.941125 systemd-networkd[1527]: cilium_host: Link UP Sep 9 21:55:27.941232 systemd-networkd[1527]: cilium_net: Link UP Sep 9 21:55:27.941327 systemd-networkd[1527]: cilium_net: Gained carrier Sep 9 21:55:27.942092 systemd-networkd[1527]: cilium_host: Gained carrier Sep 9 21:55:28.209106 systemd-networkd[1527]: cilium_vxlan: Link UP Sep 9 21:55:28.209111 systemd-networkd[1527]: cilium_vxlan: Gained carrier Sep 9 21:55:28.532376 kernel: NET: Registered PF_ALG protocol family Sep 9 21:55:28.636445 systemd-networkd[1527]: cilium_host: Gained IPv6LL Sep 9 21:55:28.764437 systemd-networkd[1527]: cilium_net: Gained IPv6LL Sep 9 21:55:29.002282 systemd-networkd[1527]: lxc_health: Link UP Sep 9 21:55:29.003555 systemd-networkd[1527]: lxc_health: Gained carrier Sep 9 21:55:29.205372 kernel: eth0: renamed from tmpfbb8b Sep 9 21:55:29.207457 kernel: eth0: renamed from tmp25d57 Sep 9 21:55:29.205737 systemd-networkd[1527]: lxc1fddb91f7876: Link UP Sep 9 21:55:29.205881 systemd-networkd[1527]: lxc962426a8eb09: Link UP Sep 9 21:55:29.207467 systemd-networkd[1527]: lxc962426a8eb09: Gained carrier Sep 9 21:55:29.209195 systemd-networkd[1527]: lxc1fddb91f7876: Gained carrier Sep 9 21:55:29.596470 systemd-networkd[1527]: cilium_vxlan: Gained 
IPv6LL Sep 9 21:55:30.301446 systemd-networkd[1527]: lxc_health: Gained IPv6LL Sep 9 21:55:30.684472 systemd-networkd[1527]: lxc1fddb91f7876: Gained IPv6LL Sep 9 21:55:30.929373 kubelet[2947]: I0909 21:55:30.929124 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hhzvm" podStartSLOduration=10.542174655 podStartE2EDuration="16.929113192s" podCreationTimestamp="2025-09-09 21:55:14 +0000 UTC" firstStartedPulling="2025-09-09 21:55:15.011406895 +0000 UTC m=+6.551516009" lastFinishedPulling="2025-09-09 21:55:21.398345431 +0000 UTC m=+12.938454546" observedRunningTime="2025-09-09 21:55:26.669057005 +0000 UTC m=+18.209166127" watchObservedRunningTime="2025-09-09 21:55:30.929113192 +0000 UTC m=+22.469222316" Sep 9 21:55:31.068478 systemd-networkd[1527]: lxc962426a8eb09: Gained IPv6LL Sep 9 21:55:31.814300 containerd[1643]: time="2025-09-09T21:55:31.814247656Z" level=info msg="connecting to shim 25d57269657aac8a9a2f0012d6033858efc9d6bbba041a25e54332ae8958b341" address="unix:///run/containerd/s/41f81f1c3a68757f8b9d57bea3724f468067bc517c1117b8a9340f9529f314f9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:31.819586 containerd[1643]: time="2025-09-09T21:55:31.819531006Z" level=info msg="connecting to shim fbb8b36d6145a7786e48e64f23ffcfe08951017ccfe6627d161002b2a42ccaf8" address="unix:///run/containerd/s/b5e415c7d48a059af201fae6680aeb68dcff8ccd710042b2ae7d9f2d03892a74" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:55:31.848564 systemd[1]: Started cri-containerd-25d57269657aac8a9a2f0012d6033858efc9d6bbba041a25e54332ae8958b341.scope - libcontainer container 25d57269657aac8a9a2f0012d6033858efc9d6bbba041a25e54332ae8958b341. Sep 9 21:55:31.856461 systemd[1]: Started cri-containerd-fbb8b36d6145a7786e48e64f23ffcfe08951017ccfe6627d161002b2a42ccaf8.scope - libcontainer container fbb8b36d6145a7786e48e64f23ffcfe08951017ccfe6627d161002b2a42ccaf8. 
Sep 9 21:55:31.877169 systemd-resolved[1528]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:55:31.878873 systemd-resolved[1528]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 21:55:31.907148 containerd[1643]: time="2025-09-09T21:55:31.907085490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w49xb,Uid:e95cd207-b702-4ee9-a6b6-8cb7a0ddf8eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbb8b36d6145a7786e48e64f23ffcfe08951017ccfe6627d161002b2a42ccaf8\"" Sep 9 21:55:31.911835 containerd[1643]: time="2025-09-09T21:55:31.911460229Z" level=info msg="CreateContainer within sandbox \"fbb8b36d6145a7786e48e64f23ffcfe08951017ccfe6627d161002b2a42ccaf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:55:31.912291 containerd[1643]: time="2025-09-09T21:55:31.912259406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fhsmd,Uid:600021b3-4915-46d8-8463-428ebc146a76,Namespace:kube-system,Attempt:0,} returns sandbox id \"25d57269657aac8a9a2f0012d6033858efc9d6bbba041a25e54332ae8958b341\"" Sep 9 21:55:31.915850 containerd[1643]: time="2025-09-09T21:55:31.915825503Z" level=info msg="CreateContainer within sandbox \"25d57269657aac8a9a2f0012d6033858efc9d6bbba041a25e54332ae8958b341\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 21:55:31.925713 containerd[1643]: time="2025-09-09T21:55:31.925672615Z" level=info msg="Container f4b05f3f94d16ab11510036c2c10a73f4a6fe381475566c7423ad7165f436681: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:31.926174 containerd[1643]: time="2025-09-09T21:55:31.925817953Z" level=info msg="Container b5dcb27141fc93a18bf3cffaf7d073cbf6912e38e74f6f79f6dcd27915155202: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:55:31.932183 containerd[1643]: time="2025-09-09T21:55:31.931669095Z" level=info msg="CreateContainer within sandbox \"fbb8b36d6145a7786e48e64f23ffcfe08951017ccfe6627d161002b2a42ccaf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4b05f3f94d16ab11510036c2c10a73f4a6fe381475566c7423ad7165f436681\"" Sep 9 21:55:31.933587 containerd[1643]: time="2025-09-09T21:55:31.933570221Z" level=info msg="StartContainer for \"f4b05f3f94d16ab11510036c2c10a73f4a6fe381475566c7423ad7165f436681\"" Sep 9 21:55:31.934404 containerd[1643]: time="2025-09-09T21:55:31.934389297Z" level=info msg="connecting to shim f4b05f3f94d16ab11510036c2c10a73f4a6fe381475566c7423ad7165f436681" address="unix:///run/containerd/s/b5e415c7d48a059af201fae6680aeb68dcff8ccd710042b2ae7d9f2d03892a74" protocol=ttrpc version=3 Sep 9 21:55:31.935927 containerd[1643]: time="2025-09-09T21:55:31.935894743Z" level=info msg="CreateContainer within sandbox \"25d57269657aac8a9a2f0012d6033858efc9d6bbba041a25e54332ae8958b341\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5dcb27141fc93a18bf3cffaf7d073cbf6912e38e74f6f79f6dcd27915155202\"" Sep 9 21:55:31.937977 containerd[1643]: time="2025-09-09T21:55:31.936439908Z" level=info msg="StartContainer for \"b5dcb27141fc93a18bf3cffaf7d073cbf6912e38e74f6f79f6dcd27915155202\"" Sep 9 21:55:31.938836 containerd[1643]: time="2025-09-09T21:55:31.938817848Z" level=info msg="connecting to shim b5dcb27141fc93a18bf3cffaf7d073cbf6912e38e74f6f79f6dcd27915155202" address="unix:///run/containerd/s/41f81f1c3a68757f8b9d57bea3724f468067bc517c1117b8a9340f9529f314f9" protocol=ttrpc version=3 Sep 9 21:55:31.951559 systemd[1]: Started 
cri-containerd-f4b05f3f94d16ab11510036c2c10a73f4a6fe381475566c7423ad7165f436681.scope - libcontainer container f4b05f3f94d16ab11510036c2c10a73f4a6fe381475566c7423ad7165f436681. Sep 9 21:55:31.961556 systemd[1]: Started cri-containerd-b5dcb27141fc93a18bf3cffaf7d073cbf6912e38e74f6f79f6dcd27915155202.scope - libcontainer container b5dcb27141fc93a18bf3cffaf7d073cbf6912e38e74f6f79f6dcd27915155202. Sep 9 21:55:31.989642 containerd[1643]: time="2025-09-09T21:55:31.989531633Z" level=info msg="StartContainer for \"f4b05f3f94d16ab11510036c2c10a73f4a6fe381475566c7423ad7165f436681\" returns successfully" Sep 9 21:55:32.000039 containerd[1643]: time="2025-09-09T21:55:32.000013392Z" level=info msg="StartContainer for \"b5dcb27141fc93a18bf3cffaf7d073cbf6912e38e74f6f79f6dcd27915155202\" returns successfully" Sep 9 21:55:32.672763 kubelet[2947]: I0909 21:55:32.672573 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fhsmd" podStartSLOduration=17.672563071 podStartE2EDuration="17.672563071s" podCreationTimestamp="2025-09-09 21:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:32.672212749 +0000 UTC m=+24.212321866" watchObservedRunningTime="2025-09-09 21:55:32.672563071 +0000 UTC m=+24.212672189" Sep 9 21:55:32.799123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135240471.mount: Deactivated successfully. Sep 9 21:55:32.799183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287239935.mount: Deactivated successfully. Sep 9 21:55:33.560496 kubelet[2947]: I0909 21:55:33.560189 2947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 21:55:33.577370 kubelet[2947]: I0909 21:55:33.576726 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w49xb" podStartSLOduration=18.576715219 podStartE2EDuration="18.576715219s" podCreationTimestamp="2025-09-09 21:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:55:32.690291736 +0000 UTC m=+24.230400860" watchObservedRunningTime="2025-09-09 21:55:33.576715219 +0000 UTC m=+25.116824338" Sep 9 21:56:11.180867 systemd[1]: Started sshd@7-139.178.70.109:22-139.178.89.65:36080.service - OpenSSH per-connection server daemon (139.178.89.65:36080). Sep 9 21:56:11.554231 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 36080 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:11.555351 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:11.561921 systemd-logind[1612]: New session 10 of user core. Sep 9 21:56:11.570727 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 21:56:12.073345 sshd[4270]: Connection closed by 139.178.89.65 port 36080 Sep 9 21:56:12.073713 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:12.081606 systemd[1]: sshd@7-139.178.70.109:22-139.178.89.65:36080.service: Deactivated successfully. Sep 9 21:56:12.083109 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 21:56:12.083900 systemd-logind[1612]: Session 10 logged out. Waiting for processes to exit. Sep 9 21:56:12.084790 systemd-logind[1612]: Removed session 10. 
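Session 10 above is the first of a long run of short SSH sessions from 139.178.89.65 that all repeat the same pattern in the entries that follow: publickey accepted, pam session opened for core, logind creates session N, and a few seconds later the connection closes and the session scope is deactivated. A small sketch that pairs the logind "New session N" / "Removed session N" lines to get per-session durations (the filename is hypothetical; it assumes one journal entry per line and supplies the year, which the syslog-style timestamps omit):

```python
# Illustrative: pair systemd-logind "New session N" / "Removed session N" lines
# from a saved copy of this journal ("journal.txt" is hypothetical, one entry per
# line assumed) and print how long each SSH session lasted.
import re
from datetime import datetime

LINE = re.compile(r"(\w{3} +\d+ [\d:.]+) .*systemd-logind\[\d+\]: "
                  r"(New|Removed) session (\d+)")

opened = {}
with open("journal.txt") as f:
    for line in f:
        m = LINE.search(line)
        if not m:
            continue
        ts = datetime.strptime("2025 " + m.group(1), "%Y %b %d %H:%M:%S.%f")
        sid = m.group(3)
        if m.group(2) == "New":
            opened[sid] = ts
        elif sid in opened:
            print(f"session {sid}: {(ts - opened.pop(sid)).total_seconds():.1f}s")
# e.g. session 10 is created at 21:56:11.561921 and removed at 21:56:12.084790,
# so it prints roughly "session 10: 0.5s".
```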
Sep 9 21:56:17.084293 systemd[1]: Started sshd@8-139.178.70.109:22-139.178.89.65:36084.service - OpenSSH per-connection server daemon (139.178.89.65:36084). Sep 9 21:56:17.219303 sshd[4285]: Accepted publickey for core from 139.178.89.65 port 36084 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:17.220840 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:17.225411 systemd-logind[1612]: New session 11 of user core. Sep 9 21:56:17.230575 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 21:56:17.341104 sshd[4288]: Connection closed by 139.178.89.65 port 36084 Sep 9 21:56:17.340494 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:17.343187 systemd[1]: sshd@8-139.178.70.109:22-139.178.89.65:36084.service: Deactivated successfully. Sep 9 21:56:17.344748 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 21:56:17.346560 systemd-logind[1612]: Session 11 logged out. Waiting for processes to exit. Sep 9 21:56:17.347661 systemd-logind[1612]: Removed session 11. Sep 9 21:56:22.357607 systemd[1]: Started sshd@9-139.178.70.109:22-139.178.89.65:42312.service - OpenSSH per-connection server daemon (139.178.89.65:42312). Sep 9 21:56:22.405039 sshd[4301]: Accepted publickey for core from 139.178.89.65 port 42312 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:22.405978 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:22.409528 systemd-logind[1612]: New session 12 of user core. Sep 9 21:56:22.416525 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 21:56:22.546036 sshd[4304]: Connection closed by 139.178.89.65 port 42312 Sep 9 21:56:22.546498 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:22.549586 systemd[1]: sshd@9-139.178.70.109:22-139.178.89.65:42312.service: Deactivated successfully. Sep 9 21:56:22.551566 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 21:56:22.553049 systemd-logind[1612]: Session 12 logged out. Waiting for processes to exit. Sep 9 21:56:22.554930 systemd-logind[1612]: Removed session 12. Sep 9 21:56:27.556702 systemd[1]: Started sshd@10-139.178.70.109:22-139.178.89.65:42320.service - OpenSSH per-connection server daemon (139.178.89.65:42320). Sep 9 21:56:27.606249 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 42320 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:27.607218 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:27.610402 systemd-logind[1612]: New session 13 of user core. Sep 9 21:56:27.619517 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 21:56:27.720665 sshd[4320]: Connection closed by 139.178.89.65 port 42320 Sep 9 21:56:27.721115 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:27.727919 systemd[1]: sshd@10-139.178.70.109:22-139.178.89.65:42320.service: Deactivated successfully. Sep 9 21:56:27.729507 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 21:56:27.730086 systemd-logind[1612]: Session 13 logged out. Waiting for processes to exit. Sep 9 21:56:27.732487 systemd[1]: Started sshd@11-139.178.70.109:22-139.178.89.65:42332.service - OpenSSH per-connection server daemon (139.178.89.65:42332). Sep 9 21:56:27.734203 systemd-logind[1612]: Removed session 13. 
Sep 9 21:56:27.774813 sshd[4332]: Accepted publickey for core from 139.178.89.65 port 42332 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:27.775638 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:27.779425 systemd-logind[1612]: New session 14 of user core. Sep 9 21:56:27.785579 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 21:56:27.908940 sshd[4335]: Connection closed by 139.178.89.65 port 42332 Sep 9 21:56:27.909373 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:27.919288 systemd[1]: sshd@11-139.178.70.109:22-139.178.89.65:42332.service: Deactivated successfully. Sep 9 21:56:27.922203 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 21:56:27.923597 systemd-logind[1612]: Session 14 logged out. Waiting for processes to exit. Sep 9 21:56:27.928278 systemd[1]: Started sshd@12-139.178.70.109:22-139.178.89.65:42344.service - OpenSSH per-connection server daemon (139.178.89.65:42344). Sep 9 21:56:27.931984 systemd-logind[1612]: Removed session 14. Sep 9 21:56:27.976529 sshd[4346]: Accepted publickey for core from 139.178.89.65 port 42344 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:27.977476 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:27.981410 systemd-logind[1612]: New session 15 of user core. Sep 9 21:56:27.984479 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 21:56:28.102615 sshd[4349]: Connection closed by 139.178.89.65 port 42344 Sep 9 21:56:28.102942 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:28.105055 systemd[1]: sshd@12-139.178.70.109:22-139.178.89.65:42344.service: Deactivated successfully. Sep 9 21:56:28.106441 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 21:56:28.107053 systemd-logind[1612]: Session 15 logged out. Waiting for processes to exit. Sep 9 21:56:28.107879 systemd-logind[1612]: Removed session 15. Sep 9 21:56:33.113099 systemd[1]: Started sshd@13-139.178.70.109:22-139.178.89.65:51890.service - OpenSSH per-connection server daemon (139.178.89.65:51890). Sep 9 21:56:33.238610 sshd[4361]: Accepted publickey for core from 139.178.89.65 port 51890 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:33.239376 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:33.241944 systemd-logind[1612]: New session 16 of user core. Sep 9 21:56:33.247520 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 21:56:33.407963 sshd[4364]: Connection closed by 139.178.89.65 port 51890 Sep 9 21:56:33.408408 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:33.410662 systemd[1]: sshd@13-139.178.70.109:22-139.178.89.65:51890.service: Deactivated successfully. Sep 9 21:56:33.411997 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 21:56:33.412661 systemd-logind[1612]: Session 16 logged out. Waiting for processes to exit. Sep 9 21:56:33.413584 systemd-logind[1612]: Removed session 16. Sep 9 21:56:38.423278 systemd[1]: Started sshd@14-139.178.70.109:22-139.178.89.65:51900.service - OpenSSH per-connection server daemon (139.178.89.65:51900). 
Sep 9 21:56:38.465481 sshd[4377]: Accepted publickey for core from 139.178.89.65 port 51900 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:38.466573 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:38.470029 systemd-logind[1612]: New session 17 of user core. Sep 9 21:56:38.480479 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 21:56:38.602471 sshd[4380]: Connection closed by 139.178.89.65 port 51900 Sep 9 21:56:38.603013 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:38.612145 systemd[1]: sshd@14-139.178.70.109:22-139.178.89.65:51900.service: Deactivated successfully. Sep 9 21:56:38.614511 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 21:56:38.615257 systemd-logind[1612]: Session 17 logged out. Waiting for processes to exit. Sep 9 21:56:38.618052 systemd[1]: Started sshd@15-139.178.70.109:22-139.178.89.65:51914.service - OpenSSH per-connection server daemon (139.178.89.65:51914). Sep 9 21:56:38.619573 systemd-logind[1612]: Removed session 17. Sep 9 21:56:38.901955 sshd[4391]: Accepted publickey for core from 139.178.89.65 port 51914 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:38.902939 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:38.906943 systemd-logind[1612]: New session 18 of user core. Sep 9 21:56:38.910457 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 21:56:39.666565 sshd[4394]: Connection closed by 139.178.89.65 port 51914 Sep 9 21:56:39.668229 sshd-session[4391]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:39.676451 systemd[1]: Started sshd@16-139.178.70.109:22-139.178.89.65:51920.service - OpenSSH per-connection server daemon (139.178.89.65:51920). Sep 9 21:56:39.678969 systemd[1]: sshd@15-139.178.70.109:22-139.178.89.65:51914.service: Deactivated successfully. Sep 9 21:56:39.680956 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 21:56:39.683841 systemd-logind[1612]: Session 18 logged out. Waiting for processes to exit. Sep 9 21:56:39.684816 systemd-logind[1612]: Removed session 18. Sep 9 21:56:39.724683 sshd[4401]: Accepted publickey for core from 139.178.89.65 port 51920 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:39.725617 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:39.729479 systemd-logind[1612]: New session 19 of user core. Sep 9 21:56:39.734487 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 21:56:40.938330 sshd[4407]: Connection closed by 139.178.89.65 port 51920 Sep 9 21:56:40.938658 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:40.947762 systemd[1]: sshd@16-139.178.70.109:22-139.178.89.65:51920.service: Deactivated successfully. Sep 9 21:56:40.950206 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 21:56:40.951347 systemd-logind[1612]: Session 19 logged out. Waiting for processes to exit. Sep 9 21:56:40.955514 systemd[1]: Started sshd@17-139.178.70.109:22-139.178.89.65:42200.service - OpenSSH per-connection server daemon (139.178.89.65:42200). Sep 9 21:56:40.956241 systemd-logind[1612]: Removed session 19. 
Sep 9 21:56:41.013023 sshd[4423]: Accepted publickey for core from 139.178.89.65 port 42200 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:41.013808 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:41.017354 systemd-logind[1612]: New session 20 of user core. Sep 9 21:56:41.025475 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 21:56:41.312287 sshd[4427]: Connection closed by 139.178.89.65 port 42200 Sep 9 21:56:41.313216 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:41.322662 systemd[1]: sshd@17-139.178.70.109:22-139.178.89.65:42200.service: Deactivated successfully. Sep 9 21:56:41.325264 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 21:56:41.327406 systemd-logind[1612]: Session 20 logged out. Waiting for processes to exit. Sep 9 21:56:41.330607 systemd[1]: Started sshd@18-139.178.70.109:22-139.178.89.65:42214.service - OpenSSH per-connection server daemon (139.178.89.65:42214). Sep 9 21:56:41.332607 systemd-logind[1612]: Removed session 20. Sep 9 21:56:41.374763 sshd[4437]: Accepted publickey for core from 139.178.89.65 port 42214 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:41.375746 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:41.382478 systemd-logind[1612]: New session 21 of user core. Sep 9 21:56:41.386507 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 21:56:41.490385 sshd[4440]: Connection closed by 139.178.89.65 port 42214 Sep 9 21:56:41.490334 sshd-session[4437]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:41.493222 systemd[1]: sshd@18-139.178.70.109:22-139.178.89.65:42214.service: Deactivated successfully. Sep 9 21:56:41.494659 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 21:56:41.495351 systemd-logind[1612]: Session 21 logged out. Waiting for processes to exit. Sep 9 21:56:41.496630 systemd-logind[1612]: Removed session 21. Sep 9 21:56:46.500920 systemd[1]: Started sshd@19-139.178.70.109:22-139.178.89.65:42230.service - OpenSSH per-connection server daemon (139.178.89.65:42230). Sep 9 21:56:46.543420 sshd[4454]: Accepted publickey for core from 139.178.89.65 port 42230 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:46.544162 sshd-session[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:46.547231 systemd-logind[1612]: New session 22 of user core. Sep 9 21:56:46.554475 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 21:56:46.646232 sshd[4457]: Connection closed by 139.178.89.65 port 42230 Sep 9 21:56:46.647642 sshd-session[4454]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:46.649791 systemd-logind[1612]: Session 22 logged out. Waiting for processes to exit. Sep 9 21:56:46.650021 systemd[1]: sshd@19-139.178.70.109:22-139.178.89.65:42230.service: Deactivated successfully. Sep 9 21:56:46.651657 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 21:56:46.653260 systemd-logind[1612]: Removed session 22. Sep 9 21:56:51.657155 systemd[1]: Started sshd@20-139.178.70.109:22-139.178.89.65:59862.service - OpenSSH per-connection server daemon (139.178.89.65:59862). 
Sep 9 21:56:51.694969 sshd[4472]: Accepted publickey for core from 139.178.89.65 port 59862 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:51.695955 sshd-session[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:51.699403 systemd-logind[1612]: New session 23 of user core. Sep 9 21:56:51.706467 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 21:56:51.952259 sshd[4475]: Connection closed by 139.178.89.65 port 59862 Sep 9 21:56:51.952735 sshd-session[4472]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:51.955416 systemd[1]: sshd@20-139.178.70.109:22-139.178.89.65:59862.service: Deactivated successfully. Sep 9 21:56:51.956583 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 21:56:51.957089 systemd-logind[1612]: Session 23 logged out. Waiting for processes to exit. Sep 9 21:56:51.957853 systemd-logind[1612]: Removed session 23. Sep 9 21:56:56.966715 systemd[1]: Started sshd@21-139.178.70.109:22-139.178.89.65:59876.service - OpenSSH per-connection server daemon (139.178.89.65:59876). Sep 9 21:56:57.163489 sshd[4487]: Accepted publickey for core from 139.178.89.65 port 59876 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:56:57.164301 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:56:57.167113 systemd-logind[1612]: New session 24 of user core. Sep 9 21:56:57.183571 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 21:56:57.276872 sshd[4490]: Connection closed by 139.178.89.65 port 59876 Sep 9 21:56:57.277192 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Sep 9 21:56:57.279659 systemd[1]: sshd@21-139.178.70.109:22-139.178.89.65:59876.service: Deactivated successfully. Sep 9 21:56:57.280941 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 21:56:57.281586 systemd-logind[1612]: Session 24 logged out. Waiting for processes to exit. Sep 9 21:56:57.282629 systemd-logind[1612]: Removed session 24. Sep 9 21:57:02.290761 systemd[1]: Started sshd@22-139.178.70.109:22-139.178.89.65:44992.service - OpenSSH per-connection server daemon (139.178.89.65:44992). Sep 9 21:57:02.341974 sshd[4503]: Accepted publickey for core from 139.178.89.65 port 44992 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:57:02.343178 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:02.345881 systemd-logind[1612]: New session 25 of user core. Sep 9 21:57:02.353470 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 21:57:02.441342 sshd[4506]: Connection closed by 139.178.89.65 port 44992 Sep 9 21:57:02.441850 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:02.448529 systemd[1]: sshd@22-139.178.70.109:22-139.178.89.65:44992.service: Deactivated successfully. Sep 9 21:57:02.449793 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 21:57:02.450352 systemd-logind[1612]: Session 25 logged out. Waiting for processes to exit. Sep 9 21:57:02.452460 systemd[1]: Started sshd@23-139.178.70.109:22-139.178.89.65:45004.service - OpenSSH per-connection server daemon (139.178.89.65:45004). Sep 9 21:57:02.453462 systemd-logind[1612]: Removed session 25. 
Sep 9 21:57:02.490158 sshd[4518]: Accepted publickey for core from 139.178.89.65 port 45004 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:57:02.491027 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:02.493698 systemd-logind[1612]: New session 26 of user core. Sep 9 21:57:02.507470 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 21:57:03.909193 containerd[1643]: time="2025-09-09T21:57:03.909058425Z" level=info msg="StopContainer for \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" with timeout 30 (s)" Sep 9 21:57:03.920298 containerd[1643]: time="2025-09-09T21:57:03.920232777Z" level=info msg="Stop container \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" with signal terminated" Sep 9 21:57:03.942467 systemd[1]: cri-containerd-c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e.scope: Deactivated successfully. Sep 9 21:57:03.945031 containerd[1643]: time="2025-09-09T21:57:03.944970024Z" level=info msg="received exit event container_id:\"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" id:\"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" pid:3465 exited_at:{seconds:1757455023 nanos:944744133}" Sep 9 21:57:03.950192 containerd[1643]: time="2025-09-09T21:57:03.950167506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" id:\"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" pid:3465 exited_at:{seconds:1757455023 nanos:944744133}" Sep 9 21:57:03.956596 containerd[1643]: time="2025-09-09T21:57:03.956554915Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:57:03.957187 containerd[1643]: time="2025-09-09T21:57:03.957171022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" id:\"a9612da9a3f66d2c8a48eb4a2980e286a4648490756b9f9a45eca57f36acfc7a\" pid:4544 exited_at:{seconds:1757455023 nanos:956918513}" Sep 9 21:57:03.958394 containerd[1643]: time="2025-09-09T21:57:03.958288909Z" level=info msg="StopContainer for \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" with timeout 2 (s)" Sep 9 21:57:03.958566 containerd[1643]: time="2025-09-09T21:57:03.958555524Z" level=info msg="Stop container \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" with signal terminated" Sep 9 21:57:03.966144 systemd-networkd[1527]: lxc_health: Link DOWN Sep 9 21:57:03.966148 systemd-networkd[1527]: lxc_health: Lost carrier Sep 9 21:57:03.976895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e-rootfs.mount: Deactivated successfully. Sep 9 21:57:03.981615 systemd[1]: cri-containerd-629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d.scope: Deactivated successfully. Sep 9 21:57:03.982016 systemd[1]: cri-containerd-629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d.scope: Consumed 4.391s CPU time, 197.1M memory peak, 76.4M read from disk, 13.3M written to disk. 
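The containerd entries above are kubelet asking the CRI plugin to stop two containers: the cilium-operator container gets a 30-second grace period (SIGTERM, then SIGKILL once the timeout expires), the cilium-agent gets 2 seconds, and for each one the cri-containerd-<id>.scope deactivation plus the "received exit event" / "TaskExit" pair mark the task actually exiting. Roughly the same request can be issued straight against the CRI socket; a hedged Go sketch, with the socket path and the (full) container ID used as placeholders taken from the log:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; adjust to wherever containerd's CRI endpoint lives on the node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	// Counterpart of the `StopContainer ... with timeout 30 (s)` entry:
	// the runtime sends SIGTERM, then SIGKILL if the task survives Timeout seconds.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e",
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}
}
```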
Sep 9 21:57:03.989214 containerd[1643]: time="2025-09-09T21:57:03.982976303Z" level=info msg="received exit event container_id:\"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" id:\"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" pid:3579 exited_at:{seconds:1757455023 nanos:982701593}" Sep 9 21:57:03.989214 containerd[1643]: time="2025-09-09T21:57:03.983068395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" id:\"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" pid:3579 exited_at:{seconds:1757455023 nanos:982701593}" Sep 9 21:57:03.989214 containerd[1643]: time="2025-09-09T21:57:03.986508888Z" level=info msg="StopContainer for \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" returns successfully" Sep 9 21:57:03.989214 containerd[1643]: time="2025-09-09T21:57:03.987531842Z" level=info msg="StopPodSandbox for \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\"" Sep 9 21:57:03.991759 containerd[1643]: time="2025-09-09T21:57:03.991735231Z" level=info msg="Container to stop \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:57:04.001596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d-rootfs.mount: Deactivated successfully. Sep 9 21:57:04.003309 systemd[1]: cri-containerd-a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10.scope: Deactivated successfully. Sep 9 21:57:04.009651 containerd[1643]: time="2025-09-09T21:57:04.009583804Z" level=info msg="StopContainer for \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" returns successfully" Sep 9 21:57:04.009915 containerd[1643]: time="2025-09-09T21:57:04.009898395Z" level=info msg="StopPodSandbox for \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\"" Sep 9 21:57:04.009951 containerd[1643]: time="2025-09-09T21:57:04.009939404Z" level=info msg="Container to stop \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:57:04.009951 containerd[1643]: time="2025-09-09T21:57:04.009949448Z" level=info msg="Container to stop \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:57:04.009998 containerd[1643]: time="2025-09-09T21:57:04.009955512Z" level=info msg="Container to stop \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:57:04.009998 containerd[1643]: time="2025-09-09T21:57:04.009960813Z" level=info msg="Container to stop \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:57:04.009998 containerd[1643]: time="2025-09-09T21:57:04.009966483Z" level=info msg="Container to stop \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 21:57:04.010781 containerd[1643]: time="2025-09-09T21:57:04.010763921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" 
id:\"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" pid:3174 exit_status:137 exited_at:{seconds:1757455024 nanos:10559151}" Sep 9 21:57:04.014561 systemd[1]: cri-containerd-77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3.scope: Deactivated successfully. Sep 9 21:57:04.032829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3-rootfs.mount: Deactivated successfully. Sep 9 21:57:04.038321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10-rootfs.mount: Deactivated successfully. Sep 9 21:57:04.067143 containerd[1643]: time="2025-09-09T21:57:04.067047141Z" level=info msg="shim disconnected" id=77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3 namespace=k8s.io Sep 9 21:57:04.067143 containerd[1643]: time="2025-09-09T21:57:04.067066584Z" level=warning msg="cleaning up after shim disconnected" id=77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3 namespace=k8s.io Sep 9 21:57:04.075614 containerd[1643]: time="2025-09-09T21:57:04.067075663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 21:57:04.079490 containerd[1643]: time="2025-09-09T21:57:04.067771400Z" level=info msg="shim disconnected" id=a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10 namespace=k8s.io Sep 9 21:57:04.079618 containerd[1643]: time="2025-09-09T21:57:04.079588763Z" level=warning msg="cleaning up after shim disconnected" id=a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10 namespace=k8s.io Sep 9 21:57:04.079654 containerd[1643]: time="2025-09-09T21:57:04.079611439Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 21:57:04.081168 containerd[1643]: time="2025-09-09T21:57:04.080818393Z" level=info msg="TearDown network for sandbox \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" successfully" Sep 9 21:57:04.081168 containerd[1643]: time="2025-09-09T21:57:04.080835082Z" level=info msg="StopPodSandbox for \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" returns successfully" Sep 9 21:57:04.082609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10-shm.mount: Deactivated successfully. 
Sep 9 21:57:04.115477 containerd[1643]: time="2025-09-09T21:57:04.115234008Z" level=info msg="received exit event sandbox_id:\"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" exit_status:137 exited_at:{seconds:1757455024 nanos:10559151}" Sep 9 21:57:04.148212 kubelet[2947]: I0909 21:57:04.148182 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f74fba8-c222-45be-918a-3b6ee4165248-cilium-config-path\") pod \"0f74fba8-c222-45be-918a-3b6ee4165248\" (UID: \"0f74fba8-c222-45be-918a-3b6ee4165248\") " Sep 9 21:57:04.148212 kubelet[2947]: I0909 21:57:04.148215 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c7s6\" (UniqueName: \"kubernetes.io/projected/0f74fba8-c222-45be-918a-3b6ee4165248-kube-api-access-7c7s6\") pod \"0f74fba8-c222-45be-918a-3b6ee4165248\" (UID: \"0f74fba8-c222-45be-918a-3b6ee4165248\") " Sep 9 21:57:04.149954 kubelet[2947]: I0909 21:57:04.149350 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f74fba8-c222-45be-918a-3b6ee4165248-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f74fba8-c222-45be-918a-3b6ee4165248" (UID: "0f74fba8-c222-45be-918a-3b6ee4165248"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 21:57:04.156699 containerd[1643]: time="2025-09-09T21:57:04.156675287Z" level=info msg="received exit event sandbox_id:\"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" exit_status:137 exited_at:{seconds:1757455024 nanos:15339508}" Sep 9 21:57:04.156999 containerd[1643]: time="2025-09-09T21:57:04.156980975Z" level=info msg="TearDown network for sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" successfully" Sep 9 21:57:04.156999 containerd[1643]: time="2025-09-09T21:57:04.156996708Z" level=info msg="StopPodSandbox for \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" returns successfully" Sep 9 21:57:04.157304 containerd[1643]: time="2025-09-09T21:57:04.157291552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" id:\"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" pid:3086 exit_status:137 exited_at:{seconds:1757455024 nanos:15339508}" Sep 9 21:57:04.171500 kubelet[2947]: I0909 21:57:04.171438 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f74fba8-c222-45be-918a-3b6ee4165248-kube-api-access-7c7s6" (OuterVolumeSpecName: "kube-api-access-7c7s6") pod "0f74fba8-c222-45be-918a-3b6ee4165248" (UID: "0f74fba8-c222-45be-918a-3b6ee4165248"). InnerVolumeSpecName "kube-api-access-7c7s6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:57:04.249371 kubelet[2947]: I0909 21:57:04.249213 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-xtables-lock\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249371 kubelet[2947]: I0909 21:57:04.249243 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-net\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249371 kubelet[2947]: I0909 21:57:04.249257 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-etc-cni-netd\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249371 kubelet[2947]: I0909 21:57:04.249267 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-lib-modules\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249371 kubelet[2947]: I0909 21:57:04.249284 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-config-path\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249371 kubelet[2947]: I0909 21:57:04.249299 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30bcb0a8-f191-4488-ad28-1508eb2dab0e-clustermesh-secrets\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249595 kubelet[2947]: I0909 21:57:04.249313 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hubble-tls\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249595 kubelet[2947]: I0909 21:57:04.249325 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-kernel\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249595 kubelet[2947]: I0909 21:57:04.249338 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48zfr\" (UniqueName: \"kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-kube-api-access-48zfr\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.249595 kubelet[2947]: I0909 21:57:04.249349 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-bpf-maps\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: 
\"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.251138 kubelet[2947]: I0909 21:57:04.251116 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 21:57:04.251417 kubelet[2947]: I0909 21:57:04.251223 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.251518 kubelet[2947]: I0909 21:57:04.251507 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.251577 kubelet[2947]: I0909 21:57:04.251567 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.251630 kubelet[2947]: I0909 21:57:04.251621 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.251689 kubelet[2947]: I0909 21:57:04.251680 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.251729 kubelet[2947]: I0909 21:57:04.251208 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-run\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.251783 kubelet[2947]: I0909 21:57:04.251775 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-cgroup\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.251833 kubelet[2947]: I0909 21:57:04.251826 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hostproc\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.251886 kubelet[2947]: I0909 21:57:04.251877 2947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cni-path\") pod \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\" (UID: \"30bcb0a8-f191-4488-ad28-1508eb2dab0e\") " Sep 9 21:57:04.251960 kubelet[2947]: I0909 21:57:04.251951 2947 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252010 kubelet[2947]: I0909 21:57:04.252002 2947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c7s6\" (UniqueName: \"kubernetes.io/projected/0f74fba8-c222-45be-918a-3b6ee4165248-kube-api-access-7c7s6\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252053 kubelet[2947]: I0909 21:57:04.252047 2947 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252093 kubelet[2947]: I0909 21:57:04.252087 2947 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252134 kubelet[2947]: I0909 21:57:04.252128 2947 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252181 kubelet[2947]: I0909 21:57:04.252174 2947 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252228 kubelet[2947]: I0909 21:57:04.252222 2947 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f74fba8-c222-45be-918a-3b6ee4165248-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252272 kubelet[2947]: I0909 21:57:04.252265 2947 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.252319 kubelet[2947]: I0909 21:57:04.252311 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cni-path" (OuterVolumeSpecName: "cni-path") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.252392 kubelet[2947]: I0909 21:57:04.252382 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.252449 kubelet[2947]: I0909 21:57:04.252440 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hostproc" (OuterVolumeSpecName: "hostproc") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.252506 kubelet[2947]: I0909 21:57:04.252497 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.253381 kubelet[2947]: I0909 21:57:04.253343 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:57:04.253885 kubelet[2947]: I0909 21:57:04.253869 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bcb0a8-f191-4488-ad28-1508eb2dab0e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 21:57:04.254552 kubelet[2947]: I0909 21:57:04.254536 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:57:04.255264 kubelet[2947]: I0909 21:57:04.255237 2947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-kube-api-access-48zfr" (OuterVolumeSpecName: "kube-api-access-48zfr") pod "30bcb0a8-f191-4488-ad28-1508eb2dab0e" (UID: "30bcb0a8-f191-4488-ad28-1508eb2dab0e"). InnerVolumeSpecName "kube-api-access-48zfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:57:04.352566 kubelet[2947]: I0909 21:57:04.352512 2947 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.352566 kubelet[2947]: I0909 21:57:04.352551 2947 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.352566 kubelet[2947]: I0909 21:57:04.352563 2947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48zfr\" (UniqueName: \"kubernetes.io/projected/30bcb0a8-f191-4488-ad28-1508eb2dab0e-kube-api-access-48zfr\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.352566 kubelet[2947]: I0909 21:57:04.352572 2947 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.352746 kubelet[2947]: I0909 21:57:04.352580 2947 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.352746 kubelet[2947]: I0909 21:57:04.352587 2947 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.352746 kubelet[2947]: I0909 21:57:04.352592 2947 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30bcb0a8-f191-4488-ad28-1508eb2dab0e-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.352746 kubelet[2947]: I0909 21:57:04.352599 2947 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30bcb0a8-f191-4488-ad28-1508eb2dab0e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 21:57:04.555388 systemd[1]: Removed slice kubepods-burstable-pod30bcb0a8_f191_4488_ad28_1508eb2dab0e.slice - libcontainer container kubepods-burstable-pod30bcb0a8_f191_4488_ad28_1508eb2dab0e.slice. Sep 9 21:57:04.555455 systemd[1]: kubepods-burstable-pod30bcb0a8_f191_4488_ad28_1508eb2dab0e.slice: Consumed 4.450s CPU time, 198.2M memory peak, 77.7M read from disk, 13.3M written to disk. Sep 9 21:57:04.557279 systemd[1]: Removed slice kubepods-besteffort-pod0f74fba8_c222_45be_918a_3b6ee4165248.slice - libcontainer container kubepods-besteffort-pod0f74fba8_c222_45be_918a_3b6ee4165248.slice. 
Sep 9 21:57:04.845821 kubelet[2947]: I0909 21:57:04.845714 2947 scope.go:117] "RemoveContainer" containerID="c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e" Sep 9 21:57:04.853236 containerd[1643]: time="2025-09-09T21:57:04.853196524Z" level=info msg="RemoveContainer for \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\"" Sep 9 21:57:04.863577 containerd[1643]: time="2025-09-09T21:57:04.863391831Z" level=info msg="RemoveContainer for \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" returns successfully" Sep 9 21:57:04.863724 kubelet[2947]: I0909 21:57:04.863603 2947 scope.go:117] "RemoveContainer" containerID="c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e" Sep 9 21:57:04.867242 containerd[1643]: time="2025-09-09T21:57:04.864223695Z" level=error msg="ContainerStatus for \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\": not found" Sep 9 21:57:04.869702 kubelet[2947]: E0909 21:57:04.869677 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\": not found" containerID="c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e" Sep 9 21:57:04.871112 kubelet[2947]: I0909 21:57:04.870734 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e"} err="failed to get container status \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1f884221b1080996491ea784a08b4021d733069507ea675fd872f3918a9891e\": not found" Sep 9 21:57:04.871112 kubelet[2947]: I0909 21:57:04.870802 2947 scope.go:117] "RemoveContainer" containerID="629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d" Sep 9 21:57:04.874404 containerd[1643]: time="2025-09-09T21:57:04.874305596Z" level=info msg="RemoveContainer for \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\"" Sep 9 21:57:04.880937 containerd[1643]: time="2025-09-09T21:57:04.880906028Z" level=info msg="RemoveContainer for \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" returns successfully" Sep 9 21:57:04.881706 kubelet[2947]: I0909 21:57:04.881690 2947 scope.go:117] "RemoveContainer" containerID="5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7" Sep 9 21:57:04.882604 containerd[1643]: time="2025-09-09T21:57:04.882591545Z" level=info msg="RemoveContainer for \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\"" Sep 9 21:57:04.884542 containerd[1643]: time="2025-09-09T21:57:04.884509820Z" level=info msg="RemoveContainer for \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" returns successfully" Sep 9 21:57:04.885032 kubelet[2947]: I0909 21:57:04.884621 2947 scope.go:117] "RemoveContainer" containerID="43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c" Sep 9 21:57:04.885784 containerd[1643]: time="2025-09-09T21:57:04.885774999Z" level=info msg="RemoveContainer for \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\"" Sep 9 21:57:04.887428 containerd[1643]: time="2025-09-09T21:57:04.887416476Z" level=info 
msg="RemoveContainer for \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" returns successfully" Sep 9 21:57:04.887539 kubelet[2947]: I0909 21:57:04.887525 2947 scope.go:117] "RemoveContainer" containerID="ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3" Sep 9 21:57:04.888160 containerd[1643]: time="2025-09-09T21:57:04.888149120Z" level=info msg="RemoveContainer for \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\"" Sep 9 21:57:04.889448 containerd[1643]: time="2025-09-09T21:57:04.889436749Z" level=info msg="RemoveContainer for \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" returns successfully" Sep 9 21:57:04.889606 kubelet[2947]: I0909 21:57:04.889569 2947 scope.go:117] "RemoveContainer" containerID="b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657" Sep 9 21:57:04.890343 containerd[1643]: time="2025-09-09T21:57:04.890322993Z" level=info msg="RemoveContainer for \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\"" Sep 9 21:57:04.891611 containerd[1643]: time="2025-09-09T21:57:04.891597981Z" level=info msg="RemoveContainer for \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" returns successfully" Sep 9 21:57:04.891673 kubelet[2947]: I0909 21:57:04.891665 2947 scope.go:117] "RemoveContainer" containerID="629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d" Sep 9 21:57:04.891764 containerd[1643]: time="2025-09-09T21:57:04.891750049Z" level=error msg="ContainerStatus for \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\": not found" Sep 9 21:57:04.891895 kubelet[2947]: E0909 21:57:04.891882 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\": not found" containerID="629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d" Sep 9 21:57:04.891924 kubelet[2947]: I0909 21:57:04.891899 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d"} err="failed to get container status \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\": rpc error: code = NotFound desc = an error occurred when try to find container \"629b70d14e266359323151daf107d046422d1f618ed519cd017ce62f317cb73d\": not found" Sep 9 21:57:04.891924 kubelet[2947]: I0909 21:57:04.891911 2947 scope.go:117] "RemoveContainer" containerID="5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7" Sep 9 21:57:04.892026 containerd[1643]: time="2025-09-09T21:57:04.891995339Z" level=error msg="ContainerStatus for \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\": not found" Sep 9 21:57:04.892081 kubelet[2947]: E0909 21:57:04.892075 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\": not found" containerID="5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7" Sep 9 
21:57:04.892120 kubelet[2947]: I0909 21:57:04.892085 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7"} err="failed to get container status \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f0274bbaea1ee045a8e2ec0b306be99f726b55672702641375716c2235593d7\": not found" Sep 9 21:57:04.892143 kubelet[2947]: I0909 21:57:04.892119 2947 scope.go:117] "RemoveContainer" containerID="43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c" Sep 9 21:57:04.892243 containerd[1643]: time="2025-09-09T21:57:04.892226027Z" level=error msg="ContainerStatus for \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\": not found" Sep 9 21:57:04.892371 kubelet[2947]: E0909 21:57:04.892353 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\": not found" containerID="43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c" Sep 9 21:57:04.892427 kubelet[2947]: I0909 21:57:04.892417 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c"} err="failed to get container status \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\": rpc error: code = NotFound desc = an error occurred when try to find container \"43c2b4e4ccfa965e34587e6e57e62fbc4797dac42bf266937475f56fe3b0477c\": not found" Sep 9 21:57:04.892463 kubelet[2947]: I0909 21:57:04.892457 2947 scope.go:117] "RemoveContainer" containerID="ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3" Sep 9 21:57:04.892574 containerd[1643]: time="2025-09-09T21:57:04.892561802Z" level=error msg="ContainerStatus for \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\": not found" Sep 9 21:57:04.892699 kubelet[2947]: E0909 21:57:04.892690 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\": not found" containerID="ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3" Sep 9 21:57:04.892753 kubelet[2947]: I0909 21:57:04.892743 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3"} err="failed to get container status \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea97f04782cc0383fa5f8693e6e51b6911641c0d86b78747a4a9ef70b8bf08e3\": not found" Sep 9 21:57:04.892792 kubelet[2947]: I0909 21:57:04.892787 2947 scope.go:117] "RemoveContainer" containerID="b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657" Sep 9 21:57:04.892902 containerd[1643]: time="2025-09-09T21:57:04.892889224Z" level=error 
msg="ContainerStatus for \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\": not found" Sep 9 21:57:04.893022 kubelet[2947]: E0909 21:57:04.893010 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\": not found" containerID="b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657" Sep 9 21:57:04.893074 kubelet[2947]: I0909 21:57:04.893064 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657"} err="failed to get container status \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\": rpc error: code = NotFound desc = an error occurred when try to find container \"b53a055ee2ff81cc9cebb97e0e6ab7c9d89e95517e8a6b618711f00ef89a2657\": not found" Sep 9 21:57:04.976897 systemd[1]: var-lib-kubelet-pods-0f74fba8\x2dc222\x2d45be\x2d918a\x2d3b6ee4165248-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7c7s6.mount: Deactivated successfully. Sep 9 21:57:04.976976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3-shm.mount: Deactivated successfully. Sep 9 21:57:04.977020 systemd[1]: var-lib-kubelet-pods-30bcb0a8\x2df191\x2d4488\x2dad28\x2d1508eb2dab0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48zfr.mount: Deactivated successfully. Sep 9 21:57:04.977060 systemd[1]: var-lib-kubelet-pods-30bcb0a8\x2df191\x2d4488\x2dad28\x2d1508eb2dab0e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 21:57:04.977097 systemd[1]: var-lib-kubelet-pods-30bcb0a8\x2df191\x2d4488\x2dad28\x2d1508eb2dab0e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 21:57:05.865353 sshd[4521]: Connection closed by 139.178.89.65 port 45004 Sep 9 21:57:05.868832 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:05.874651 systemd[1]: sshd@23-139.178.70.109:22-139.178.89.65:45004.service: Deactivated successfully. Sep 9 21:57:05.876667 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 21:57:05.877557 systemd-logind[1612]: Session 26 logged out. Waiting for processes to exit. Sep 9 21:57:05.881027 systemd[1]: Started sshd@24-139.178.70.109:22-139.178.89.65:45018.service - OpenSSH per-connection server daemon (139.178.89.65:45018). Sep 9 21:57:05.881874 systemd-logind[1612]: Removed session 26. Sep 9 21:57:05.955190 sshd[4668]: Accepted publickey for core from 139.178.89.65 port 45018 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:57:05.956142 sshd-session[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:05.959597 systemd-logind[1612]: New session 27 of user core. Sep 9 21:57:05.966473 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 9 21:57:06.548773 kubelet[2947]: I0909 21:57:06.548748 2947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f74fba8-c222-45be-918a-3b6ee4165248" path="/var/lib/kubelet/pods/0f74fba8-c222-45be-918a-3b6ee4165248/volumes" Sep 9 21:57:06.549021 kubelet[2947]: I0909 21:57:06.549006 2947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bcb0a8-f191-4488-ad28-1508eb2dab0e" path="/var/lib/kubelet/pods/30bcb0a8-f191-4488-ad28-1508eb2dab0e/volumes" Sep 9 21:57:06.592740 sshd[4671]: Connection closed by 139.178.89.65 port 45018 Sep 9 21:57:06.594440 sshd-session[4668]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:06.600627 systemd[1]: sshd@24-139.178.70.109:22-139.178.89.65:45018.service: Deactivated successfully. Sep 9 21:57:06.602514 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 21:57:06.603237 systemd-logind[1612]: Session 27 logged out. Waiting for processes to exit. Sep 9 21:57:06.605783 systemd[1]: Started sshd@25-139.178.70.109:22-139.178.89.65:45030.service - OpenSSH per-connection server daemon (139.178.89.65:45030). Sep 9 21:57:06.607198 systemd-logind[1612]: Removed session 27. Sep 9 21:57:06.633583 kubelet[2947]: E0909 21:57:06.633544 2947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30bcb0a8-f191-4488-ad28-1508eb2dab0e" containerName="apply-sysctl-overwrites" Sep 9 21:57:06.633583 kubelet[2947]: E0909 21:57:06.633578 2947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30bcb0a8-f191-4488-ad28-1508eb2dab0e" containerName="mount-bpf-fs" Sep 9 21:57:06.633583 kubelet[2947]: E0909 21:57:06.633585 2947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30bcb0a8-f191-4488-ad28-1508eb2dab0e" containerName="clean-cilium-state" Sep 9 21:57:06.633583 kubelet[2947]: E0909 21:57:06.633588 2947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30bcb0a8-f191-4488-ad28-1508eb2dab0e" containerName="cilium-agent" Sep 9 21:57:06.633583 kubelet[2947]: E0909 21:57:06.633592 2947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30bcb0a8-f191-4488-ad28-1508eb2dab0e" containerName="mount-cgroup" Sep 9 21:57:06.633769 kubelet[2947]: E0909 21:57:06.633596 2947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f74fba8-c222-45be-918a-3b6ee4165248" containerName="cilium-operator" Sep 9 21:57:06.633769 kubelet[2947]: I0909 21:57:06.633642 2947 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f74fba8-c222-45be-918a-3b6ee4165248" containerName="cilium-operator" Sep 9 21:57:06.633769 kubelet[2947]: I0909 21:57:06.633663 2947 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bcb0a8-f191-4488-ad28-1508eb2dab0e" containerName="cilium-agent" Sep 9 21:57:06.642639 systemd[1]: Created slice kubepods-burstable-podecdfd5b4_cf8b_4c14_9974_8122ebe840f3.slice - libcontainer container kubepods-burstable-podecdfd5b4_cf8b_4c14_9974_8122ebe840f3.slice. 
Sep 9 21:57:06.659830 sshd[4681]: Accepted publickey for core from 139.178.89.65 port 45030 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:57:06.661770 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:06.665187 kubelet[2947]: I0909 21:57:06.665161 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2nmq\" (UniqueName: \"kubernetes.io/projected/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-kube-api-access-b2nmq\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665270 kubelet[2947]: I0909 21:57:06.665188 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-bpf-maps\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665270 kubelet[2947]: I0909 21:57:06.665203 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-cni-path\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665270 kubelet[2947]: I0909 21:57:06.665213 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-host-proc-sys-net\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665270 kubelet[2947]: I0909 21:57:06.665250 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-cilium-config-path\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665270 kubelet[2947]: I0909 21:57:06.665264 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-cilium-run\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665410 kubelet[2947]: I0909 21:57:06.665274 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-etc-cni-netd\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665410 kubelet[2947]: I0909 21:57:06.665285 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-clustermesh-secrets\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665410 kubelet[2947]: I0909 21:57:06.665301 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-hostproc\") pod \"cilium-j6bn5\" (UID: 
\"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665410 kubelet[2947]: I0909 21:57:06.665315 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-lib-modules\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665410 kubelet[2947]: I0909 21:57:06.665325 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-host-proc-sys-kernel\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665410 kubelet[2947]: I0909 21:57:06.665334 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-cilium-cgroup\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665532 kubelet[2947]: I0909 21:57:06.665342 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-hubble-tls\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.665532 kubelet[2947]: I0909 21:57:06.665350 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-xtables-lock\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.667112 kubelet[2947]: I0909 21:57:06.667091 2947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ecdfd5b4-cf8b-4c14-9974-8122ebe840f3-cilium-ipsec-secrets\") pod \"cilium-j6bn5\" (UID: \"ecdfd5b4-cf8b-4c14-9974-8122ebe840f3\") " pod="kube-system/cilium-j6bn5" Sep 9 21:57:06.668790 systemd-logind[1612]: New session 28 of user core. Sep 9 21:57:06.673539 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 21:57:06.725905 sshd[4684]: Connection closed by 139.178.89.65 port 45030 Sep 9 21:57:06.726472 sshd-session[4681]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:06.735694 systemd[1]: sshd@25-139.178.70.109:22-139.178.89.65:45030.service: Deactivated successfully. Sep 9 21:57:06.736896 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 21:57:06.738654 systemd-logind[1612]: Session 28 logged out. Waiting for processes to exit. Sep 9 21:57:06.741113 systemd[1]: Started sshd@26-139.178.70.109:22-139.178.89.65:45040.service - OpenSSH per-connection server daemon (139.178.89.65:45040). Sep 9 21:57:06.742062 systemd-logind[1612]: Removed session 28. Sep 9 21:57:06.814066 sshd[4691]: Accepted publickey for core from 139.178.89.65 port 45040 ssh2: RSA SHA256:+yHkHs/g1kjLKz8TerXa64YormdzNna7WxTDm23L2SM Sep 9 21:57:06.815234 sshd-session[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:57:06.818400 systemd-logind[1612]: New session 29 of user core. 
Sep 9 21:57:06.826461 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 9 21:57:06.956118 containerd[1643]: time="2025-09-09T21:57:06.955803554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6bn5,Uid:ecdfd5b4-cf8b-4c14-9974-8122ebe840f3,Namespace:kube-system,Attempt:0,}" Sep 9 21:57:06.968122 containerd[1643]: time="2025-09-09T21:57:06.968091644Z" level=info msg="connecting to shim 7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43" address="unix:///run/containerd/s/53f307691777e2c96133daf33d058dbedc2aaa934b67ce92649fa7beb142d91a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:57:06.987527 systemd[1]: Started cri-containerd-7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43.scope - libcontainer container 7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43. Sep 9 21:57:07.011830 containerd[1643]: time="2025-09-09T21:57:07.011756018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j6bn5,Uid:ecdfd5b4-cf8b-4c14-9974-8122ebe840f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\"" Sep 9 21:57:07.014048 containerd[1643]: time="2025-09-09T21:57:07.014021632Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 21:57:07.019929 containerd[1643]: time="2025-09-09T21:57:07.019884523Z" level=info msg="Container 194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:07.023322 containerd[1643]: time="2025-09-09T21:57:07.023289718Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963\"" Sep 9 21:57:07.023859 containerd[1643]: time="2025-09-09T21:57:07.023759569Z" level=info msg="StartContainer for \"194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963\"" Sep 9 21:57:07.025283 containerd[1643]: time="2025-09-09T21:57:07.025187850Z" level=info msg="connecting to shim 194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963" address="unix:///run/containerd/s/53f307691777e2c96133daf33d058dbedc2aaa934b67ce92649fa7beb142d91a" protocol=ttrpc version=3 Sep 9 21:57:07.042566 systemd[1]: Started cri-containerd-194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963.scope - libcontainer container 194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963. Sep 9 21:57:07.066798 containerd[1643]: time="2025-09-09T21:57:07.066664105Z" level=info msg="StartContainer for \"194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963\" returns successfully" Sep 9 21:57:07.082155 systemd[1]: cri-containerd-194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963.scope: Deactivated successfully. Sep 9 21:57:07.082449 systemd[1]: cri-containerd-194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963.scope: Consumed 15ms CPU time, 9.6M memory peak, 3.2M read from disk. 
Sep 9 21:57:07.084277 containerd[1643]: time="2025-09-09T21:57:07.084242094Z" level=info msg="received exit event container_id:\"194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963\" id:\"194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963\" pid:4762 exited_at:{seconds:1757455027 nanos:83965679}" Sep 9 21:57:07.085024 containerd[1643]: time="2025-09-09T21:57:07.085006978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963\" id:\"194d985a3c94dffc100fcf964177b83b6d71a73ccbbc7a3f50d96c13b819c963\" pid:4762 exited_at:{seconds:1757455027 nanos:83965679}" Sep 9 21:57:07.867693 containerd[1643]: time="2025-09-09T21:57:07.867632548Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 21:57:07.891908 containerd[1643]: time="2025-09-09T21:57:07.891375387Z" level=info msg="Container e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:07.906080 containerd[1643]: time="2025-09-09T21:57:07.906049220Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f\"" Sep 9 21:57:07.906831 containerd[1643]: time="2025-09-09T21:57:07.906812053Z" level=info msg="StartContainer for \"e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f\"" Sep 9 21:57:07.907827 containerd[1643]: time="2025-09-09T21:57:07.907810127Z" level=info msg="connecting to shim e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f" address="unix:///run/containerd/s/53f307691777e2c96133daf33d058dbedc2aaa934b67ce92649fa7beb142d91a" protocol=ttrpc version=3 Sep 9 21:57:07.934515 systemd[1]: Started cri-containerd-e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f.scope - libcontainer container e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f. Sep 9 21:57:07.953541 containerd[1643]: time="2025-09-09T21:57:07.953464202Z" level=info msg="StartContainer for \"e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f\" returns successfully" Sep 9 21:57:07.967424 systemd[1]: cri-containerd-e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f.scope: Deactivated successfully. Sep 9 21:57:07.967616 systemd[1]: cri-containerd-e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f.scope: Consumed 11ms CPU time, 7.3M memory peak, 2.2M read from disk. 
Sep 9 21:57:07.967926 containerd[1643]: time="2025-09-09T21:57:07.967892974Z" level=info msg="received exit event container_id:\"e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f\" id:\"e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f\" pid:4807 exited_at:{seconds:1757455027 nanos:967255499}" Sep 9 21:57:07.968081 containerd[1643]: time="2025-09-09T21:57:07.967957612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f\" id:\"e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f\" pid:4807 exited_at:{seconds:1757455027 nanos:967255499}" Sep 9 21:57:07.982124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6aadbecb0df9a33c41c0d577e3488ecbd25814fe4a49b05a664218f105f0e7f-rootfs.mount: Deactivated successfully. Sep 9 21:57:08.555147 containerd[1643]: time="2025-09-09T21:57:08.555014371Z" level=info msg="StopPodSandbox for \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\"" Sep 9 21:57:08.555455 containerd[1643]: time="2025-09-09T21:57:08.555353034Z" level=info msg="TearDown network for sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" successfully" Sep 9 21:57:08.555455 containerd[1643]: time="2025-09-09T21:57:08.555402644Z" level=info msg="StopPodSandbox for \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" returns successfully" Sep 9 21:57:08.555839 containerd[1643]: time="2025-09-09T21:57:08.555822923Z" level=info msg="RemovePodSandbox for \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\"" Sep 9 21:57:08.555980 containerd[1643]: time="2025-09-09T21:57:08.555918342Z" level=info msg="Forcibly stopping sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\"" Sep 9 21:57:08.556052 containerd[1643]: time="2025-09-09T21:57:08.556041710Z" level=info msg="TearDown network for sandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" successfully" Sep 9 21:57:08.557059 containerd[1643]: time="2025-09-09T21:57:08.557043376Z" level=info msg="Ensure that sandbox 77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3 in task-service has been cleanup successfully" Sep 9 21:57:08.592801 containerd[1643]: time="2025-09-09T21:57:08.592766310Z" level=info msg="RemovePodSandbox \"77df44a395c0da7772e25f5277a351800a60b84bd505047f16b72a4fdf32aae3\" returns successfully" Sep 9 21:57:08.593170 containerd[1643]: time="2025-09-09T21:57:08.593147051Z" level=info msg="StopPodSandbox for \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\"" Sep 9 21:57:08.593250 containerd[1643]: time="2025-09-09T21:57:08.593230784Z" level=info msg="TearDown network for sandbox \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" successfully" Sep 9 21:57:08.593250 containerd[1643]: time="2025-09-09T21:57:08.593245275Z" level=info msg="StopPodSandbox for \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" returns successfully" Sep 9 21:57:08.594390 containerd[1643]: time="2025-09-09T21:57:08.593581654Z" level=info msg="RemovePodSandbox for \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\"" Sep 9 21:57:08.594390 containerd[1643]: time="2025-09-09T21:57:08.593600404Z" level=info msg="Forcibly stopping sandbox \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\"" Sep 9 21:57:08.594390 containerd[1643]: time="2025-09-09T21:57:08.593670548Z" level=info msg="TearDown network for sandbox 
\"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" successfully" Sep 9 21:57:08.594522 containerd[1643]: time="2025-09-09T21:57:08.594492219Z" level=info msg="Ensure that sandbox a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10 in task-service has been cleanup successfully" Sep 9 21:57:08.628839 kubelet[2947]: E0909 21:57:08.628795 2947 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 21:57:08.637362 containerd[1643]: time="2025-09-09T21:57:08.637333843Z" level=info msg="RemovePodSandbox \"a2131e432ee5f99774ed186fffec8222e80b6b7f15a542401455dea440111c10\" returns successfully" Sep 9 21:57:08.870509 containerd[1643]: time="2025-09-09T21:57:08.870405055Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 21:57:08.890586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28758993.mount: Deactivated successfully. Sep 9 21:57:08.897429 containerd[1643]: time="2025-09-09T21:57:08.893717366Z" level=info msg="Container 114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:08.896968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520111329.mount: Deactivated successfully. Sep 9 21:57:08.901762 containerd[1643]: time="2025-09-09T21:57:08.901715839Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a\"" Sep 9 21:57:08.902320 containerd[1643]: time="2025-09-09T21:57:08.902304839Z" level=info msg="StartContainer for \"114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a\"" Sep 9 21:57:08.904026 containerd[1643]: time="2025-09-09T21:57:08.903980914Z" level=info msg="connecting to shim 114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a" address="unix:///run/containerd/s/53f307691777e2c96133daf33d058dbedc2aaa934b67ce92649fa7beb142d91a" protocol=ttrpc version=3 Sep 9 21:57:08.923639 systemd[1]: Started cri-containerd-114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a.scope - libcontainer container 114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a. Sep 9 21:57:08.981846 containerd[1643]: time="2025-09-09T21:57:08.981819800Z" level=info msg="StartContainer for \"114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a\" returns successfully" Sep 9 21:57:09.013032 systemd[1]: cri-containerd-114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a.scope: Deactivated successfully. 
Sep 9 21:57:09.014379 containerd[1643]: time="2025-09-09T21:57:09.014288764Z" level=info msg="received exit event container_id:\"114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a\" id:\"114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a\" pid:4851 exited_at:{seconds:1757455029 nanos:14144424}" Sep 9 21:57:09.018063 containerd[1643]: time="2025-09-09T21:57:09.018043272Z" level=info msg="TaskExit event in podsandbox handler container_id:\"114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a\" id:\"114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a\" pid:4851 exited_at:{seconds:1757455029 nanos:14144424}" Sep 9 21:57:09.876756 containerd[1643]: time="2025-09-09T21:57:09.876720508Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 21:57:09.886037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-114a9a4dabe85dfc2f3a577e2ec5f2e6b7f578908ae753aa7abc4d5dda41029a-rootfs.mount: Deactivated successfully. Sep 9 21:57:09.900947 containerd[1643]: time="2025-09-09T21:57:09.900352694Z" level=info msg="Container a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:09.912277 containerd[1643]: time="2025-09-09T21:57:09.912243726Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302\"" Sep 9 21:57:09.912916 containerd[1643]: time="2025-09-09T21:57:09.912864668Z" level=info msg="StartContainer for \"a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302\"" Sep 9 21:57:09.914532 containerd[1643]: time="2025-09-09T21:57:09.914508966Z" level=info msg="connecting to shim a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302" address="unix:///run/containerd/s/53f307691777e2c96133daf33d058dbedc2aaa934b67ce92649fa7beb142d91a" protocol=ttrpc version=3 Sep 9 21:57:09.940568 systemd[1]: Started cri-containerd-a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302.scope - libcontainer container a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302. Sep 9 21:57:09.984202 systemd[1]: cri-containerd-a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302.scope: Deactivated successfully. 
Sep 9 21:57:09.985554 containerd[1643]: time="2025-09-09T21:57:09.985500975Z" level=info msg="received exit event container_id:\"a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302\" id:\"a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302\" pid:4895 exited_at:{seconds:1757455029 nanos:985388227}" Sep 9 21:57:09.986093 containerd[1643]: time="2025-09-09T21:57:09.985892885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302\" id:\"a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302\" pid:4895 exited_at:{seconds:1757455029 nanos:985388227}" Sep 9 21:57:09.990740 containerd[1643]: time="2025-09-09T21:57:09.990715848Z" level=info msg="StartContainer for \"a8de0c4532f5ba5de309c3549eabcb743ce3e87c51fd48b02fd100ba9707d302\" returns successfully" Sep 9 21:57:10.878419 containerd[1643]: time="2025-09-09T21:57:10.878391985Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 21:57:10.909866 containerd[1643]: time="2025-09-09T21:57:10.908627671Z" level=info msg="Container 8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:57:10.909333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2271572969.mount: Deactivated successfully. Sep 9 21:57:10.935783 containerd[1643]: time="2025-09-09T21:57:10.935753896Z" level=info msg="CreateContainer within sandbox \"7be2bbb768f5da4ce4ae964ca3786835c2ceed28ca640af0afc91f501e40aa43\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\"" Sep 9 21:57:10.936552 containerd[1643]: time="2025-09-09T21:57:10.936528200Z" level=info msg="StartContainer for \"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\"" Sep 9 21:57:10.937480 containerd[1643]: time="2025-09-09T21:57:10.937449556Z" level=info msg="connecting to shim 8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d" address="unix:///run/containerd/s/53f307691777e2c96133daf33d058dbedc2aaa934b67ce92649fa7beb142d91a" protocol=ttrpc version=3 Sep 9 21:57:10.959632 systemd[1]: Started cri-containerd-8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d.scope - libcontainer container 8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d. 
Sep 9 21:57:11.007684 containerd[1643]: time="2025-09-09T21:57:11.007609064Z" level=info msg="StartContainer for \"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\" returns successfully" Sep 9 21:57:11.390230 containerd[1643]: time="2025-09-09T21:57:11.390077431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\" id:\"600b9556cf8425e878f87d243eec09a845cc089205b4acf018c6d5c66db5e6c9\" pid:4958 exited_at:{seconds:1757455031 nanos:389856931}" Sep 9 21:57:11.792001 kubelet[2947]: I0909 21:57:11.791756 2947 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T21:57:11Z","lastTransitionTime":"2025-09-09T21:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 21:57:12.713438 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 21:57:13.267772 containerd[1643]: time="2025-09-09T21:57:13.267732630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\" id:\"72d46ee7f319eef0bd3cca41615d27620c437df7542a08c70d46388d1fe7ed91\" pid:5039 exit_status:1 exited_at:{seconds:1757455033 nanos:266953984}" Sep 9 21:57:13.277928 kubelet[2947]: E0909 21:57:13.277885 2947 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48782->127.0.0.1:45669: write tcp 127.0.0.1:48782->127.0.0.1:45669: write: broken pipe Sep 9 21:57:15.396918 containerd[1643]: time="2025-09-09T21:57:15.396840948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\" id:\"b494629ad554fd064df4dee2b693aa76ee7d26d5cf9f78a6b03ee64be9f91c97\" pid:5440 exit_status:1 exited_at:{seconds:1757455035 nanos:396482608}" Sep 9 21:57:15.421423 systemd-networkd[1527]: lxc_health: Link UP Sep 9 21:57:15.433475 systemd-networkd[1527]: lxc_health: Gained carrier Sep 9 21:57:16.796495 systemd-networkd[1527]: lxc_health: Gained IPv6LL Sep 9 21:57:16.961039 kubelet[2947]: I0909 21:57:16.960934 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j6bn5" podStartSLOduration=10.96092288 podStartE2EDuration="10.96092288s" podCreationTimestamp="2025-09-09 21:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:57:11.896813602 +0000 UTC m=+123.436922726" watchObservedRunningTime="2025-09-09 21:57:16.96092288 +0000 UTC m=+128.501032002" Sep 9 21:57:17.527998 containerd[1643]: time="2025-09-09T21:57:17.527973852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\" id:\"7becaed8f6361b67d8fae258dd694b5bdda433c8d6846bb294216df2ade192d9\" pid:5526 exited_at:{seconds:1757455037 nanos:527139780}" Sep 9 21:57:19.590865 containerd[1643]: time="2025-09-09T21:57:19.590805270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\" id:\"5177ce55e1639289117f1c69edf6218e8fa685f5042ab3525f658674e8ea053d\" pid:5555 exited_at:{seconds:1757455039 nanos:590180081}" Sep 9 21:57:19.595959 kubelet[2947]: E0909 21:57:19.595862 2947 
upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48812->127.0.0.1:45669: write tcp 127.0.0.1:48812->127.0.0.1:45669: write: broken pipe Sep 9 21:57:21.672933 containerd[1643]: time="2025-09-09T21:57:21.672904281Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a242163666b8e3ad967a57fa9fa5477035f24cd7d2606f1f559fc2f7664cf2d\" id:\"a0d9e9d2b37739011446984e93db1903bba485468f10987efbab5cf9cf5cc974\" pid:5577 exited_at:{seconds:1757455041 nanos:672608071}" Sep 9 21:57:21.676664 sshd[4699]: Connection closed by 139.178.89.65 port 45040 Sep 9 21:57:21.682643 sshd-session[4691]: pam_unix(sshd:session): session closed for user core Sep 9 21:57:21.684719 systemd-logind[1612]: Session 29 logged out. Waiting for processes to exit. Sep 9 21:57:21.684881 systemd[1]: sshd@26-139.178.70.109:22-139.178.89.65:45040.service: Deactivated successfully. Sep 9 21:57:21.686325 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 21:57:21.687939 systemd-logind[1612]: Removed session 29.