Sep 9 00:55:25.721463 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:13:49 -00 2025 Sep 9 00:55:25.721481 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a Sep 9 00:55:25.721488 kernel: Disabled fast string operations Sep 9 00:55:25.721497 kernel: BIOS-provided physical RAM map: Sep 9 00:55:25.721501 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Sep 9 00:55:25.721505 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Sep 9 00:55:25.721512 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Sep 9 00:55:25.721516 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Sep 9 00:55:25.721520 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Sep 9 00:55:25.721528 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Sep 9 00:55:25.721532 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Sep 9 00:55:25.721537 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Sep 9 00:55:25.721541 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Sep 9 00:55:25.721545 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 9 00:55:25.721556 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Sep 9 00:55:25.721563 kernel: NX (Execute Disable) protection: active Sep 9 00:55:25.721568 kernel: APIC: Static calls initialized Sep 9 00:55:25.721573 kernel: SMBIOS 2.7 present. Sep 9 00:55:25.721578 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Sep 9 00:55:25.721582 kernel: DMI: Memory slots populated: 1/128 Sep 9 00:55:25.721593 kernel: vmware: hypercall mode: 0x00 Sep 9 00:55:25.721597 kernel: Hypervisor detected: VMware Sep 9 00:55:25.721602 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Sep 9 00:55:25.721607 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Sep 9 00:55:25.721612 kernel: vmware: using clock offset of 3710759485 ns Sep 9 00:55:25.721622 kernel: tsc: Detected 3408.000 MHz processor Sep 9 00:55:25.721628 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:55:25.722661 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:55:25.722676 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Sep 9 00:55:25.722682 kernel: total RAM covered: 3072M Sep 9 00:55:25.722690 kernel: Found optimal setting for mtrr clean up Sep 9 00:55:25.722695 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Sep 9 00:55:25.722701 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Sep 9 00:55:25.722706 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:55:25.722711 kernel: Using GB pages for direct mapping Sep 9 00:55:25.722716 kernel: ACPI: Early table checksum verification disabled Sep 9 00:55:25.722720 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Sep 9 00:55:25.722726 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Sep 9 00:55:25.722731 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Sep 9 00:55:25.722737 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Sep 9 00:55:25.722744 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 00:55:25.722749 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Sep 9 00:55:25.722754 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Sep 9 00:55:25.722759 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Sep 9 00:55:25.722766 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Sep 9 00:55:25.722771 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Sep 9 00:55:25.722776 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Sep 9 00:55:25.722781 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Sep 9 00:55:25.722787 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Sep 9 00:55:25.722792 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Sep 9 00:55:25.722797 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 00:55:25.722802 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Sep 9 00:55:25.722807 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Sep 9 00:55:25.722812 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Sep 9 00:55:25.722819 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Sep 9 00:55:25.722824 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Sep 9 00:55:25.722829 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Sep 9 00:55:25.722834 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Sep 9 00:55:25.722839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 9 00:55:25.722844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 9 00:55:25.722850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Sep 9 00:55:25.722855 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Sep 9 00:55:25.722860 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Sep 9 00:55:25.722866 kernel: Zone ranges: Sep 9 00:55:25.722872 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:55:25.722877 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Sep 9 00:55:25.722882 kernel: Normal empty Sep 9 00:55:25.722887 kernel: Device empty Sep 9 00:55:25.722892 kernel: Movable zone start for each node Sep 9 00:55:25.722897 kernel: Early memory node ranges Sep 9 00:55:25.722903 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Sep 9 00:55:25.722908 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Sep 9 00:55:25.722913 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Sep 9 00:55:25.722919 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Sep 9 00:55:25.722924 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:55:25.722929 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Sep 9 00:55:25.722950 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Sep 9 00:55:25.722955 kernel: ACPI: PM-Timer IO Port: 0x1008 Sep 9 00:55:25.722960 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Sep 9 00:55:25.722965 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 9 00:55:25.722970 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 9 00:55:25.722975 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 9 00:55:25.722981 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 9 00:55:25.722986 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 9 00:55:25.722990 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 9 00:55:25.722995 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 9 00:55:25.723000 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 9 00:55:25.723005 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 9 00:55:25.723010 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 9 00:55:25.723015 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 9 00:55:25.723020 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 9 00:55:25.723025 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 9 00:55:25.723031 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 9 00:55:25.723036 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 9 00:55:25.723041 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 9 00:55:25.723045 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Sep 9 00:55:25.723050 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Sep 9 00:55:25.723055 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Sep 9 00:55:25.723060 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Sep 9 00:55:25.723065 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Sep 9 00:55:25.723070 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Sep 9 00:55:25.723075 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Sep 9 00:55:25.723081 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Sep 9 00:55:25.723086 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Sep 9 00:55:25.723091 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Sep 9 00:55:25.723096 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Sep 9 00:55:25.723101 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Sep 9 00:55:25.723106 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Sep 9 00:55:25.723111 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Sep 9 00:55:25.723116 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Sep 9 00:55:25.723121 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Sep 9 00:55:25.723126 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Sep 9 00:55:25.723131 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Sep 9 00:55:25.723136 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Sep 9 00:55:25.723141 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Sep 9 00:55:25.723146 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Sep 9 00:55:25.723151 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Sep 9 00:55:25.723157 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Sep 9 00:55:25.723165 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Sep 9 00:55:25.723171 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Sep 9 00:55:25.723176 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Sep 9 00:55:25.723181 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Sep 9 00:55:25.723187 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Sep 9 00:55:25.723193 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Sep 9 00:55:25.723198 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Sep 9 00:55:25.723203 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Sep 9 00:55:25.723208 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Sep 9 00:55:25.723213 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Sep 9 00:55:25.723219 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] 
high edge lint[0x1]) Sep 9 00:55:25.723225 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Sep 9 00:55:25.723230 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Sep 9 00:55:25.723235 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Sep 9 00:55:25.723240 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Sep 9 00:55:25.723246 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Sep 9 00:55:25.723251 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Sep 9 00:55:25.723256 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Sep 9 00:55:25.723261 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Sep 9 00:55:25.723266 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Sep 9 00:55:25.723272 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Sep 9 00:55:25.723278 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Sep 9 00:55:25.723283 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Sep 9 00:55:25.723288 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Sep 9 00:55:25.723294 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Sep 9 00:55:25.723299 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Sep 9 00:55:25.723304 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Sep 9 00:55:25.723309 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Sep 9 00:55:25.723315 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Sep 9 00:55:25.723320 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Sep 9 00:55:25.723325 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Sep 9 00:55:25.723332 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Sep 9 00:55:25.723337 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Sep 9 00:55:25.723342 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Sep 9 00:55:25.723347 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Sep 9 00:55:25.723352 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Sep 9 00:55:25.723357 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Sep 9 00:55:25.723363 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Sep 9 00:55:25.723368 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Sep 9 00:55:25.723373 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Sep 9 00:55:25.723378 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Sep 9 00:55:25.723385 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Sep 9 00:55:25.723390 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Sep 9 00:55:25.723395 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Sep 9 00:55:25.723400 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Sep 9 00:55:25.723405 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Sep 9 00:55:25.723411 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Sep 9 00:55:25.723416 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Sep 9 00:55:25.723421 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Sep 9 00:55:25.723426 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Sep 9 00:55:25.723432 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Sep 9 00:55:25.723438 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Sep 9 00:55:25.723443 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Sep 9 00:55:25.723448 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Sep 9 
00:55:25.723454 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Sep 9 00:55:25.723459 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Sep 9 00:55:25.723464 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Sep 9 00:55:25.723470 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Sep 9 00:55:25.723475 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Sep 9 00:55:25.723480 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Sep 9 00:55:25.723485 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Sep 9 00:55:25.723491 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Sep 9 00:55:25.723497 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Sep 9 00:55:25.723502 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Sep 9 00:55:25.723507 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Sep 9 00:55:25.723513 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Sep 9 00:55:25.723518 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Sep 9 00:55:25.723523 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Sep 9 00:55:25.723528 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Sep 9 00:55:25.723533 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Sep 9 00:55:25.723539 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Sep 9 00:55:25.723545 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Sep 9 00:55:25.723550 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Sep 9 00:55:25.723555 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Sep 9 00:55:25.723560 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Sep 9 00:55:25.723566 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Sep 9 00:55:25.723571 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Sep 9 00:55:25.723576 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Sep 9 00:55:25.723581 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Sep 9 00:55:25.723586 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Sep 9 00:55:25.723593 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Sep 9 00:55:25.723598 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Sep 9 00:55:25.723603 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Sep 9 00:55:25.723609 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Sep 9 00:55:25.723614 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Sep 9 00:55:25.723619 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Sep 9 00:55:25.723624 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Sep 9 00:55:25.723629 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Sep 9 00:55:25.723651 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:55:25.723658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Sep 9 00:55:25.723666 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:55:25.723671 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Sep 9 00:55:25.723676 kernel: TSC deadline timer available Sep 9 00:55:25.723682 kernel: CPU topo: Max. logical packages: 128 Sep 9 00:55:25.723687 kernel: CPU topo: Max. logical dies: 128 Sep 9 00:55:25.723692 kernel: CPU topo: Max. dies per package: 1 Sep 9 00:55:25.723697 kernel: CPU topo: Max. threads per core: 1 Sep 9 00:55:25.723703 kernel: CPU topo: Num. cores per package: 1 Sep 9 00:55:25.723708 kernel: CPU topo: Num. 
threads per package: 1 Sep 9 00:55:25.723713 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Sep 9 00:55:25.723720 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Sep 9 00:55:25.723725 kernel: Booting paravirtualized kernel on VMware hypervisor Sep 9 00:55:25.723731 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:55:25.723736 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Sep 9 00:55:25.723742 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Sep 9 00:55:25.723747 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Sep 9 00:55:25.723753 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Sep 9 00:55:25.723758 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Sep 9 00:55:25.723763 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Sep 9 00:55:25.723769 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Sep 9 00:55:25.723775 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Sep 9 00:55:25.723780 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Sep 9 00:55:25.723785 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Sep 9 00:55:25.723790 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Sep 9 00:55:25.723795 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Sep 9 00:55:25.723801 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Sep 9 00:55:25.723806 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Sep 9 00:55:25.723811 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Sep 9 00:55:25.723817 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Sep 9 00:55:25.723822 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Sep 9 00:55:25.723828 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Sep 9 00:55:25.723833 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Sep 9 00:55:25.723839 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a Sep 9 00:55:25.723845 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:55:25.723850 kernel: random: crng init done Sep 9 00:55:25.723857 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Sep 9 00:55:25.723862 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Sep 9 00:55:25.723867 kernel: printk: log_buf_len min size: 262144 bytes Sep 9 00:55:25.723873 kernel: printk: log_buf_len: 1048576 bytes Sep 9 00:55:25.723878 kernel: printk: early log buf free: 245592(93%) Sep 9 00:55:25.723883 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:55:25.723889 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 00:55:25.723894 kernel: Fallback order for Node 0: 0 Sep 9 00:55:25.723899 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Sep 9 00:55:25.723923 kernel: Policy zone: DMA32 Sep 9 00:55:25.723930 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:55:25.723935 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Sep 9 00:55:25.723940 kernel: ftrace: allocating 40102 entries in 157 pages Sep 9 00:55:25.723946 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 00:55:25.723951 kernel: Dynamic Preempt: voluntary Sep 9 00:55:25.723957 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:55:25.723963 kernel: rcu: RCU event tracing is enabled. Sep 9 00:55:25.723983 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Sep 9 00:55:25.723989 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:55:25.723995 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:55:25.724001 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:55:25.724006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:55:25.724011 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Sep 9 00:55:25.724017 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:55:25.724022 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:55:25.724027 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Sep 9 00:55:25.724033 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Sep 9 00:55:25.724038 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Sep 9 00:55:25.724045 kernel: Console: colour VGA+ 80x25 Sep 9 00:55:25.724050 kernel: printk: legacy console [tty0] enabled Sep 9 00:55:25.724055 kernel: printk: legacy console [ttyS0] enabled Sep 9 00:55:25.724061 kernel: ACPI: Core revision 20240827 Sep 9 00:55:25.724066 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Sep 9 00:55:25.724071 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:55:25.724077 kernel: x2apic enabled Sep 9 00:55:25.724082 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:55:25.724088 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:55:25.724095 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 9 00:55:25.724100 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Sep 9 00:55:25.724106 kernel: Disabled fast string operations Sep 9 00:55:25.724111 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 9 00:55:25.724116 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 9 00:55:25.724122 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:55:25.724127 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Sep 9 00:55:25.724133 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 9 00:55:25.724138 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 9 00:55:25.724145 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 9 00:55:25.724150 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:55:25.724156 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:55:25.724161 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Sep 9 00:55:25.724167 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 9 00:55:25.724172 kernel: GDS: Unknown: Dependent on hypervisor status Sep 9 00:55:25.724177 kernel: active return thunk: its_return_thunk Sep 9 00:55:25.724183 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 00:55:25.724188 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:55:25.724194 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:55:25.724200 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:55:25.724206 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:55:25.724211 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 00:55:25.724217 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:55:25.724222 kernel: pid_max: default: 131072 minimum: 1024 Sep 9 00:55:25.724228 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 00:55:25.724233 kernel: landlock: Up and running. Sep 9 00:55:25.724239 kernel: SELinux: Initializing. Sep 9 00:55:25.724245 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 00:55:25.724250 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 00:55:25.724256 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 9 00:55:25.724261 kernel: Performance Events: Skylake events, core PMU driver. Sep 9 00:55:25.724267 kernel: core: CPUID marked event: 'cpu cycles' unavailable Sep 9 00:55:25.724291 kernel: core: CPUID marked event: 'instructions' unavailable Sep 9 00:55:25.724296 kernel: core: CPUID marked event: 'bus cycles' unavailable Sep 9 00:55:25.724302 kernel: core: CPUID marked event: 'cache references' unavailable Sep 9 00:55:25.724307 kernel: core: CPUID marked event: 'cache misses' unavailable Sep 9 00:55:25.724314 kernel: core: CPUID marked event: 'branch instructions' unavailable Sep 9 00:55:25.724335 kernel: core: CPUID marked event: 'branch misses' unavailable Sep 9 00:55:25.724340 kernel: ... version: 1 Sep 9 00:55:25.724345 kernel: ... bit width: 48 Sep 9 00:55:25.724350 kernel: ... generic registers: 4 Sep 9 00:55:25.724356 kernel: ... value mask: 0000ffffffffffff Sep 9 00:55:25.724361 kernel: ... max period: 000000007fffffff Sep 9 00:55:25.724367 kernel: ... fixed-purpose events: 0 Sep 9 00:55:25.724372 kernel: ... 
event mask: 000000000000000f Sep 9 00:55:25.724378 kernel: signal: max sigframe size: 1776 Sep 9 00:55:25.724384 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:55:25.724389 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:55:25.724395 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Sep 9 00:55:25.724400 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 00:55:25.724406 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:55:25.724411 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:55:25.724416 kernel: .... node #0, CPUs: #1 Sep 9 00:55:25.724422 kernel: Disabled fast string operations Sep 9 00:55:25.724428 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 00:55:25.724433 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Sep 9 00:55:25.724439 kernel: Memory: 1924256K/2096628K available (14336K kernel code, 2428K rwdata, 9960K rodata, 54036K init, 2932K bss, 161000K reserved, 0K cma-reserved) Sep 9 00:55:25.724445 kernel: devtmpfs: initialized Sep 9 00:55:25.724450 kernel: x86/mm: Memory block size: 128MB Sep 9 00:55:25.724456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Sep 9 00:55:25.724461 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:55:25.724485 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Sep 9 00:55:25.724491 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:55:25.724498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:55:25.724503 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:55:25.724509 kernel: audit: type=2000 audit(1757379322.276:1): state=initialized audit_enabled=0 res=1 Sep 9 00:55:25.724514 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:55:25.724520 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:55:25.724525 kernel: cpuidle: using governor menu Sep 9 00:55:25.724531 kernel: Simple Boot Flag at 0x36 set to 0x80 Sep 9 00:55:25.724536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:55:25.724542 kernel: dca service started, version 1.12.1 Sep 9 00:55:25.724555 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Sep 9 00:55:25.724562 kernel: PCI: Using configuration type 1 for base access Sep 9 00:55:25.724568 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 00:55:25.724574 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:55:25.724580 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:55:25.724585 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:55:25.724591 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:55:25.724597 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:55:25.724603 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:55:25.724610 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:55:25.724616 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:55:25.724622 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Sep 9 00:55:25.724628 kernel: ACPI: Interpreter enabled Sep 9 00:55:25.724941 kernel: ACPI: PM: (supports S0 S1 S5) Sep 9 00:55:25.724952 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:55:25.724958 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:55:25.724964 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:55:25.724969 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Sep 9 00:55:25.724978 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Sep 9 00:55:25.725072 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:55:25.725127 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Sep 9 00:55:25.725177 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Sep 9 00:55:25.725188 kernel: PCI host bridge to bus 0000:00 Sep 9 00:55:25.725242 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:55:25.725295 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Sep 9 00:55:25.725340 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 00:55:25.725384 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:55:25.725430 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Sep 9 00:55:25.725476 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Sep 9 00:55:25.725536 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Sep 9 00:55:25.725595 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Sep 9 00:55:25.726140 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:55:25.726208 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Sep 9 00:55:25.726275 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Sep 9 00:55:25.726337 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Sep 9 00:55:25.726389 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Sep 9 00:55:25.726438 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Sep 9 00:55:25.726489 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Sep 9 00:55:25.726538 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Sep 9 00:55:25.726593 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Sep 9 00:55:25.728664 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Sep 9 00:55:25.728732 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Sep 9 00:55:25.728791 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint Sep 9 
00:55:25.728844 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Sep 9 00:55:25.728894 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Sep 9 00:55:25.728948 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Sep 9 00:55:25.728998 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Sep 9 00:55:25.729050 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Sep 9 00:55:25.729099 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Sep 9 00:55:25.729148 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Sep 9 00:55:25.729197 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:55:25.729251 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Sep 9 00:55:25.729309 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 00:55:25.729358 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 00:55:25.729409 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 00:55:25.729458 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:55:25.729518 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.729569 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:55:25.729619 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 00:55:25.731253 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 00:55:25.731329 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.731388 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.731446 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:55:25.731498 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 00:55:25.731547 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 00:55:25.731598 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:55:25.732390 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.732460 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.732528 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:55:25.732592 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 00:55:25.732657 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 00:55:25.732711 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:55:25.732761 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.732816 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.732871 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:55:25.732922 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 00:55:25.732979 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:55:25.733042 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.733098 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.733150 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:55:25.733200 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 00:55:25.733251 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:55:25.733304 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Sep 9 
00:55:25.733370 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.733432 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:55:25.733498 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 00:55:25.733563 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:55:25.733626 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.737747 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.737821 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:55:25.737876 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 00:55:25.737928 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 00:55:25.737979 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.738036 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.738089 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:55:25.738141 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 00:55:25.738194 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:55:25.738245 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.738301 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.738352 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:55:25.738402 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 00:55:25.738452 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 00:55:25.738502 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.738558 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.738612 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:55:25.738678 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 00:55:25.738729 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 00:55:25.738780 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:55:25.738830 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.738885 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.738950 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:55:25.739006 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 00:55:25.739056 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 00:55:25.739113 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:55:25.739164 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.739224 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.741958 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:55:25.744265 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 00:55:25.744339 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:55:25.744394 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.744452 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.744504 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:55:25.744554 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 00:55:25.744605 kernel: pci 0000:00:16.4: 
bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:55:25.744663 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.744722 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.744773 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:55:25.744830 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 00:55:25.744887 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:55:25.744937 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.744992 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.745043 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:55:25.745096 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 00:55:25.745147 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 00:55:25.745196 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.745250 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.745304 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:55:25.745355 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 00:55:25.745406 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:55:25.745458 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.745514 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.745565 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:55:25.745614 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 00:55:25.745685 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 00:55:25.745735 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:55:25.745785 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.745842 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.745893 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:55:25.745943 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 00:55:25.745992 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 00:55:25.746045 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:55:25.746094 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.746148 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.746201 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:55:25.746251 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 00:55:25.746327 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 00:55:25.746379 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:55:25.746432 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.746489 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.746540 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:55:25.746590 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 00:55:25.746649 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:55:25.746700 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.746757 kernel: pci 0000:00:17.4: [15ad:07a0] type 
01 class 0x060400 PCIe Root Port Sep 9 00:55:25.746810 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:55:25.746861 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 00:55:25.746911 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 00:55:25.746961 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.747015 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.747065 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:55:25.747114 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 9 00:55:25.747166 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:55:25.747217 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.747272 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.747322 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:55:25.747373 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 00:55:25.747422 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:55:25.747472 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.747530 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.747581 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:55:25.747630 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 00:55:25.747720 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:55:25.747783 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.747846 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.747897 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:55:25.747952 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 00:55:25.748002 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 00:55:25.748053 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:55:25.748103 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.748157 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.748208 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:55:25.748258 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 00:55:25.748316 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 00:55:25.748367 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:55:25.748416 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.748472 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.748523 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:55:25.748573 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 00:55:25.748623 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:55:25.748686 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.748741 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.748792 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:55:25.748842 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 00:55:25.748892 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 
64bit pref] Sep 9 00:55:25.748942 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.748997 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.749051 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:55:25.749101 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 9 00:55:25.749150 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:55:25.749200 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.749254 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.749310 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:55:25.749360 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 00:55:25.749409 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:55:25.749461 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.749516 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.749567 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:55:25.749616 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 00:55:25.749677 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:55:25.749727 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.749783 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Sep 9 00:55:25.749837 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:55:25.749886 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 00:55:25.749936 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:55:25.749985 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.750042 kernel: pci_bus 0000:01: extended config space not accessible Sep 9 00:55:25.750095 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:55:25.750147 kernel: pci_bus 0000:02: extended config space not accessible Sep 9 00:55:25.750159 kernel: acpiphp: Slot [32] registered Sep 9 00:55:25.750165 kernel: acpiphp: Slot [33] registered Sep 9 00:55:25.750171 kernel: acpiphp: Slot [34] registered Sep 9 00:55:25.750177 kernel: acpiphp: Slot [35] registered Sep 9 00:55:25.750182 kernel: acpiphp: Slot [36] registered Sep 9 00:55:25.750188 kernel: acpiphp: Slot [37] registered Sep 9 00:55:25.750194 kernel: acpiphp: Slot [38] registered Sep 9 00:55:25.750200 kernel: acpiphp: Slot [39] registered Sep 9 00:55:25.750206 kernel: acpiphp: Slot [40] registered Sep 9 00:55:25.750213 kernel: acpiphp: Slot [41] registered Sep 9 00:55:25.750219 kernel: acpiphp: Slot [42] registered Sep 9 00:55:25.750225 kernel: acpiphp: Slot [43] registered Sep 9 00:55:25.750231 kernel: acpiphp: Slot [44] registered Sep 9 00:55:25.750236 kernel: acpiphp: Slot [45] registered Sep 9 00:55:25.750242 kernel: acpiphp: Slot [46] registered Sep 9 00:55:25.750248 kernel: acpiphp: Slot [47] registered Sep 9 00:55:25.750254 kernel: acpiphp: Slot [48] registered Sep 9 00:55:25.750260 kernel: acpiphp: Slot [49] registered Sep 9 00:55:25.750266 kernel: acpiphp: Slot [50] registered Sep 9 00:55:25.750274 kernel: acpiphp: Slot [51] registered Sep 9 00:55:25.750280 kernel: acpiphp: Slot [52] registered Sep 9 00:55:25.750286 kernel: acpiphp: Slot [53] registered Sep 9 00:55:25.750292 kernel: acpiphp: Slot [54] registered Sep 9 00:55:25.750298 kernel: acpiphp: Slot [55] registered Sep 
9 00:55:25.750304 kernel: acpiphp: Slot [56] registered Sep 9 00:55:25.750309 kernel: acpiphp: Slot [57] registered Sep 9 00:55:25.750315 kernel: acpiphp: Slot [58] registered Sep 9 00:55:25.750321 kernel: acpiphp: Slot [59] registered Sep 9 00:55:25.750328 kernel: acpiphp: Slot [60] registered Sep 9 00:55:25.750334 kernel: acpiphp: Slot [61] registered Sep 9 00:55:25.750339 kernel: acpiphp: Slot [62] registered Sep 9 00:55:25.750345 kernel: acpiphp: Slot [63] registered Sep 9 00:55:25.750396 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Sep 9 00:55:25.750446 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 9 00:55:25.750496 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Sep 9 00:55:25.750546 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Sep 9 00:55:25.750666 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Sep 9 00:55:25.750721 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Sep 9 00:55:25.750780 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Sep 9 00:55:25.750833 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Sep 9 00:55:25.750885 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Sep 9 00:55:25.750937 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 9 00:55:25.750988 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 9 00:55:25.751039 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Sep 9 00:55:25.751094 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:55:25.751147 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:55:25.751200 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:55:25.751254 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:55:25.751307 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:55:25.753221 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:55:25.753286 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:55:25.753346 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:55:25.753407 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Sep 9 00:55:25.753461 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Sep 9 00:55:25.753513 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Sep 9 00:55:25.753565 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Sep 9 00:55:25.753617 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Sep 9 00:55:25.753685 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Sep 9 00:55:25.753742 kernel: pci 0000:0b:00.0: supports D1 D2 Sep 9 00:55:25.753794 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 9 00:55:25.753845 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Sep 9 00:55:25.753898 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:55:25.753952 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:55:25.754005 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:55:25.754057 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:55:25.754110 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:55:25.754165 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:55:25.754217 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:55:25.754269 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:55:25.754321 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:55:25.754373 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:55:25.754424 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:55:25.754475 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:55:25.754529 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:55:25.754581 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:55:25.754633 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:55:25.754713 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:55:25.754766 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:55:25.754817 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:55:25.754867 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:55:25.754917 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:55:25.754972 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:55:25.755022 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:55:25.755072 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:55:25.755124 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:55:25.755133 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Sep 9 00:55:25.755139 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Sep 9 00:55:25.755145 kernel: ACPI: PCI: Interrupt link LNKB disabled Sep 9 00:55:25.755153 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:55:25.755159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 9 00:55:25.755165 kernel: iommu: Default domain type: Translated Sep 9 00:55:25.755171 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:55:25.755177 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:55:25.755183 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:55:25.755190 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Sep 9 00:55:25.755195 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Sep 9 00:55:25.755246 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Sep 9 00:55:25.755298 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Sep 9 00:55:25.755347 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:55:25.755357 kernel: vgaarb: loaded Sep 9 00:55:25.755363 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Sep 9 00:55:25.755369 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Sep 9 00:55:25.755375 kernel: clocksource: Switched to clocksource tsc-early Sep 9 00:55:25.755380 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:55:25.755386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:55:25.755392 kernel: pnp: PnP ACPI init Sep 9 00:55:25.755446 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Sep 9 00:55:25.755494 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Sep 9 
00:55:25.755540 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Sep 9 00:55:25.755591 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Sep 9 00:55:25.755664 kernel: pnp 00:06: [dma 2] Sep 9 00:55:25.755716 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Sep 9 00:55:25.755766 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Sep 9 00:55:25.755811 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Sep 9 00:55:25.755820 kernel: pnp: PnP ACPI: found 8 devices Sep 9 00:55:25.755826 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:55:25.755832 kernel: NET: Registered PF_INET protocol family Sep 9 00:55:25.755838 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:55:25.755844 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 9 00:55:25.755850 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:55:25.755858 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 00:55:25.755864 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 9 00:55:25.755870 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 9 00:55:25.755875 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 00:55:25.755881 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 00:55:25.755887 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:55:25.755893 kernel: NET: Registered PF_XDP protocol family Sep 9 00:55:25.755944 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Sep 9 00:55:25.755997 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 9 00:55:25.756051 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 9 00:55:25.756104 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 9 00:55:25.756155 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 9 00:55:25.756207 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Sep 9 00:55:25.756258 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Sep 9 00:55:25.756335 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Sep 9 00:55:25.756388 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Sep 9 00:55:25.756439 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Sep 9 00:55:25.756493 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Sep 9 00:55:25.756544 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Sep 9 00:55:25.756595 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Sep 9 00:55:25.756659 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Sep 9 00:55:25.756712 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Sep 9 00:55:25.756762 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Sep 9 00:55:25.756813 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 
1000 Sep 9 00:55:25.756866 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Sep 9 00:55:25.756915 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Sep 9 00:55:25.756965 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Sep 9 00:55:25.757016 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Sep 9 00:55:25.757065 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Sep 9 00:55:25.757116 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Sep 9 00:55:25.757166 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Sep 9 00:55:25.757216 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Sep 9 00:55:25.757269 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.757319 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.757370 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.757419 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.757470 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.757520 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.757569 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.757620 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.757688 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.757740 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.757791 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.757843 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.757894 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.757945 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.757996 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.758046 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.758100 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.758150 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.758201 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.758251 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.758302 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.758352 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.758402 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.758452 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.758505 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.758555 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.758605 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't 
assign; no space Sep 9 00:55:25.758973 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759031 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759082 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759132 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759187 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759237 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759292 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759342 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759393 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759442 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759491 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759540 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759591 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759654 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759708 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759758 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759807 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759857 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.759907 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.759956 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.760046 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.760104 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.760154 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.760203 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.760253 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.760308 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.760359 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.760409 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.760457 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.760507 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.760556 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.760609 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.760672 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.760725 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.762660 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.762727 kernel: pci 
0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.762782 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.762836 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.762887 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.762940 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.762994 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763046 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763097 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763148 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763197 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763249 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763299 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763353 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763404 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763456 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763508 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763560 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763610 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763669 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763720 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763775 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Sep 9 00:55:25.763824 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Sep 9 00:55:25.763876 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 9 00:55:25.763928 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Sep 9 00:55:25.763979 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Sep 9 00:55:25.764029 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Sep 9 00:55:25.764078 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:55:25.764132 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Sep 9 00:55:25.764183 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Sep 9 00:55:25.764249 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Sep 9 00:55:25.764309 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Sep 9 00:55:25.764359 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 00:55:25.764418 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Sep 9 00:55:25.764468 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Sep 9 00:55:25.764518 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Sep 9 00:55:25.764573 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:55:25.764625 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Sep 9 00:55:25.764695 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Sep 9 00:55:25.764749 kernel: pci 
0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Sep 9 00:55:25.764801 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:55:25.764851 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Sep 9 00:55:25.764901 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Sep 9 00:55:25.764951 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:55:25.765000 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Sep 9 00:55:25.765051 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Sep 9 00:55:25.765100 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:55:25.765153 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Sep 9 00:55:25.765203 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Sep 9 00:55:25.765253 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:55:25.765307 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Sep 9 00:55:25.765358 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Sep 9 00:55:25.765407 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 00:55:25.765457 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Sep 9 00:55:25.765508 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Sep 9 00:55:25.765560 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:55:25.765614 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Sep 9 00:55:25.765687 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Sep 9 00:55:25.765739 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Sep 9 00:55:25.765789 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Sep 9 00:55:25.765838 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 00:55:25.765890 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Sep 9 00:55:25.765940 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Sep 9 00:55:25.765992 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Sep 9 00:55:25.766042 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:55:25.766093 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Sep 9 00:55:25.766143 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Sep 9 00:55:25.766194 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Sep 9 00:55:25.766262 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:55:25.766329 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Sep 9 00:55:25.766380 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Sep 9 00:55:25.766430 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:55:25.766484 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Sep 9 00:55:25.766534 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Sep 9 00:55:25.766585 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:55:25.766897 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Sep 9 00:55:25.766960 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Sep 9 00:55:25.767013 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:55:25.767065 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Sep 9 00:55:25.767120 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Sep 9 
00:55:25.767170 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 00:55:25.767221 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Sep 9 00:55:25.767271 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Sep 9 00:55:25.767322 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:55:25.767374 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Sep 9 00:55:25.767424 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Sep 9 00:55:25.767473 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Sep 9 00:55:25.767525 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:55:25.767576 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Sep 9 00:55:25.767627 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Sep 9 00:55:25.767719 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Sep 9 00:55:25.767770 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:55:25.767821 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Sep 9 00:55:25.767870 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Sep 9 00:55:25.767919 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Sep 9 00:55:25.767970 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:55:25.768020 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Sep 9 00:55:25.768073 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Sep 9 00:55:25.768122 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:55:25.768173 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Sep 9 00:55:25.768224 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Sep 9 00:55:25.768274 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 00:55:25.768325 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Sep 9 00:55:25.768375 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Sep 9 00:55:25.768427 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:55:25.768479 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Sep 9 00:55:25.768529 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Sep 9 00:55:25.768578 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:55:25.768630 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Sep 9 00:55:25.768690 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Sep 9 00:55:25.768740 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:55:25.768795 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Sep 9 00:55:25.768845 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Sep 9 00:55:25.768895 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Sep 9 00:55:25.768945 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:55:25.768996 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Sep 9 00:55:25.769046 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Sep 9 00:55:25.769095 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Sep 9 00:55:25.769145 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:55:25.769195 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Sep 9 00:55:25.769244 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Sep 9 
00:55:25.769301 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:55:25.769352 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Sep 9 00:55:25.769408 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Sep 9 00:55:25.769469 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 00:55:25.769521 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Sep 9 00:55:25.769573 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Sep 9 00:55:25.769624 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:55:25.769894 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Sep 9 00:55:25.769951 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Sep 9 00:55:25.770003 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:55:25.770056 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Sep 9 00:55:25.770107 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Sep 9 00:55:25.770158 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:55:25.770212 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Sep 9 00:55:25.770267 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Sep 9 00:55:25.770318 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:55:25.770369 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Sep 9 00:55:25.770416 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 9 00:55:25.770461 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 9 00:55:25.770505 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Sep 9 00:55:25.770548 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Sep 9 00:55:25.770600 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Sep 9 00:55:25.770665 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Sep 9 00:55:25.770712 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Sep 9 00:55:25.770757 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Sep 9 00:55:25.770803 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Sep 9 00:55:25.770849 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Sep 9 00:55:25.770894 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Sep 9 00:55:25.770942 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Sep 9 00:55:25.770993 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Sep 9 00:55:25.771040 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Sep 9 00:55:25.771085 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Sep 9 00:55:25.771137 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Sep 9 00:55:25.771183 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Sep 9 00:55:25.771228 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Sep 9 00:55:25.771280 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Sep 9 00:55:25.771326 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Sep 9 00:55:25.771372 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Sep 9 00:55:25.771421 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Sep 9 00:55:25.771468 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Sep 9 00:55:25.771518 kernel: 
pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Sep 9 00:55:25.771565 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Sep 9 00:55:25.771619 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Sep 9 00:55:25.771678 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Sep 9 00:55:25.771728 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Sep 9 00:55:25.771775 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Sep 9 00:55:25.771824 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Sep 9 00:55:25.771871 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Sep 9 00:55:25.771923 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Sep 9 00:55:25.771970 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Sep 9 00:55:25.772015 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Sep 9 00:55:25.772064 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Sep 9 00:55:25.772110 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Sep 9 00:55:25.772155 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Sep 9 00:55:25.772209 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Sep 9 00:55:25.772256 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Sep 9 00:55:25.772310 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Sep 9 00:55:25.772360 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Sep 9 00:55:25.772405 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Sep 9 00:55:25.772455 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Sep 9 00:55:25.772504 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Sep 9 00:55:25.772554 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Sep 9 00:55:25.772601 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Sep 9 00:55:25.773003 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Sep 9 00:55:25.773059 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Sep 9 00:55:25.773112 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Sep 9 00:55:25.773162 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Sep 9 00:55:25.773213 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Sep 9 00:55:25.773260 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Sep 9 00:55:25.773306 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Sep 9 00:55:25.773357 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Sep 9 00:55:25.773403 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Sep 9 00:55:25.773449 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Sep 9 00:55:25.773520 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Sep 9 00:55:25.773773 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Sep 9 00:55:25.773825 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Sep 9 00:55:25.773876 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Sep 9 00:55:25.773924 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Sep 9 00:55:25.773974 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Sep 9 00:55:25.774020 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Sep 9 
00:55:25.774073 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Sep 9 00:55:25.774120 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Sep 9 00:55:25.774169 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Sep 9 00:55:25.774215 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Sep 9 00:55:25.774267 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Sep 9 00:55:25.774313 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Sep 9 00:55:25.774365 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Sep 9 00:55:25.774411 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Sep 9 00:55:25.774457 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Sep 9 00:55:25.774507 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Sep 9 00:55:25.774559 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Sep 9 00:55:25.774630 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Sep 9 00:55:25.774708 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Sep 9 00:55:25.774759 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Sep 9 00:55:25.774811 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Sep 9 00:55:25.774858 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Sep 9 00:55:25.774908 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Sep 9 00:55:25.774956 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Sep 9 00:55:25.775007 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Sep 9 00:55:25.775056 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Sep 9 00:55:25.775107 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Sep 9 00:55:25.775154 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Sep 9 00:55:25.775203 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Sep 9 00:55:25.775249 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Sep 9 00:55:25.775305 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Sep 9 00:55:25.775316 kernel: PCI: CLS 32 bytes, default 64 Sep 9 00:55:25.775323 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 9 00:55:25.775329 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Sep 9 00:55:25.775335 kernel: clocksource: Switched to clocksource tsc Sep 9 00:55:25.775341 kernel: Initialise system trusted keyrings Sep 9 00:55:25.775347 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 00:55:25.775353 kernel: Key type asymmetric registered Sep 9 00:55:25.775359 kernel: Asymmetric key parser 'x509' registered Sep 9 00:55:25.775365 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 00:55:25.775372 kernel: io scheduler mq-deadline registered Sep 9 00:55:25.775378 kernel: io scheduler kyber registered Sep 9 00:55:25.775384 kernel: io scheduler bfq registered Sep 9 00:55:25.775436 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Sep 9 00:55:25.775488 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.775540 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Sep 9 00:55:25.775592 kernel: pcieport 
0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.775660 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Sep 9 00:55:25.775715 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.775766 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Sep 9 00:55:25.775817 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.775869 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Sep 9 00:55:25.775920 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.775972 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Sep 9 00:55:25.776023 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776077 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Sep 9 00:55:25.776129 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776180 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Sep 9 00:55:25.776231 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776295 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Sep 9 00:55:25.776348 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776400 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Sep 9 00:55:25.776454 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776505 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Sep 9 00:55:25.776555 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776606 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Sep 9 00:55:25.776677 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776732 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Sep 9 00:55:25.776782 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776835 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Sep 9 00:55:25.776887 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.776938 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Sep 9 00:55:25.776989 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777039 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Sep 9 00:55:25.777090 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ 
PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777141 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Sep 9 00:55:25.777192 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777246 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Sep 9 00:55:25.777296 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777346 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Sep 9 00:55:25.777396 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777446 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Sep 9 00:55:25.777496 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777547 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Sep 9 00:55:25.777600 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777668 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Sep 9 00:55:25.777721 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777773 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Sep 9 00:55:25.777823 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777873 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Sep 9 00:55:25.777924 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.777974 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Sep 9 00:55:25.778027 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778078 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Sep 9 00:55:25.778128 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778179 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Sep 9 00:55:25.778230 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778281 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Sep 9 00:55:25.778332 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778387 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Sep 9 00:55:25.778438 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778490 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Sep 9 00:55:25.778540 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ 
Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778591 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Sep 9 00:55:25.778655 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778709 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Sep 9 00:55:25.778763 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Sep 9 00:55:25.778775 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:55:25.778781 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:55:25.778788 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:55:25.778795 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Sep 9 00:55:25.778801 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:55:25.778807 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:55:25.778861 kernel: rtc_cmos 00:01: registered as rtc0 Sep 9 00:55:25.778911 kernel: rtc_cmos 00:01: setting system clock to 2025-09-09T00:55:25 UTC (1757379325) Sep 9 00:55:25.778921 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:55:25.778964 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Sep 9 00:55:25.778973 kernel: intel_pstate: CPU model not supported Sep 9 00:55:25.778979 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:55:25.778986 kernel: Segment Routing with IPv6 Sep 9 00:55:25.778992 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:55:25.778998 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:55:25.779006 kernel: Key type dns_resolver registered Sep 9 00:55:25.779012 kernel: IPI shorthand broadcast: enabled Sep 9 00:55:25.779018 kernel: sched_clock: Marking stable (2666066810, 173773190)->(2852990164, -13150164) Sep 9 00:55:25.779025 kernel: registered taskstats version 1 Sep 9 00:55:25.779031 kernel: Loading compiled-in X.509 certificates Sep 9 00:55:25.779037 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f610abecf8d2943295243a86f7aa958542b6f677' Sep 9 00:55:25.779043 kernel: Demotion targets for Node 0: null Sep 9 00:55:25.779049 kernel: Key type .fscrypt registered Sep 9 00:55:25.779056 kernel: Key type fscrypt-provisioning registered Sep 9 00:55:25.779063 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:55:25.779071 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:55:25.779077 kernel: ima: No architecture policies found Sep 9 00:55:25.779084 kernel: clk: Disabling unused clocks Sep 9 00:55:25.779090 kernel: Warning: unable to open an initial console. Sep 9 00:55:25.779097 kernel: Freeing unused kernel image (initmem) memory: 54036K Sep 9 00:55:25.779103 kernel: Write protecting the kernel read-only data: 24576k Sep 9 00:55:25.779109 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Sep 9 00:55:25.779116 kernel: Run /init as init process Sep 9 00:55:25.779123 kernel: with arguments: Sep 9 00:55:25.779129 kernel: /init Sep 9 00:55:25.779135 kernel: with environment: Sep 9 00:55:25.779142 kernel: HOME=/ Sep 9 00:55:25.779148 kernel: TERM=linux Sep 9 00:55:25.779154 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:55:25.779161 systemd[1]: Successfully made /usr/ read-only. 
Sep 9 00:55:25.779169 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:55:25.779178 systemd[1]: Detected virtualization vmware. Sep 9 00:55:25.779184 systemd[1]: Detected architecture x86-64. Sep 9 00:55:25.779190 systemd[1]: Running in initrd. Sep 9 00:55:25.779196 systemd[1]: No hostname configured, using default hostname. Sep 9 00:55:25.779203 systemd[1]: Hostname set to . Sep 9 00:55:25.779209 systemd[1]: Initializing machine ID from random generator. Sep 9 00:55:25.779216 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:55:25.779222 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:55:25.779229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:55:25.779236 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:55:25.779243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:55:25.779250 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:55:25.779258 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:55:25.779265 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:55:25.779275 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:55:25.779282 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:55:25.779289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:55:25.779295 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:55:25.779301 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:55:25.779308 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:55:25.779315 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:55:25.779321 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:55:25.779327 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:55:25.779335 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:55:25.779341 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:55:25.779348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:55:25.779354 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:55:25.779361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:55:25.779367 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:55:25.779373 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:55:25.779380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:55:25.779386 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 9 00:55:25.779394 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:55:25.779400 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:55:25.779406 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:55:25.779413 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:55:25.779419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:55:25.779426 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:55:25.779433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:55:25.779455 systemd-journald[243]: Collecting audit messages is disabled. Sep 9 00:55:25.779474 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:55:25.779481 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:55:25.779487 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:55:25.779494 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:55:25.779501 kernel: Bridge firewalling registered Sep 9 00:55:25.779507 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:55:25.779514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:55:25.779520 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:55:25.779528 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:55:25.779535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:55:25.779541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:55:25.779548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:55:25.779555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:55:25.779561 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:55:25.779568 systemd-journald[243]: Journal started Sep 9 00:55:25.779584 systemd-journald[243]: Runtime Journal (/run/log/journal/fce5fd948dc94d52b5af7bd43818a91d) is 4.8M, max 38.8M, 34M free. Sep 9 00:55:25.719259 systemd-modules-load[244]: Inserted module 'overlay' Sep 9 00:55:25.780684 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:55:25.744850 systemd-modules-load[244]: Inserted module 'br_netfilter' Sep 9 00:55:25.787985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:55:25.794565 systemd-tmpfiles[281]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Sep 9 00:55:25.796786 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=34d704fb26999c645221adf783007b0add8c1672b7c5860358d83aa19335714a Sep 9 00:55:25.798104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:55:25.799352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:55:25.828872 systemd-resolved[298]: Positive Trust Anchors: Sep 9 00:55:25.829081 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:55:25.829105 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:55:25.831433 systemd-resolved[298]: Defaulting to hostname 'linux'. Sep 9 00:55:25.832156 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:55:25.832443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:55:25.851666 kernel: SCSI subsystem initialized Sep 9 00:55:25.868656 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:55:25.877656 kernel: iscsi: registered transport (tcp) Sep 9 00:55:25.900028 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:55:25.900076 kernel: QLogic iSCSI HBA Driver Sep 9 00:55:25.912205 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:55:25.924060 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:55:25.925213 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:55:25.948789 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:55:25.950708 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:55:25.992669 kernel: raid6: avx2x4 gen() 46161 MB/s Sep 9 00:55:26.009658 kernel: raid6: avx2x2 gen() 52229 MB/s Sep 9 00:55:26.026859 kernel: raid6: avx2x1 gen() 44736 MB/s Sep 9 00:55:26.026886 kernel: raid6: using algorithm avx2x2 gen() 52229 MB/s Sep 9 00:55:26.044878 kernel: raid6: .... xor() 31778 MB/s, rmw enabled Sep 9 00:55:26.044928 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:55:26.058652 kernel: xor: automatically using best checksumming function avx Sep 9 00:55:26.166660 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:55:26.169803 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:55:26.170912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:55:26.188256 systemd-udevd[493]: Using default interface naming scheme 'v255'. 
Sep 9 00:55:26.191633 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:55:26.193103 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:55:26.212058 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Sep 9 00:55:26.226395 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:55:26.227493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:55:26.298598 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:55:26.300002 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:55:26.372825 kernel: VMware PVSCSI driver - version 1.0.7.0-k Sep 9 00:55:26.372864 kernel: vmw_pvscsi: using 64bit dma Sep 9 00:55:26.373940 kernel: vmw_pvscsi: max_id: 16 Sep 9 00:55:26.373957 kernel: vmw_pvscsi: setting ring_pages to 8 Sep 9 00:55:26.379649 kernel: vmw_pvscsi: enabling reqCallThreshold Sep 9 00:55:26.379670 kernel: vmw_pvscsi: driver-based request coalescing enabled Sep 9 00:55:26.379678 kernel: vmw_pvscsi: using MSI-X Sep 9 00:55:26.389646 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Sep 9 00:55:26.393652 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Sep 9 00:55:26.395649 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Sep 9 00:55:26.402513 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Sep 9 00:55:26.407842 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Sep 9 00:55:26.409263 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Sep 9 00:55:26.409344 kernel: libata version 3.00 loaded. Sep 9 00:55:26.416650 kernel: ata_piix 0000:00:07.1: version 2.13 Sep 9 00:55:26.416766 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Sep 9 00:55:26.418649 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:55:26.420783 kernel: scsi host1: ata_piix Sep 9 00:55:26.422790 (udev-worker)[553]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 9 00:55:26.428184 kernel: scsi host2: ata_piix Sep 9 00:55:26.428361 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Sep 9 00:55:26.428371 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Sep 9 00:55:26.428379 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 00:55:26.429815 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:55:26.429889 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:55:26.430311 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:55:26.431591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 00:55:26.436800 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Sep 9 00:55:26.436906 kernel: AES CTR mode by8 optimization enabled Sep 9 00:55:26.437751 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 9 00:55:26.437824 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Sep 9 00:55:26.439990 kernel: sd 0:0:0:0: [sda] Cache data unavailable Sep 9 00:55:26.440068 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Sep 9 00:55:26.454914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 00:55:26.454950 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 9 00:55:26.460514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:55:26.586658 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Sep 9 00:55:26.591689 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Sep 9 00:55:26.621072 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Sep 9 00:55:26.621342 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:55:26.639209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Sep 9 00:55:26.642646 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:55:26.644594 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Sep 9 00:55:26.649895 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 9 00:55:26.654222 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Sep 9 00:55:26.654352 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Sep 9 00:55:26.655018 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:55:26.690651 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 00:55:26.913898 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:55:26.914325 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:55:26.914494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:55:26.914768 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:55:26.915581 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:55:26.928718 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:55:27.703606 disk-uuid[653]: The operation has completed successfully. Sep 9 00:55:27.703806 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 9 00:55:27.743612 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:55:27.743679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:55:27.754206 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:55:27.764802 sh[683]: Success Sep 9 00:55:27.778695 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:55:27.778736 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:55:27.779864 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:55:27.786715 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 9 00:55:27.830263 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:55:27.832680 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 9 00:55:27.840753 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 00:55:27.851697 kernel: BTRFS: device fsid eee400a1-88b9-480b-9c0c-54d171140f9a devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (695) Sep 9 00:55:27.853762 kernel: BTRFS info (device dm-0): first mount of filesystem eee400a1-88b9-480b-9c0c-54d171140f9a Sep 9 00:55:27.853784 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:55:27.860851 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 9 00:55:27.860883 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:55:27.860891 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:55:27.863726 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:55:27.864079 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:55:27.864696 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Sep 9 00:55:27.865161 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:55:27.898653 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (718) Sep 9 00:55:27.903475 kernel: BTRFS info (device sda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:55:27.903519 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:55:27.907888 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 00:55:27.907931 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 00:55:27.911672 kernel: BTRFS info (device sda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:55:27.912060 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:55:27.912818 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 00:55:27.951329 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 9 00:55:27.953383 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:55:28.024518 ignition[737]: Ignition 2.21.0 Sep 9 00:55:28.024914 ignition[737]: Stage: fetch-offline Sep 9 00:55:28.025048 ignition[737]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:55:28.025175 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:55:28.025358 ignition[737]: parsed url from cmdline: "" Sep 9 00:55:28.025388 ignition[737]: no config URL provided Sep 9 00:55:28.025498 ignition[737]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:55:28.025633 ignition[737]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:55:28.026129 ignition[737]: config successfully fetched Sep 9 00:55:28.026147 ignition[737]: parsing config with SHA512: bf51508d956dd67cbb7304ff03e7880317f99567ca5686488de9731ccac9933bd196006a3e663c8692238d9af177db2f765a6f2477bde5c80446117aa350f6b6 Sep 9 00:55:28.030935 unknown[737]: fetched base config from "system" Sep 9 00:55:28.030941 unknown[737]: fetched user config from "vmware" Sep 9 00:55:28.031162 ignition[737]: fetch-offline: fetch-offline passed Sep 9 00:55:28.031194 ignition[737]: Ignition finished successfully Sep 9 00:55:28.032361 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:55:28.047392 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
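For reference on the fetch-offline result above ("fetched user config from 'vmware'"): on this platform Ignition reads its config from VMX guestinfo properties. A minimal sketch of how such a config is supplied, with placeholder values rather than the config actually parsed in this boot:

    # Added to the VM's .vmx file, or as extraConfig through the vSphere API
    guestinfo.ignition.config.data = "<base64-encoded config.ign>"
    guestinfo.ignition.config.data.encoding = "base64"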
Sep 9 00:55:28.048655 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:55:28.069640 systemd-networkd[874]: lo: Link UP Sep 9 00:55:28.069647 systemd-networkd[874]: lo: Gained carrier Sep 9 00:55:28.070380 systemd-networkd[874]: Enumeration completed Sep 9 00:55:28.070524 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:55:28.070804 systemd-networkd[874]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Sep 9 00:55:28.070815 systemd[1]: Reached target network.target - Network. Sep 9 00:55:28.075319 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 9 00:55:28.075420 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 9 00:55:28.071067 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:55:28.072744 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:55:28.073935 systemd-networkd[874]: ens192: Link UP Sep 9 00:55:28.073937 systemd-networkd[874]: ens192: Gained carrier Sep 9 00:55:28.089265 ignition[877]: Ignition 2.21.0 Sep 9 00:55:28.089501 ignition[877]: Stage: kargs Sep 9 00:55:28.089610 ignition[877]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:55:28.089617 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:55:28.090279 ignition[877]: kargs: kargs passed Sep 9 00:55:28.090307 ignition[877]: Ignition finished successfully Sep 9 00:55:28.091666 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:55:28.092370 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 00:55:28.108164 ignition[884]: Ignition 2.21.0 Sep 9 00:55:28.108173 ignition[884]: Stage: disks Sep 9 00:55:28.108253 ignition[884]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:55:28.108259 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:55:28.110160 ignition[884]: disks: disks passed Sep 9 00:55:28.110199 ignition[884]: Ignition finished successfully Sep 9 00:55:28.111161 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:55:28.111497 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:55:28.111756 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:55:28.112006 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:55:28.112244 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:55:28.112505 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:55:28.113218 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:55:28.131694 systemd-fsck[893]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 9 00:55:28.133001 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:55:28.134227 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:55:28.166828 systemd-resolved[298]: Detected conflict on linux IN A 139.178.70.105 Sep 9 00:55:28.166837 systemd-resolved[298]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Sep 9 00:55:28.226646 kernel: EXT4-fs (sda9): mounted filesystem 91c315eb-0fc3-4e95-bf9b-06acc06be6bc r/w with ordered data mode. Quota mode: none. Sep 9 00:55:28.226645 systemd[1]: Mounted sysroot.mount - /sysroot. 
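The 10-dracut-cmdline-99.network unit named above is generated inside the initrd from kernel command-line network arguments. A hand-written equivalent for the real root, assuming plain DHCP on ens192 (illustrative path and contents, not the generated unit itself):

    # /etc/systemd/network/10-ens192.network (hypothetical)
    [Match]
    Name=ens192

    [Network]
    DHCP=yes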
Sep 9 00:55:28.226983 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:55:28.230909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:55:28.232673 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:55:28.233048 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:55:28.233245 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:55:28.233433 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:55:28.245593 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:55:28.246416 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:55:28.254785 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (901) Sep 9 00:55:28.257357 kernel: BTRFS info (device sda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:55:28.257378 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:55:28.261360 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 00:55:28.261389 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 00:55:28.262606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:55:28.282031 initrd-setup-root[925]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:55:28.285123 initrd-setup-root[932]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:55:28.287580 initrd-setup-root[939]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:55:28.289354 initrd-setup-root[946]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:55:28.380160 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:55:28.380816 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:55:28.382131 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:55:28.397655 kernel: BTRFS info (device sda6): last unmount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:55:28.414114 ignition[1014]: INFO : Ignition 2.21.0 Sep 9 00:55:28.414114 ignition[1014]: INFO : Stage: mount Sep 9 00:55:28.414535 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:55:28.414535 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:55:28.414850 ignition[1014]: INFO : mount: mount passed Sep 9 00:55:28.414850 ignition[1014]: INFO : Ignition finished successfully Sep 9 00:55:28.415694 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:55:28.415915 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:55:28.416771 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:55:28.851230 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:55:28.852488 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 9 00:55:28.880737 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1025) Sep 9 00:55:28.880774 kernel: BTRFS info (device sda6): first mount of filesystem df6b516e-a914-4199-9bb5-7fc056237ce5 Sep 9 00:55:28.883334 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:55:28.887448 kernel: BTRFS info (device sda6): enabling ssd optimizations Sep 9 00:55:28.887475 kernel: BTRFS info (device sda6): enabling free space tree Sep 9 00:55:28.888809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:55:28.908676 ignition[1041]: INFO : Ignition 2.21.0 Sep 9 00:55:28.908676 ignition[1041]: INFO : Stage: files Sep 9 00:55:28.909035 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:55:28.909035 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:55:28.909628 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:55:28.911184 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:55:28.911184 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:55:28.912654 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:55:28.912923 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:55:28.913218 unknown[1041]: wrote ssh authorized keys file for user: core Sep 9 00:55:28.913471 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:55:28.915612 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 00:55:28.915612 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 00:55:28.960447 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:55:29.473899 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 00:55:29.474384 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:55:29.474384 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 00:55:29.749873 systemd-networkd[874]: ens192: Gained IPv6LL Sep 9 00:55:29.793801 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:55:30.041404 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 00:55:30.041404 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:55:30.042318 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:55:30.042318 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:55:30.042318 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 
00:55:30.042318 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:55:30.042318 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:55:30.042318 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:55:30.042318 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:55:30.043731 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:55:30.043971 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:55:30.043971 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:55:30.046093 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:55:30.046386 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:55:30.046386 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 00:55:30.679082 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 00:55:31.021497 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 00:55:31.021497 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 9 00:55:31.022414 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Sep 9 00:55:31.022414 ignition[1041]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Sep 9 00:55:31.023129 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:55:31.023432 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:55:31.023432 ignition[1041]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Sep 9 00:55:31.023432 ignition[1041]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Sep 9 00:55:31.024092 ignition[1041]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:55:31.024092 ignition[1041]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:55:31.024092 ignition[1041]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Sep 9 00:55:31.024092 ignition[1041]: INFO : files: op(11): 
[started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:55:31.494454 ignition[1041]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:55:31.497108 ignition[1041]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:55:31.497330 ignition[1041]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:55:31.497330 ignition[1041]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:55:31.497330 ignition[1041]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:55:31.497330 ignition[1041]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:55:31.499140 ignition[1041]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:55:31.499140 ignition[1041]: INFO : files: files passed Sep 9 00:55:31.499140 ignition[1041]: INFO : Ignition finished successfully Sep 9 00:55:31.498396 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:55:31.500726 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:55:31.501416 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:55:31.529303 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:55:31.529303 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:55:31.530896 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:55:31.531072 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:55:31.531482 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:55:31.532210 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:55:31.532891 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:55:31.533802 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:55:31.568995 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:55:31.569095 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:55:31.569403 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:55:31.569552 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:55:31.569802 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:55:31.570331 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:55:31.590009 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:55:31.590974 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:55:31.607060 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:55:31.607297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:55:31.607613 systemd[1]: Stopped target timers.target - Timer Units. 
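The preset handling in the files stage above (op(11) disabling coreos-metadata.service, op(13) enabling prepare-helm.service) is equivalent to installing a systemd preset file. A minimal sketch, with a hypothetical file name:

    # /etc/systemd/system-preset/20-ignition.preset (illustrative)
    enable prepare-helm.service
    disable coreos-metadata.service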
Sep 9 00:55:31.607900 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:55:31.607984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:55:31.608458 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:55:31.608707 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:55:31.608918 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:55:31.609183 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:55:31.609459 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:55:31.609783 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:55:31.610054 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:55:31.610337 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:55:31.610649 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:55:31.610930 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:55:31.611201 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:55:31.611419 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:55:31.611511 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:55:31.611875 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:55:31.612228 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:55:31.612429 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:55:31.612485 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:55:31.612743 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:55:31.612824 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:55:31.613202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:55:31.613286 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:55:31.613601 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:55:31.613777 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:55:31.613833 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:55:31.614051 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:55:31.614289 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:55:31.614509 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:55:31.614568 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:55:31.614772 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:55:31.614823 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:55:31.615042 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:55:31.615125 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:55:31.615434 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:55:31.615514 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:55:31.617739 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:55:31.617887 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Sep 9 00:55:31.617986 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:55:31.618727 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:55:31.618847 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:55:31.618918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:55:31.619095 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:55:31.619156 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:55:31.622490 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:55:31.628680 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:55:31.636220 ignition[1098]: INFO : Ignition 2.21.0 Sep 9 00:55:31.636220 ignition[1098]: INFO : Stage: umount Sep 9 00:55:31.636579 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:55:31.636579 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Sep 9 00:55:31.636875 ignition[1098]: INFO : umount: umount passed Sep 9 00:55:31.636966 ignition[1098]: INFO : Ignition finished successfully Sep 9 00:55:31.637604 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:55:31.637702 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:55:31.637977 systemd[1]: Stopped target network.target - Network. Sep 9 00:55:31.638097 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:55:31.638125 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:55:31.638264 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:55:31.638286 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:55:31.638454 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:55:31.638481 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:55:31.638583 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:55:31.638612 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:55:31.638934 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:55:31.639316 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:55:31.641479 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:55:31.641562 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:55:31.643207 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 00:55:31.643358 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:55:31.643386 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:55:31.644530 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:55:31.650525 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:55:31.650616 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:55:31.651614 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 00:55:31.651727 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 00:55:31.651881 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:55:31.651897 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Sep 9 00:55:31.652871 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:55:31.652994 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:55:31.653028 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:55:31.653191 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Sep 9 00:55:31.653225 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Sep 9 00:55:31.653373 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:55:31.653403 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:55:31.654775 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:55:31.654801 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:55:31.655073 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:55:31.656497 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:55:31.665847 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:55:31.671975 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:55:31.672105 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:55:31.672612 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:55:31.672752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:55:31.672999 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:55:31.673025 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:55:31.673905 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:55:31.673942 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:55:31.674201 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:55:31.674238 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:55:31.674527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:55:31.674553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:55:31.675337 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:55:31.675446 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 00:55:31.675471 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:55:31.675901 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:55:31.675924 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:55:31.676405 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:55:31.676430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:55:31.677354 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:55:31.677398 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:55:31.688540 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:55:31.688607 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:55:32.078601 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:55:32.078702 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Sep 9 00:55:32.079112 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:55:32.079230 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:55:32.079262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:55:32.079854 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:55:32.099087 systemd[1]: Switching root. Sep 9 00:55:32.128744 systemd-journald[243]: Journal stopped Sep 9 00:55:33.991484 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). Sep 9 00:55:33.991507 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:55:33.991515 kernel: SELinux: policy capability open_perms=1 Sep 9 00:55:33.991521 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:55:33.991526 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:55:33.991532 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:55:33.991538 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:55:33.991544 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:55:33.991549 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:55:33.991555 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 00:55:33.991560 kernel: audit: type=1403 audit(1757379332.737:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:55:33.991567 systemd[1]: Successfully loaded SELinux policy in 52.573ms. Sep 9 00:55:33.991575 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.981ms. Sep 9 00:55:33.991582 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:55:33.991589 systemd[1]: Detected virtualization vmware. Sep 9 00:55:33.991595 systemd[1]: Detected architecture x86-64. Sep 9 00:55:33.991603 systemd[1]: Detected first boot. Sep 9 00:55:33.991610 systemd[1]: Initializing machine ID from random generator. Sep 9 00:55:33.991616 zram_generator::config[1142]: No configuration found. Sep 9 00:55:33.991718 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Sep 9 00:55:33.991729 kernel: Guest personality initialized and is active Sep 9 00:55:33.991736 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 00:55:33.991742 kernel: Initialized host personality Sep 9 00:55:33.991750 kernel: NET: Registered PF_VSOCK protocol family Sep 9 00:55:33.991757 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:55:33.991764 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:55:33.991771 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Sep 9 00:55:33.991778 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 00:55:33.991784 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:55:33.991790 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:55:33.991798 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
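The "Ignoring unknown escape sequences" warning above is triggered by the backslashes (\K, \d) inside the ExecStart line of coreos-metadata.service. Written out as a standalone shell sketch, assuming the same ens192 interface and 10.x private range, and with OUTPUT standing in for the environment-file path the unit supplies:

    #!/bin/sh
    # Extract the private (10.x) and non-10.x IPv4 addresses on ens192 into an env file
    PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po 'inet \K[\d.]+')
    PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po 'inet \K[\d.]+')
    printf 'COREOS_CUSTOM_PRIVATE_IPV4=%s\nCOREOS_CUSTOM_PUBLIC_IPV4=%s\n' \
        "$PRIVATE_IPV4" "$PUBLIC_IPV4" > "${OUTPUT}"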
Sep 9 00:55:33.991805 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:55:33.991812 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:55:33.991819 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:55:33.991826 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:55:33.991832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:55:33.991839 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:55:33.991847 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:55:33.991854 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:55:33.991861 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:55:33.991871 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:55:33.991885 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:55:33.991893 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:55:33.991900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:55:33.991907 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:55:33.991915 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:55:33.991922 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:55:33.991929 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:55:33.991937 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:55:33.991944 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:55:33.991950 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:55:33.991957 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:55:33.991964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:55:33.991972 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:55:33.991981 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:55:33.991990 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:55:33.991997 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:55:33.992004 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:55:33.992013 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 00:55:33.992019 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:55:33.992026 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:55:33.992033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:55:33.992040 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:55:33.992047 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:55:33.992054 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:55:33.992060 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 9 00:55:33.992068 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:33.992075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:55:33.992082 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:55:33.992090 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:55:33.992096 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:55:33.992103 systemd[1]: Reached target machines.target - Containers. Sep 9 00:55:33.992110 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:55:33.992117 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Sep 9 00:55:33.992125 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:55:33.992132 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:55:33.992139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:55:33.992146 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:55:33.992152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:55:33.992159 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:55:33.992166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:55:33.992173 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:55:33.992181 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:55:33.992188 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:55:33.992195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:55:33.992201 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:55:33.992208 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:55:33.992215 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:55:33.992222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:55:33.992229 kernel: loop: module loaded Sep 9 00:55:33.992236 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:55:33.992244 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:55:33.992251 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 00:55:33.992258 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:55:33.992265 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:55:33.992271 systemd[1]: Stopped verity-setup.service. Sep 9 00:55:33.992278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:33.992285 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 9 00:55:33.992292 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:55:33.992301 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:55:33.992308 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:55:33.992315 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:55:33.992321 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:55:33.992328 kernel: fuse: init (API version 7.41) Sep 9 00:55:33.992347 systemd-journald[1235]: Collecting audit messages is disabled. Sep 9 00:55:33.992365 systemd-journald[1235]: Journal started Sep 9 00:55:33.992380 systemd-journald[1235]: Runtime Journal (/run/log/journal/883e86be7bf1463387313be1f0bb4ce0) is 4.8M, max 38.8M, 34M free. Sep 9 00:55:33.832019 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:55:33.844886 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 9 00:55:33.993664 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:55:33.845133 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:55:33.993931 jq[1212]: true Sep 9 00:55:33.995672 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:55:33.995951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:55:33.996667 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:55:33.996929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:55:33.997039 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:55:33.997579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:55:33.998479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:55:33.998761 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:55:33.998865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:55:33.999118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:55:33.999383 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:55:34.005000 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:55:34.012925 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:55:34.013287 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:55:34.014633 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:55:34.025786 jq[1247]: true Sep 9 00:55:34.026755 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:55:34.030644 kernel: ACPI: bus type drm_connector registered Sep 9 00:55:34.036689 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:55:34.036823 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:55:34.036844 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:55:34.037508 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 00:55:34.038785 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:55:34.038946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 9 00:55:34.041758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:55:34.043815 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:55:34.043948 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:55:34.051509 systemd-journald[1235]: Time spent on flushing to /var/log/journal/883e86be7bf1463387313be1f0bb4ce0 is 38.647ms for 1754 entries. Sep 9 00:55:34.051509 systemd-journald[1235]: System Journal (/var/log/journal/883e86be7bf1463387313be1f0bb4ce0) is 8M, max 584.8M, 576.8M free. Sep 9 00:55:34.095956 systemd-journald[1235]: Received client request to flush runtime journal. Sep 9 00:55:34.095979 kernel: loop0: detected capacity change from 0 to 128016 Sep 9 00:55:34.049091 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:55:34.049310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:55:34.053161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:55:34.058530 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:55:34.064204 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:55:34.064491 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:55:34.065076 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:55:34.065370 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 00:55:34.068085 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:55:34.068287 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:55:34.074117 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:55:34.097536 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:55:34.104237 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:55:34.104542 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:55:34.106562 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 00:55:34.118965 ignition[1284]: Ignition 2.21.0 Sep 9 00:55:34.119158 ignition[1284]: deleting config from guestinfo properties Sep 9 00:55:34.142230 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:55:34.166487 ignition[1284]: Successfully deleted config Sep 9 00:55:34.168198 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Sep 9 00:55:34.173798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:55:34.189093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 00:55:34.223571 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:55:34.227974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:55:34.228843 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:55:34.246660 kernel: loop1: detected capacity change from 0 to 224512 Sep 9 00:55:34.261987 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. 
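At this point journald has flushed the runtime journal in /run/log/journal to persistent storage under /var/log/journal (the flush statistics are logged just above). Standard journalctl invocations for inspecting that state, shown for illustration:

    journalctl --disk-usage                    # combined size of runtime and persistent journals
    journalctl --flush                         # request an explicit flush of /run/log/journal to /var/log/journal
    journalctl -b -u ignition-files.service    # this boot's entries for a single unit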
Sep 9 00:55:34.262614 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Sep 9 00:55:34.266120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:55:34.278654 kernel: loop2: detected capacity change from 0 to 111000 Sep 9 00:55:34.316660 kernel: loop3: detected capacity change from 0 to 2960 Sep 9 00:55:34.346849 kernel: loop4: detected capacity change from 0 to 128016 Sep 9 00:55:34.366656 kernel: loop5: detected capacity change from 0 to 224512 Sep 9 00:55:34.415658 kernel: loop6: detected capacity change from 0 to 111000 Sep 9 00:55:34.444959 kernel: loop7: detected capacity change from 0 to 2960 Sep 9 00:55:34.480298 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Sep 9 00:55:34.480852 (sd-merge)[1318]: Merged extensions into '/usr'. Sep 9 00:55:34.486908 systemd[1]: Reload requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:55:34.487006 systemd[1]: Reloading... Sep 9 00:55:34.546106 zram_generator::config[1345]: No configuration found. Sep 9 00:55:34.638066 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:55:34.684670 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:55:34.685090 systemd[1]: Reloading finished in 197 ms. Sep 9 00:55:34.695999 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:55:34.705754 systemd[1]: Starting ensure-sysext.service... Sep 9 00:55:34.707339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:55:34.730878 systemd[1]: Reload requested from client PID 1400 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:55:34.730889 systemd[1]: Reloading... Sep 9 00:55:34.737417 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 00:55:34.737437 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 00:55:34.737616 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:55:34.737789 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:55:34.738367 systemd-tmpfiles[1401]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:55:34.739011 systemd-tmpfiles[1401]: ACLs are not supported, ignoring. Sep 9 00:55:34.739049 systemd-tmpfiles[1401]: ACLs are not supported, ignoring. Sep 9 00:55:34.748321 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:55:34.748327 systemd-tmpfiles[1401]: Skipping /boot Sep 9 00:55:34.754707 systemd-tmpfiles[1401]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:55:34.754715 systemd-tmpfiles[1401]: Skipping /boot Sep 9 00:55:34.775648 zram_generator::config[1425]: No configuration found. Sep 9 00:55:34.861362 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:55:34.907476 systemd[1]: Reloading finished in 176 ms. 
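The (sd-merge) lines above show systemd-sysext overlaying the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-vmware) onto /usr, followed by the daemon reload. The merged state can be inspected with the standard systemd-sysext verbs; commands shown for illustration:

    systemd-sysext status     # which hierarchies are overlaid, and by which extensions
    systemd-sysext list       # extension images found in /etc/extensions, /run/extensions, /var/lib/extensions
    systemd-sysext refresh    # unmerge and re-merge after adding or removing an image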
Sep 9 00:55:34.915487 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:55:34.921762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:55:34.928782 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:55:34.932320 ldconfig[1278]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:55:34.932884 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:55:34.934021 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:55:34.938800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:55:34.942510 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:55:34.945079 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:55:34.946035 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:55:34.951349 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:34.955291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:55:34.956166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:55:34.959732 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:55:34.959916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:55:34.960010 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:55:34.966821 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:55:34.966949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:34.969011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:55:34.969175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:55:34.969644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:55:34.970798 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:55:34.970933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:55:34.972972 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:34.975348 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:55:34.983289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:55:34.983465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:55:34.983535 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 9 00:55:34.983599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:34.987134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:55:34.987290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:55:34.992726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:34.997871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:55:35.000815 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:55:35.001025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:55:35.001095 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 00:55:35.001202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:55:35.002668 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:55:35.008671 systemd[1]: Finished ensure-sysext.service. Sep 9 00:55:35.010626 systemd-udevd[1492]: Using default interface naming scheme 'v255'. Sep 9 00:55:35.012991 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:55:35.013376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:55:35.013696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:55:35.014076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:55:35.014562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:55:35.014837 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:55:35.014960 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:55:35.015598 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:55:35.016187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:55:35.017128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:55:35.017405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:55:35.021792 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:55:35.082766 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:55:35.086567 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:55:35.099339 augenrules[1539]: No rules Sep 9 00:55:35.099459 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:55:35.099621 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:55:35.111978 systemd-resolved[1490]: Positive Trust Anchors: Sep 9 00:55:35.112330 systemd-resolved[1490]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:55:35.112392 systemd-resolved[1490]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:55:35.114501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:55:35.116626 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:55:35.117305 systemd-resolved[1490]: Defaulting to hostname 'linux'. Sep 9 00:55:35.117923 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:55:35.120567 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:55:35.120767 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:55:35.122314 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:55:35.122703 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:55:35.141688 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:55:35.141936 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:55:35.141963 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:55:35.142322 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:55:35.142453 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:55:35.142771 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 00:55:35.142961 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:55:35.143162 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:55:35.143502 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:55:35.143628 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:55:35.143678 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:55:35.143767 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:55:35.144813 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:55:35.146594 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:55:35.149573 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 00:55:35.150311 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 00:55:35.150731 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 00:55:35.154196 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
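The record systemd-resolved lists as a positive trust anchor above is the DNSSEC root key (key tag 20326); per dnssec-trust-anchors.d(5), local anchors can be added under /etc/dnssec-trust-anchors.d/ in DS presentation format, with negative anchors (one domain per line) in *.negative files. A hedged sketch, with an illustrative file name:

    mkdir -p /etc/dnssec-trust-anchors.d
    cat <<'EOF' > /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
    EOF
    systemctl restart systemd-resolved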
Sep 9 00:55:35.154526 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 00:55:35.155625 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:55:35.157032 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:55:35.157192 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:55:35.157495 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:55:35.157514 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:55:35.160319 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:55:35.162767 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:55:35.164954 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:55:35.167428 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:55:35.168686 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:55:35.173978 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 00:55:35.176803 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:55:35.182217 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:55:35.188815 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:55:35.192578 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:55:35.197466 jq[1577]: false Sep 9 00:55:35.197889 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:55:35.199228 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:55:35.200075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:55:35.204175 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:55:35.208046 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:55:35.213769 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Sep 9 00:55:35.214100 oslogin_cache_refresh[1579]: Refreshing passwd entry cache Sep 9 00:55:35.214873 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache Sep 9 00:55:35.215302 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:55:35.215573 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:55:35.215737 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:55:35.215888 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:55:35.216013 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 9 00:55:35.216665 extend-filesystems[1578]: Found /dev/sda6 Sep 9 00:55:35.219225 extend-filesystems[1578]: Found /dev/sda9 Sep 9 00:55:35.224290 extend-filesystems[1578]: Checking size of /dev/sda9 Sep 9 00:55:35.225449 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting Sep 9 00:55:35.225511 oslogin_cache_refresh[1579]: Failure getting users, quitting Sep 9 00:55:35.226560 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:55:35.226560 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache Sep 9 00:55:35.226560 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting Sep 9 00:55:35.226560 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:55:35.225678 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 00:55:35.225710 oslogin_cache_refresh[1579]: Refreshing group entry cache Sep 9 00:55:35.226025 oslogin_cache_refresh[1579]: Failure getting groups, quitting Sep 9 00:55:35.226029 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 00:55:35.226808 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 00:55:35.230743 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 00:55:35.235859 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:55:35.236022 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:55:35.256574 jq[1595]: true Sep 9 00:55:35.259321 dbus-daemon[1573]: [system] SELinux support is enabled Sep 9 00:55:35.261003 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:55:35.264116 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:55:35.264137 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:55:35.264689 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:55:35.264704 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:55:35.265268 update_engine[1591]: I20250909 00:55:35.265206 1591 main.cc:92] Flatcar Update Engine starting Sep 9 00:55:35.272992 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:55:35.273389 update_engine[1591]: I20250909 00:55:35.273182 1591 update_check_scheduler.cc:74] Next update check in 5m27s Sep 9 00:55:35.281700 extend-filesystems[1578]: Old size kept for /dev/sda9 Sep 9 00:55:35.284161 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:55:35.284512 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:55:35.291698 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:55:35.295104 tar[1600]: linux-amd64/LICENSE Sep 9 00:55:35.295104 tar[1600]: linux-amd64/helm Sep 9 00:55:35.302662 jq[1613]: true Sep 9 00:55:35.303314 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. 
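update-engine and locksmithd above are Flatcar's update checker and reboot coordinator (here starting with the "reboot" strategy). Their state can be inspected with the stock client tools; a hedged sketch of typical usage, not output captured from this host:

    # current update state and time of the next check
    update_engine_client -status
    # reboot strategy and any held locks
    locksmithctl status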
Sep 9 00:55:35.304211 systemd-networkd[1545]: lo: Link UP Sep 9 00:55:35.304366 systemd-networkd[1545]: lo: Gained carrier Sep 9 00:55:35.305494 systemd-networkd[1545]: Enumeration completed Sep 9 00:55:35.310664 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Sep 9 00:55:35.310860 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:55:35.311015 systemd[1]: Reached target network.target - Network. Sep 9 00:55:35.313618 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:55:35.318768 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 00:55:35.327998 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:55:35.355449 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 00:55:35.361178 (ntainerd)[1638]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:55:35.371971 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Sep 9 00:55:35.392210 unknown[1622]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Sep 9 00:55:35.409182 unknown[1622]: Core dump limit set to -1 Sep 9 00:55:35.424332 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:55:35.432627 systemd-logind[1590]: New seat seat0. Sep 9 00:55:35.434412 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:55:35.513410 systemd-networkd[1545]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Sep 9 00:55:35.515693 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Sep 9 00:55:35.517952 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Sep 9 00:55:35.518233 systemd-networkd[1545]: ens192: Link UP Sep 9 00:55:35.518563 systemd-networkd[1545]: ens192: Gained carrier Sep 9 00:55:35.519883 bash[1648]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:55:35.521664 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:55:35.523988 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:55:35.525819 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Sep 9 00:55:35.596051 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:55:35.609746 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:55:35.631434 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:55:35.638041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:55:35.640716 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:55:35.671493 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:55:35.671690 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:55:35.674217 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
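ens192 above is configured from /etc/systemd/network/00-vmware.network shipped in the image; its contents are not shown in this log, but a minimal DHCP .network unit for the same interface would look like the following sketch (file name and settings illustrative):

    cat <<'EOF' > /etc/systemd/network/10-ens192.network
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    EOF
    networkctl reload && networkctl status ens192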
Sep 9 00:55:35.679776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:55:35.697538 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:55:35.705918 containerd[1638]: time="2025-09-09T00:55:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 00:55:35.706378 containerd[1638]: time="2025-09-09T00:55:35.706358520Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 00:55:35.711003 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:55:35.714200 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:55:35.717823 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:55:35.718048 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:55:35.724871 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Sep 9 00:55:35.731923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736103675Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.398µs" Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736130426Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736144743Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736260377Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736277966Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736302164Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736350228Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736357833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736494064Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736513848Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:55:35.736780 containerd[1638]: time="2025-09-09T00:55:35.736520383Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 00:55:35.736780 containerd[1638]: 
time="2025-09-09T00:55:35.736525165Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 00:55:35.736970 containerd[1638]: time="2025-09-09T00:55:35.736585956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 00:55:35.741582 containerd[1638]: time="2025-09-09T00:55:35.741218530Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:55:35.741582 containerd[1638]: time="2025-09-09T00:55:35.741249875Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 00:55:35.741582 containerd[1638]: time="2025-09-09T00:55:35.741258159Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 00:55:35.741582 containerd[1638]: time="2025-09-09T00:55:35.741284909Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 00:55:35.741582 containerd[1638]: time="2025-09-09T00:55:35.741436388Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 00:55:35.741582 containerd[1638]: time="2025-09-09T00:55:35.741478123Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:55:35.743747 containerd[1638]: time="2025-09-09T00:55:35.743729739Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 00:55:35.743833 containerd[1638]: time="2025-09-09T00:55:35.743823097Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 00:55:35.743893 containerd[1638]: time="2025-09-09T00:55:35.743883702Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 00:55:35.743940 containerd[1638]: time="2025-09-09T00:55:35.743931470Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745477940Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745497101Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745509196Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745517251Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745525378Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745531492Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745536898Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 00:55:35.745655 containerd[1638]: time="2025-09-09T00:55:35.745544629Z" level=info 
msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745633290Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745812105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745822739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745829229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745834851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745841129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745847908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745853400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745859882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745877077Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745883933Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745929028Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745937659Z" level=info msg="Start snapshots syncer" Sep 9 00:55:35.746662 containerd[1638]: time="2025-09-09T00:55:35.745956013Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 00:55:35.746871 containerd[1638]: time="2025-09-09T00:55:35.746099259Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 00:55:35.746871 containerd[1638]: time="2025-09-09T00:55:35.746129991Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746173710Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746227884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746240322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746246569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746252952Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746260385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746266798Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746286250Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746304858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 00:55:35.746955 containerd[1638]: 
time="2025-09-09T00:55:35.746311642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746318477Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746336111Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746345321Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:55:35.746955 containerd[1638]: time="2025-09-09T00:55:35.746350386Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746355656Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746359855Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746365365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746371586Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746380526Z" level=info msg="runtime interface created" Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746383646Z" level=info msg="created NRI interface" Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746388212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746394672Z" level=info msg="Connect containerd service" Sep 9 00:55:35.747136 containerd[1638]: time="2025-09-09T00:55:35.746421647Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:55:35.754520 containerd[1638]: time="2025-09-09T00:55:35.752732991Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:55:35.788365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:55:35.942689 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Sep 9 00:55:35.966757 tar[1600]: linux-amd64/README.md Sep 9 00:55:35.971848 containerd[1638]: time="2025-09-09T00:55:35.971826458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:55:35.971939 containerd[1638]: time="2025-09-09T00:55:35.971930285Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976248105Z" level=info msg="Start subscribing containerd event" Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976298465Z" level=info msg="Start recovering state" Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976366654Z" level=info msg="Start event monitor" Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976382117Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976389789Z" level=info msg="Start streaming server" Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976398099Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976403184Z" level=info msg="runtime interface starting up..." Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976408952Z" level=info msg="starting plugins..." Sep 9 00:55:35.976680 containerd[1638]: time="2025-09-09T00:55:35.976416999Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 00:55:35.979004 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:55:35.981582 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:55:35.981932 containerd[1638]: time="2025-09-09T00:55:35.981917120Z" level=info msg="containerd successfully booted in 0.276207s" Sep 9 00:55:35.995614 (udev-worker)[1549]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Sep 9 00:55:36.013817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:55:36.044032 systemd-logind[1590]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:55:36.051982 systemd-logind[1590]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 00:55:36.146358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:55:36.917799 systemd-networkd[1545]: ens192: Gained IPv6LL Sep 9 00:55:36.918170 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Sep 9 00:55:36.919434 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:55:36.920068 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:55:36.921339 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Sep 9 00:55:36.922616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:55:36.931697 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:55:36.952258 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:55:36.962566 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:55:36.962716 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Sep 9 00:55:36.963282 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:55:37.841403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:55:37.841746 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:55:37.842661 systemd[1]: Startup finished in 2.699s (kernel) + 7.145s (initrd) + 5.156s (userspace) = 15.001s. 
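The "no network config found in /etc/cni/net.d" error logged during containerd start-up above is expected on a freshly provisioned node: the CRI plugin keeps retrying until a CNI add-on installs a network config. A hedged sketch of a minimal bridge conflist that would satisfy the check (names and subnet illustrative; in a real cluster the CNI plugin writes this, not an operator):

    mkdir -p /etc/cni/net.d
    cat <<'EOF' > /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF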
Sep 9 00:55:37.848242 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:55:37.877133 login[1701]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 00:55:37.879989 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 00:55:37.886970 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:55:37.887769 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:55:37.890360 systemd-logind[1590]: New session 1 of user core. Sep 9 00:55:37.894098 systemd-logind[1590]: New session 2 of user core. Sep 9 00:55:37.903359 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:55:37.906923 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:55:37.923392 (systemd)[1821]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:55:37.924658 systemd-logind[1590]: New session c1 of user core. Sep 9 00:55:38.030294 systemd[1821]: Queued start job for default target default.target. Sep 9 00:55:38.037476 systemd[1821]: Created slice app.slice - User Application Slice. Sep 9 00:55:38.037493 systemd[1821]: Reached target paths.target - Paths. Sep 9 00:55:38.037520 systemd[1821]: Reached target timers.target - Timers. Sep 9 00:55:38.038199 systemd[1821]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:55:38.045902 systemd[1821]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:55:38.045935 systemd[1821]: Reached target sockets.target - Sockets. Sep 9 00:55:38.045959 systemd[1821]: Reached target basic.target - Basic System. Sep 9 00:55:38.045980 systemd[1821]: Reached target default.target - Main User Target. Sep 9 00:55:38.045996 systemd[1821]: Startup finished in 117ms. Sep 9 00:55:38.046407 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:55:38.055738 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:55:38.056358 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:55:38.456956 kubelet[1814]: E0909 00:55:38.456921 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:55:38.458417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:55:38.458569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:55:38.458873 systemd[1]: kubelet.service: Consumed 632ms CPU time, 264.3M memory peak. Sep 9 00:55:38.564890 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Sep 9 00:55:48.709299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:55:48.710518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:55:49.058221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 00:55:49.061308 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:55:49.094927 kubelet[1869]: E0909 00:55:49.094902 1869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:55:49.097183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:55:49.097390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:55:49.097751 systemd[1]: kubelet.service: Consumed 106ms CPU time, 109M memory peak. Sep 9 00:55:59.347953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:55:59.349963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:55:59.709416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:55:59.712634 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:55:59.746535 kubelet[1884]: E0909 00:55:59.746500 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:55:59.747574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:55:59.747668 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:55:59.748124 systemd[1]: kubelet.service: Consumed 105ms CPU time, 110.6M memory peak. Sep 9 00:56:05.638021 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:56:05.639782 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:53418.service - OpenSSH per-connection server daemon (139.178.68.195:53418). Sep 9 00:56:05.700721 sshd[1892]: Accepted publickey for core from 139.178.68.195 port 53418 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:56:05.701339 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:56:05.704385 systemd-logind[1590]: New session 3 of user core. Sep 9 00:56:05.714731 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:56:05.768192 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:53426.service - OpenSSH per-connection server daemon (139.178.68.195:53426). Sep 9 00:56:05.806030 sshd[1898]: Accepted publickey for core from 139.178.68.195 port 53426 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:56:05.806832 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:56:05.810085 systemd-logind[1590]: New session 4 of user core. Sep 9 00:56:05.819768 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:56:05.867812 sshd[1901]: Connection closed by 139.178.68.195 port 53426 Sep 9 00:56:05.868492 sshd-session[1898]: pam_unix(sshd:session): session closed for user core Sep 9 00:56:05.876767 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:53426.service: Deactivated successfully. 
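The kubelet failures above, with the climbing restart counter, are the node waiting to be bootstrapped: kubelet is trying to load /var/lib/kubelet/config.yaml, a file normally written by kubeadm init or kubeadm join rather than shipped in the image, so the unit keeps retrying until provisioning runs. A hedged sketch of the smallest KubeletConfiguration that satisfies that path (values illustrative; a real node also needs kubeconfig credentials):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
    systemctl restart kubelet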
Sep 9 00:56:05.877762 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:56:05.878343 systemd-logind[1590]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:56:05.879561 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:53440.service - OpenSSH per-connection server daemon (139.178.68.195:53440). Sep 9 00:56:05.881752 systemd-logind[1590]: Removed session 4. Sep 9 00:56:05.920901 sshd[1907]: Accepted publickey for core from 139.178.68.195 port 53440 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:56:05.921733 sshd-session[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:56:05.924672 systemd-logind[1590]: New session 5 of user core. Sep 9 00:56:05.930763 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:56:05.976367 sshd[1910]: Connection closed by 139.178.68.195 port 53440 Sep 9 00:56:05.976808 sshd-session[1907]: pam_unix(sshd:session): session closed for user core Sep 9 00:56:05.987286 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:53440.service: Deactivated successfully. Sep 9 00:56:05.988524 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:56:05.989120 systemd-logind[1590]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:56:05.990783 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:53446.service - OpenSSH per-connection server daemon (139.178.68.195:53446). Sep 9 00:56:05.991583 systemd-logind[1590]: Removed session 5. Sep 9 00:56:06.027823 sshd[1916]: Accepted publickey for core from 139.178.68.195 port 53446 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:56:06.029010 sshd-session[1916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:56:06.031740 systemd-logind[1590]: New session 6 of user core. Sep 9 00:56:06.040725 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:56:06.089070 sshd[1919]: Connection closed by 139.178.68.195 port 53446 Sep 9 00:56:06.089913 sshd-session[1916]: pam_unix(sshd:session): session closed for user core Sep 9 00:56:06.098540 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:53446.service: Deactivated successfully. Sep 9 00:56:06.099707 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:56:06.100278 systemd-logind[1590]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:56:06.102072 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:53462.service - OpenSSH per-connection server daemon (139.178.68.195:53462). Sep 9 00:56:06.103015 systemd-logind[1590]: Removed session 6. Sep 9 00:56:06.137952 sshd[1925]: Accepted publickey for core from 139.178.68.195 port 53462 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:56:06.138788 sshd-session[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:56:06.141596 systemd-logind[1590]: New session 7 of user core. Sep 9 00:56:06.151750 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 00:56:06.242662 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:56:06.242905 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:56:06.251904 sudo[1929]: pam_unix(sudo:session): session closed for user root Sep 9 00:56:06.252695 sshd[1928]: Connection closed by 139.178.68.195 port 53462 Sep 9 00:56:06.253456 sshd-session[1925]: pam_unix(sshd:session): session closed for user core Sep 9 00:56:06.258631 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:53462.service: Deactivated successfully. Sep 9 00:56:06.259599 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:56:06.260082 systemd-logind[1590]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:56:06.261573 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:53476.service - OpenSSH per-connection server daemon (139.178.68.195:53476). Sep 9 00:56:06.262957 systemd-logind[1590]: Removed session 7. Sep 9 00:56:06.302019 sshd[1935]: Accepted publickey for core from 139.178.68.195 port 53476 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:56:06.302855 sshd-session[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:56:06.305578 systemd-logind[1590]: New session 8 of user core. Sep 9 00:56:06.312737 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:56:06.362895 sudo[1940]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:56:06.363055 sudo[1940]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:56:06.365437 sudo[1940]: pam_unix(sudo:session): session closed for user root Sep 9 00:56:06.368508 sudo[1939]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:56:06.368696 sudo[1939]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:56:06.374524 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:56:06.401793 augenrules[1962]: No rules Sep 9 00:56:06.402384 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:56:06.402521 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:56:06.403257 sudo[1939]: pam_unix(sudo:session): session closed for user root Sep 9 00:56:06.404597 sshd[1938]: Connection closed by 139.178.68.195 port 53476 Sep 9 00:56:06.404794 sshd-session[1935]: pam_unix(sshd:session): session closed for user core Sep 9 00:56:06.409611 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:53476.service: Deactivated successfully. Sep 9 00:56:06.410384 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:56:06.410862 systemd-logind[1590]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:56:06.412906 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:53486.service - OpenSSH per-connection server daemon (139.178.68.195:53486). Sep 9 00:56:06.413770 systemd-logind[1590]: Removed session 8. Sep 9 00:56:06.445660 sshd[1971]: Accepted publickey for core from 139.178.68.195 port 53486 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:56:06.446593 sshd-session[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:56:06.449820 systemd-logind[1590]: New session 9 of user core. Sep 9 00:56:06.458748 systemd[1]: Started session-9.scope - Session 9 of User core. 
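augenrules reports "No rules" above because /etc/audit/rules.d/ holds no rule files after the clean-up performed via sudo; rules are normally re-added as drop-ins and compiled with augenrules. A hedged sketch with an illustrative watch rule:

    # watch writes and attribute changes to /etc/passwd, tagged "identity"
    echo '-w /etc/passwd -p wa -k identity' > /etc/audit/rules.d/identity.rules
    # merge rules.d/ into the active rule set and load it
    augenrules --load
    auditctl -l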
Sep 9 00:56:06.506208 sudo[1975]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:56:06.506384 sudo[1975]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:56:06.883857 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:56:06.894937 (dockerd)[1992]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:56:07.159856 dockerd[1992]: time="2025-09-09T00:56:07.159668313Z" level=info msg="Starting up" Sep 9 00:56:07.160519 dockerd[1992]: time="2025-09-09T00:56:07.160508775Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 00:56:07.166770 dockerd[1992]: time="2025-09-09T00:56:07.166729364Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 00:56:07.187744 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport403417109-merged.mount: Deactivated successfully. Sep 9 00:56:07.204869 dockerd[1992]: time="2025-09-09T00:56:07.204685293Z" level=info msg="Loading containers: start." Sep 9 00:56:07.213660 kernel: Initializing XFRM netlink socket Sep 9 00:56:07.421816 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Sep 9 00:56:07.452389 systemd-networkd[1545]: docker0: Link UP Sep 9 00:56:07.453701 dockerd[1992]: time="2025-09-09T00:56:07.453675329Z" level=info msg="Loading containers: done." Sep 9 00:56:07.464588 dockerd[1992]: time="2025-09-09T00:56:07.464311060Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:56:07.464588 dockerd[1992]: time="2025-09-09T00:56:07.464381068Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 00:56:07.464588 dockerd[1992]: time="2025-09-09T00:56:07.464437540Z" level=info msg="Initializing buildkit" Sep 9 00:56:07.475912 dockerd[1992]: time="2025-09-09T00:56:07.475887777Z" level=info msg="Completed buildkit initialization" Sep 9 00:56:07.480730 dockerd[1992]: time="2025-09-09T00:56:07.480672699Z" level=info msg="Daemon has completed initialization" Sep 9 00:56:07.480930 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:56:07.481162 dockerd[1992]: time="2025-09-09T00:56:07.481016794Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:57:40.464386 systemd-resolved[1490]: Clock change detected. Flushing caches. Sep 9 00:57:40.464724 systemd-timesyncd[1522]: Contacted time server 23.150.41.122:123 (2.flatcar.pool.ntp.org). Sep 9 00:57:40.464761 systemd-timesyncd[1522]: Initial clock synchronization to Tue 2025-09-09 00:57:40.464340 UTC. Sep 9 00:57:40.963839 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1722770103-merged.mount: Deactivated successfully. Sep 9 00:57:41.635559 containerd[1638]: time="2025-09-09T00:57:41.635512356Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 00:57:42.487428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520493663.mount: Deactivated successfully. Sep 9 00:57:42.555786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
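dockerd above settles on the overlay2 storage driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; the active driver and daemon options can be confirmed from a shell. A hedged sketch of typical commands, not output from this host:

    docker info --format 'driver={{.Driver}} version={{.ServerVersion}}'
    # daemon-wide options live in /etc/docker/daemon.json, e.g. pinning the driver
    cat <<'EOF' > /etc/docker/daemon.json
    { "storage-driver": "overlay2" }
    EOF
    systemctl restart docker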
Sep 9 00:57:42.557115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:57:42.743994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:57:42.751741 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:57:42.777672 kubelet[2217]: E0909 00:57:42.777636 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:57:42.779114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:57:42.779260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:57:42.779599 systemd[1]: kubelet.service: Consumed 108ms CPU time, 112M memory peak. Sep 9 00:57:44.113931 containerd[1638]: time="2025-09-09T00:57:44.113367065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:44.113931 containerd[1638]: time="2025-09-09T00:57:44.113779873Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 00:57:44.113931 containerd[1638]: time="2025-09-09T00:57:44.113905165Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:44.115414 containerd[1638]: time="2025-09-09T00:57:44.115394827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:44.116321 containerd[1638]: time="2025-09-09T00:57:44.116304435Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.480747973s" Sep 9 00:57:44.116350 containerd[1638]: time="2025-09-09T00:57:44.116325615Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 00:57:44.116699 containerd[1638]: time="2025-09-09T00:57:44.116679875Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 00:57:45.560479 containerd[1638]: time="2025-09-09T00:57:45.560425935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:45.565392 containerd[1638]: time="2025-09-09T00:57:45.565241972Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 00:57:45.570456 containerd[1638]: time="2025-09-09T00:57:45.570425284Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:45.578743 
containerd[1638]: time="2025-09-09T00:57:45.578720821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:45.579683 containerd[1638]: time="2025-09-09T00:57:45.579396214Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.462683634s" Sep 9 00:57:45.579683 containerd[1638]: time="2025-09-09T00:57:45.579424693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 00:57:45.579775 containerd[1638]: time="2025-09-09T00:57:45.579737841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 00:57:47.615465 containerd[1638]: time="2025-09-09T00:57:47.615426356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:47.616052 containerd[1638]: time="2025-09-09T00:57:47.616035815Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 00:57:47.616392 containerd[1638]: time="2025-09-09T00:57:47.616371590Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:47.617881 containerd[1638]: time="2025-09-09T00:57:47.617860347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:47.618603 containerd[1638]: time="2025-09-09T00:57:47.618539545Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.038777509s" Sep 9 00:57:47.618603 containerd[1638]: time="2025-09-09T00:57:47.618555519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 00:57:47.619032 containerd[1638]: time="2025-09-09T00:57:47.619004773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 00:57:48.533327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788739758.mount: Deactivated successfully. 
Sep 9 00:57:49.069476 containerd[1638]: time="2025-09-09T00:57:49.069275375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:49.074361 containerd[1638]: time="2025-09-09T00:57:49.074334212Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 00:57:49.079536 containerd[1638]: time="2025-09-09T00:57:49.079505522Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:49.084175 containerd[1638]: time="2025-09-09T00:57:49.084146080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:49.084681 containerd[1638]: time="2025-09-09T00:57:49.084575567Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.465536185s" Sep 9 00:57:49.084681 containerd[1638]: time="2025-09-09T00:57:49.084600676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 00:57:49.084890 containerd[1638]: time="2025-09-09T00:57:49.084873242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:57:49.781043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354057563.mount: Deactivated successfully. 
Sep 9 00:57:51.291643 containerd[1638]: time="2025-09-09T00:57:51.291600918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:51.299679 containerd[1638]: time="2025-09-09T00:57:51.299646458Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 00:57:51.304413 containerd[1638]: time="2025-09-09T00:57:51.304383483Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:51.310461 containerd[1638]: time="2025-09-09T00:57:51.309846109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:51.310461 containerd[1638]: time="2025-09-09T00:57:51.310382431Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.225490304s" Sep 9 00:57:51.310461 containerd[1638]: time="2025-09-09T00:57:51.310399705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:57:51.310861 containerd[1638]: time="2025-09-09T00:57:51.310779639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:57:52.037022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount388818879.mount: Deactivated successfully. 
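The containerd messages carry RFC 3339 timestamps with nanosecond precision; the coredns pull starts at 00:57:49.084873242Z and the "Pulled image" entry lands at 00:57:51.310382431Z, roughly matching the logged 2.225490304s. Python's datetime only keeps microseconds, so a small sketch that trims the extra digits before computing the delta; purely illustrative.

```python
from datetime import datetime, timezone
import re

def parse_containerd_ts(ts: str) -> datetime:
    """Parse an RFC 3339 timestamp like 2025-09-09T00:57:49.084873242Z.
    datetime stores only microseconds, so truncate the fraction to 6 digits."""
    m = re.match(r"(.+?)\.(\d+)Z$", ts)
    base, frac = m.group(1), m.group(2)[:6].ljust(6, "0")
    return datetime.strptime(f"{base}.{frac}", "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)

start = parse_containerd_ts("2025-09-09T00:57:49.084873242Z")  # PullImage coredns
done = parse_containerd_ts("2025-09-09T00:57:51.310382431Z")   # Pulled image coredns
print((done - start).total_seconds())  # ~2.2255s, close to the logged 2.225490304s
```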
Sep 9 00:57:52.039043 containerd[1638]: time="2025-09-09T00:57:52.039024743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:57:52.039423 containerd[1638]: time="2025-09-09T00:57:52.039408694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:57:52.039526 containerd[1638]: time="2025-09-09T00:57:52.039464273Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:57:52.040527 containerd[1638]: time="2025-09-09T00:57:52.040503761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:57:52.040890 containerd[1638]: time="2025-09-09T00:57:52.040875736Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 730.079895ms" Sep 9 00:57:52.040941 containerd[1638]: time="2025-09-09T00:57:52.040932160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:57:52.041216 containerd[1638]: time="2025-09-09T00:57:52.041200588Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 00:57:52.751386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293576777.mount: Deactivated successfully. Sep 9 00:57:52.805804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 00:57:52.807383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:57:53.278775 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:57:53.281272 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:57:53.320570 kubelet[2355]: E0909 00:57:53.320533 2355 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:57:53.321951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:57:53.322089 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:57:53.322466 systemd[1]: kubelet.service: Consumed 89ms CPU time, 109.8M memory peak. Sep 9 00:57:53.633801 update_engine[1591]: I20250909 00:57:53.633469 1591 update_attempter.cc:509] Updating boot flags... 
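The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory", status=1/FAILURE, restart counter at 4) is the expected state before kubeadm has written that file. A minimal sketch of the same pre-flight check, not the kubelet's actual code; only the path comes from the log entry.

```python
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the failing log entry above

def config_present(path: str = KUBELET_CONFIG) -> bool:
    """Mimic the failing check: the kubelet exits when its config file is missing."""
    if not os.path.isfile(path):
        print(f"kubelet config not found: {path} (kubeadm has not written it yet)",
              file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if config_present() else 1)
```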
Sep 9 00:57:56.626329 containerd[1638]: time="2025-09-09T00:57:56.626272847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:56.633923 containerd[1638]: time="2025-09-09T00:57:56.633900850Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 00:57:56.643731 containerd[1638]: time="2025-09-09T00:57:56.643691377Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:56.667568 containerd[1638]: time="2025-09-09T00:57:56.667531290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:57:56.668092 containerd[1638]: time="2025-09-09T00:57:56.667987367Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.626769161s" Sep 9 00:57:56.668092 containerd[1638]: time="2025-09-09T00:57:56.668006516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 00:57:58.417257 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:57:58.417616 systemd[1]: kubelet.service: Consumed 89ms CPU time, 109.8M memory peak. Sep 9 00:57:58.419794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:57:58.440788 systemd[1]: Reload requested from client PID 2460 ('systemctl') (unit session-9.scope)... Sep 9 00:57:58.440888 systemd[1]: Reloading... Sep 9 00:57:58.517502 zram_generator::config[2506]: No configuration found. Sep 9 00:57:58.596138 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:57:58.663630 systemd[1]: Reloading finished in 222 ms. Sep 9 00:57:58.696222 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:57:58.696286 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:57:58.696525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:57:58.696558 systemd[1]: kubelet.service: Consumed 47ms CPU time, 69.8M memory peak. Sep 9 00:57:58.697820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:57:58.995228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:57:58.998076 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:57:59.030001 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:57:59.030205 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 9 00:57:59.030233 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:57:59.030354 kubelet[2570]: I0909 00:57:59.030335 2570 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:57:59.290704 kubelet[2570]: I0909 00:57:59.290457 2570 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:57:59.290704 kubelet[2570]: I0909 00:57:59.290479 2570 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:57:59.290704 kubelet[2570]: I0909 00:57:59.290642 2570 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:57:59.633894 kubelet[2570]: E0909 00:57:59.633853 2570 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:57:59.639722 kubelet[2570]: I0909 00:57:59.639693 2570 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:57:59.690247 kubelet[2570]: I0909 00:57:59.690231 2570 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:57:59.700792 kubelet[2570]: I0909 00:57:59.700772 2570 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:57:59.705688 kubelet[2570]: I0909 00:57:59.705424 2570 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:57:59.705688 kubelet[2570]: I0909 00:57:59.705465 2570 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:57:59.720753 kubelet[2570]: I0909 00:57:59.720736 2570 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:57:59.720977 kubelet[2570]: I0909 00:57:59.720819 2570 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:57:59.725933 kubelet[2570]: I0909 00:57:59.725915 2570 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:57:59.776141 kubelet[2570]: I0909 00:57:59.776043 2570 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:57:59.776141 kubelet[2570]: I0909 00:57:59.776077 2570 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:57:59.776795 kubelet[2570]: I0909 00:57:59.776585 2570 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:57:59.776795 kubelet[2570]: I0909 00:57:59.776601 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:57:59.780232 kubelet[2570]: W0909 00:57:59.780208 2570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:57:59.780307 kubelet[2570]: E0909 00:57:59.780296 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:57:59.780716 kubelet[2570]: W0909 00:57:59.780575 2570 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:57:59.780716 kubelet[2570]: E0909 00:57:59.780597 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:57:59.783476 kubelet[2570]: I0909 00:57:59.783465 2570 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 00:57:59.786158 kubelet[2570]: I0909 00:57:59.786149 2570 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:57:59.786235 kubelet[2570]: W0909 00:57:59.786229 2570 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:57:59.786792 kubelet[2570]: I0909 00:57:59.786784 2570 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:57:59.786932 kubelet[2570]: I0909 00:57:59.786847 2570 server.go:1287] "Started kubelet" Sep 9 00:57:59.787265 kubelet[2570]: I0909 00:57:59.787242 2570 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:57:59.788742 kubelet[2570]: I0909 00:57:59.788411 2570 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:57:59.789641 kubelet[2570]: I0909 00:57:59.789610 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:57:59.789820 kubelet[2570]: I0909 00:57:59.789812 2570 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:57:59.790588 kubelet[2570]: I0909 00:57:59.790575 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:57:59.795205 kubelet[2570]: E0909 00:57:59.790884 2570 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186377536c414405 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:57:59.786832901 +0000 UTC m=+0.786403860,LastTimestamp:2025-09-09 00:57:59.786832901 +0000 UTC m=+0.786403860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:57:59.795417 kubelet[2570]: I0909 00:57:59.795407 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:57:59.798244 kubelet[2570]: I0909 00:57:59.798134 2570 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:57:59.799042 kubelet[2570]: E0909 00:57:59.798317 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 
00:57:59.799765 kubelet[2570]: E0909 00:57:59.799747 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Sep 9 00:57:59.800077 kubelet[2570]: I0909 00:57:59.800067 2570 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:57:59.800169 kubelet[2570]: I0909 00:57:59.800159 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:57:59.800466 kubelet[2570]: I0909 00:57:59.800442 2570 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:57:59.800498 kubelet[2570]: I0909 00:57:59.800479 2570 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:57:59.803803 kubelet[2570]: I0909 00:57:59.803745 2570 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:57:59.807105 kubelet[2570]: W0909 00:57:59.807035 2570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:57:59.807105 kubelet[2570]: E0909 00:57:59.807070 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:57:59.813297 kubelet[2570]: I0909 00:57:59.813261 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:57:59.814461 kubelet[2570]: I0909 00:57:59.814186 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:57:59.814461 kubelet[2570]: I0909 00:57:59.814201 2570 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:57:59.814461 kubelet[2570]: I0909 00:57:59.814212 2570 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
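The nodeConfig dump a few entries above lists the default hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A sketch of how such thresholds could be evaluated against observed signals; the numbers come from the logged config, the evaluation code and sample values are illustrative.

```python
# Hard eviction thresholds as logged in the kubelet's nodeConfig above.
# Quantities are absolute bytes; percentages are fractions of capacity.
THRESHOLDS = {
    "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.available":  {"percentage": 0.15},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def under_pressure(signal: str, available: float, capacity: float) -> bool:
    """Return True if the observed signal is below its hard eviction threshold."""
    t = THRESHOLDS[signal]
    limit = t.get("quantity", t.get("percentage", 0) * capacity)
    return available < limit

# Hypothetical sample: 80Mi of free memory trips memory.available.
print(under_pressure("memory.available", available=80 * 1024**2, capacity=2 * 1024**3))  # True
```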
Sep 9 00:57:59.814461 kubelet[2570]: I0909 00:57:59.814217 2570 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:57:59.814461 kubelet[2570]: E0909 00:57:59.814244 2570 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:57:59.817566 kubelet[2570]: I0909 00:57:59.817549 2570 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:57:59.817566 kubelet[2570]: I0909 00:57:59.817560 2570 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:57:59.817566 kubelet[2570]: I0909 00:57:59.817570 2570 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:57:59.817831 kubelet[2570]: W0909 00:57:59.817819 2570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:57:59.817903 kubelet[2570]: E0909 00:57:59.817891 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:57:59.818587 kubelet[2570]: I0909 00:57:59.818575 2570 policy_none.go:49] "None policy: Start" Sep 9 00:57:59.818587 kubelet[2570]: I0909 00:57:59.818588 2570 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:57:59.818633 kubelet[2570]: I0909 00:57:59.818595 2570 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:57:59.823166 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:57:59.836651 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:57:59.839742 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:57:59.864193 kubelet[2570]: I0909 00:57:59.864176 2570 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:57:59.864388 kubelet[2570]: I0909 00:57:59.864381 2570 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:57:59.864461 kubelet[2570]: I0909 00:57:59.864425 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:57:59.864763 kubelet[2570]: I0909 00:57:59.864755 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:57:59.865597 kubelet[2570]: E0909 00:57:59.865581 2570 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:57:59.865652 kubelet[2570]: E0909 00:57:59.865617 2570 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:57:59.921563 systemd[1]: Created slice kubepods-burstable-pode07d6e0f6d210430fac523dc82465082.slice - libcontainer container kubepods-burstable-pode07d6e0f6d210430fac523dc82465082.slice. 
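With cgroupDriver=systemd the kubelet creates kubepods.slice, a per-QoS slice beneath it, and then a per-pod slice; the entry above shows kubepods-burstable-pode07d6e0f6d210430fac523dc82465082.slice for the first static pod. A sketch of that naming pattern as it appears in this log; the underscore escaping of dashed UIDs and the guaranteed-class case are assumptions about the systemd cgroup driver, not taken from this log.

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build a systemd slice name like the ones the kubelet creates in this log.
    Assumption: dashes in a pod UID are escaped to underscores by the systemd
    cgroup driver, and guaranteed pods sit directly under kubepods.slice."""
    uid = pod_uid.replace("-", "_")
    if qos_class.lower() == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class.lower()}-pod{uid}.slice"

# Matches the slice created above for the kube-apiserver static pod.
print(pod_slice_name("burstable", "e07d6e0f6d210430fac523dc82465082"))
# -> kubepods-burstable-pode07d6e0f6d210430fac523dc82465082.slice
```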
Sep 9 00:57:59.932190 kubelet[2570]: E0909 00:57:59.932160 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:57:59.933919 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 9 00:57:59.935255 kubelet[2570]: E0909 00:57:59.935238 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:57:59.937326 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 9 00:57:59.938486 kubelet[2570]: E0909 00:57:59.938471 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:57:59.966356 kubelet[2570]: I0909 00:57:59.966324 2570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:57:59.966692 kubelet[2570]: E0909 00:57:59.966675 2570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Sep 9 00:58:00.001261 kubelet[2570]: I0909 00:58:00.001110 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e07d6e0f6d210430fac523dc82465082-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07d6e0f6d210430fac523dc82465082\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:00.001261 kubelet[2570]: I0909 00:58:00.001137 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:00.001261 kubelet[2570]: I0909 00:58:00.001148 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:00.001261 kubelet[2570]: I0909 00:58:00.001158 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:00.001261 kubelet[2570]: I0909 00:58:00.001167 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:00.001411 kubelet[2570]: I0909 00:58:00.001176 2570 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e07d6e0f6d210430fac523dc82465082-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07d6e0f6d210430fac523dc82465082\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:00.001411 kubelet[2570]: I0909 00:58:00.001186 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e07d6e0f6d210430fac523dc82465082-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e07d6e0f6d210430fac523dc82465082\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:00.001411 kubelet[2570]: I0909 00:58:00.001194 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:00.001411 kubelet[2570]: I0909 00:58:00.001203 2570 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:58:00.001716 kubelet[2570]: E0909 00:58:00.001699 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Sep 9 00:58:00.168623 kubelet[2570]: I0909 00:58:00.168463 2570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:58:00.168908 kubelet[2570]: E0909 00:58:00.168890 2570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Sep 9 00:58:00.234222 containerd[1638]: time="2025-09-09T00:58:00.234159751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e07d6e0f6d210430fac523dc82465082,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:00.236709 containerd[1638]: time="2025-09-09T00:58:00.236570377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:00.239871 containerd[1638]: time="2025-09-09T00:58:00.239847210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:00.318398 containerd[1638]: time="2025-09-09T00:58:00.318367276Z" level=info msg="connecting to shim e909b3aafc99c57eedc95611ceeefea7f347a2c67ddadfca111c504024cbddf8" address="unix:///run/containerd/s/3ff1786b9cad9df7f5d799824de94209ddbe2e5899317f2c383f0caa207918c8" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:58:00.319271 containerd[1638]: time="2025-09-09T00:58:00.319225362Z" level=info msg="connecting to shim ab422822629351f611bc938240ccec54730c1793e03eb5584bc78412ca1f8a03" address="unix:///run/containerd/s/3886c6aa8b1fe0ac612d07c96a81270356251edd2f2403af044ca5a7a8eade90" namespace=k8s.io 
protocol=ttrpc version=3 Sep 9 00:58:00.325333 containerd[1638]: time="2025-09-09T00:58:00.325313142Z" level=info msg="connecting to shim f255a36eb9d7632dbbb01bc496896f139496154dea46219cf12eae609624f336" address="unix:///run/containerd/s/62483e7c791cd7a5b7f0cdab42b65153a46e0d610b9e813f2d935150e6b7c45c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:58:00.402037 kubelet[2570]: E0909 00:58:00.402011 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Sep 9 00:58:00.444577 systemd[1]: Started cri-containerd-ab422822629351f611bc938240ccec54730c1793e03eb5584bc78412ca1f8a03.scope - libcontainer container ab422822629351f611bc938240ccec54730c1793e03eb5584bc78412ca1f8a03. Sep 9 00:58:00.445584 systemd[1]: Started cri-containerd-e909b3aafc99c57eedc95611ceeefea7f347a2c67ddadfca111c504024cbddf8.scope - libcontainer container e909b3aafc99c57eedc95611ceeefea7f347a2c67ddadfca111c504024cbddf8. Sep 9 00:58:00.446436 systemd[1]: Started cri-containerd-f255a36eb9d7632dbbb01bc496896f139496154dea46219cf12eae609624f336.scope - libcontainer container f255a36eb9d7632dbbb01bc496896f139496154dea46219cf12eae609624f336. Sep 9 00:58:00.501590 containerd[1638]: time="2025-09-09T00:58:00.501338639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e07d6e0f6d210430fac523dc82465082,Namespace:kube-system,Attempt:0,} returns sandbox id \"e909b3aafc99c57eedc95611ceeefea7f347a2c67ddadfca111c504024cbddf8\"" Sep 9 00:58:00.508409 containerd[1638]: time="2025-09-09T00:58:00.508380015Z" level=info msg="CreateContainer within sandbox \"e909b3aafc99c57eedc95611ceeefea7f347a2c67ddadfca111c504024cbddf8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:58:00.517907 containerd[1638]: time="2025-09-09T00:58:00.517419969Z" level=info msg="Container 587267f8a905cde1c88d9483adbdaa51a9bcf6d079d4c28aa4ec1b9b21e14fba: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:00.525868 containerd[1638]: time="2025-09-09T00:58:00.525845306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"f255a36eb9d7632dbbb01bc496896f139496154dea46219cf12eae609624f336\"" Sep 9 00:58:00.526480 containerd[1638]: time="2025-09-09T00:58:00.526467034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab422822629351f611bc938240ccec54730c1793e03eb5584bc78412ca1f8a03\"" Sep 9 00:58:00.527940 containerd[1638]: time="2025-09-09T00:58:00.527919442Z" level=info msg="CreateContainer within sandbox \"f255a36eb9d7632dbbb01bc496896f139496154dea46219cf12eae609624f336\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:58:00.528163 containerd[1638]: time="2025-09-09T00:58:00.528152662Z" level=info msg="CreateContainer within sandbox \"ab422822629351f611bc938240ccec54730c1793e03eb5584bc78412ca1f8a03\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:58:00.531240 containerd[1638]: time="2025-09-09T00:58:00.531221806Z" level=info msg="Container 0c28ae690f1a8a842fe9ca39da8d84306d84f550ae1f688a15e1c97e13c7939b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:00.548034 
containerd[1638]: time="2025-09-09T00:58:00.547919579Z" level=info msg="CreateContainer within sandbox \"e909b3aafc99c57eedc95611ceeefea7f347a2c67ddadfca111c504024cbddf8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"587267f8a905cde1c88d9483adbdaa51a9bcf6d079d4c28aa4ec1b9b21e14fba\"" Sep 9 00:58:00.548470 containerd[1638]: time="2025-09-09T00:58:00.548453750Z" level=info msg="StartContainer for \"587267f8a905cde1c88d9483adbdaa51a9bcf6d079d4c28aa4ec1b9b21e14fba\"" Sep 9 00:58:00.549064 containerd[1638]: time="2025-09-09T00:58:00.549049418Z" level=info msg="connecting to shim 587267f8a905cde1c88d9483adbdaa51a9bcf6d079d4c28aa4ec1b9b21e14fba" address="unix:///run/containerd/s/3ff1786b9cad9df7f5d799824de94209ddbe2e5899317f2c383f0caa207918c8" protocol=ttrpc version=3 Sep 9 00:58:00.565545 systemd[1]: Started cri-containerd-587267f8a905cde1c88d9483adbdaa51a9bcf6d079d4c28aa4ec1b9b21e14fba.scope - libcontainer container 587267f8a905cde1c88d9483adbdaa51a9bcf6d079d4c28aa4ec1b9b21e14fba. Sep 9 00:58:00.570101 kubelet[2570]: I0909 00:58:00.570073 2570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:58:00.570301 kubelet[2570]: E0909 00:58:00.570281 2570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Sep 9 00:58:00.584649 containerd[1638]: time="2025-09-09T00:58:00.584582718Z" level=info msg="CreateContainer within sandbox \"f255a36eb9d7632dbbb01bc496896f139496154dea46219cf12eae609624f336\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c28ae690f1a8a842fe9ca39da8d84306d84f550ae1f688a15e1c97e13c7939b\"" Sep 9 00:58:00.584928 containerd[1638]: time="2025-09-09T00:58:00.584904282Z" level=info msg="StartContainer for \"0c28ae690f1a8a842fe9ca39da8d84306d84f550ae1f688a15e1c97e13c7939b\"" Sep 9 00:58:00.585450 containerd[1638]: time="2025-09-09T00:58:00.585429798Z" level=info msg="connecting to shim 0c28ae690f1a8a842fe9ca39da8d84306d84f550ae1f688a15e1c97e13c7939b" address="unix:///run/containerd/s/62483e7c791cd7a5b7f0cdab42b65153a46e0d610b9e813f2d935150e6b7c45c" protocol=ttrpc version=3 Sep 9 00:58:00.597614 systemd[1]: Started cri-containerd-0c28ae690f1a8a842fe9ca39da8d84306d84f550ae1f688a15e1c97e13c7939b.scope - libcontainer container 0c28ae690f1a8a842fe9ca39da8d84306d84f550ae1f688a15e1c97e13c7939b. 
Sep 9 00:58:00.600900 containerd[1638]: time="2025-09-09T00:58:00.600706642Z" level=info msg="Container 548e6ace97a5a6130502b81b9706720df0376981d17e243706a4073758474b29: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:00.620311 containerd[1638]: time="2025-09-09T00:58:00.620283764Z" level=info msg="StartContainer for \"587267f8a905cde1c88d9483adbdaa51a9bcf6d079d4c28aa4ec1b9b21e14fba\" returns successfully" Sep 9 00:58:00.633410 containerd[1638]: time="2025-09-09T00:58:00.633387666Z" level=info msg="CreateContainer within sandbox \"ab422822629351f611bc938240ccec54730c1793e03eb5584bc78412ca1f8a03\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"548e6ace97a5a6130502b81b9706720df0376981d17e243706a4073758474b29\"" Sep 9 00:58:00.634496 containerd[1638]: time="2025-09-09T00:58:00.633919886Z" level=info msg="StartContainer for \"548e6ace97a5a6130502b81b9706720df0376981d17e243706a4073758474b29\"" Sep 9 00:58:00.634825 containerd[1638]: time="2025-09-09T00:58:00.634745734Z" level=info msg="connecting to shim 548e6ace97a5a6130502b81b9706720df0376981d17e243706a4073758474b29" address="unix:///run/containerd/s/3886c6aa8b1fe0ac612d07c96a81270356251edd2f2403af044ca5a7a8eade90" protocol=ttrpc version=3 Sep 9 00:58:00.648560 containerd[1638]: time="2025-09-09T00:58:00.648438057Z" level=info msg="StartContainer for \"0c28ae690f1a8a842fe9ca39da8d84306d84f550ae1f688a15e1c97e13c7939b\" returns successfully" Sep 9 00:58:00.654590 systemd[1]: Started cri-containerd-548e6ace97a5a6130502b81b9706720df0376981d17e243706a4073758474b29.scope - libcontainer container 548e6ace97a5a6130502b81b9706720df0376981d17e243706a4073758474b29. Sep 9 00:58:00.711974 containerd[1638]: time="2025-09-09T00:58:00.711905694Z" level=info msg="StartContainer for \"548e6ace97a5a6130502b81b9706720df0376981d17e243706a4073758474b29\" returns successfully" Sep 9 00:58:00.733508 kubelet[2570]: W0909 00:58:00.733473 2570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:58:00.733680 kubelet[2570]: E0909 00:58:00.733615 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:58:00.746353 kubelet[2570]: W0909 00:58:00.746291 2570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:58:00.746353 kubelet[2570]: E0909 00:58:00.746334 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:58:00.827412 kubelet[2570]: E0909 00:58:00.827377 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:58:00.828567 
kubelet[2570]: E0909 00:58:00.828556 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:58:00.829340 kubelet[2570]: E0909 00:58:00.829320 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:58:00.843710 kubelet[2570]: W0909 00:58:00.843685 2570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:58:00.843710 kubelet[2570]: E0909 00:58:00.843709 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:58:00.870000 kubelet[2570]: W0909 00:58:00.869962 2570 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Sep 9 00:58:00.870096 kubelet[2570]: E0909 00:58:00.870014 2570 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:58:01.203384 kubelet[2570]: E0909 00:58:01.203175 2570 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Sep 9 00:58:01.373721 kubelet[2570]: I0909 00:58:01.373690 2570 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:58:01.374152 kubelet[2570]: E0909 00:58:01.374135 2570 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Sep 9 00:58:01.830764 kubelet[2570]: E0909 00:58:01.830733 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:58:01.831140 kubelet[2570]: E0909 00:58:01.831126 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:58:02.805190 kubelet[2570]: E0909 00:58:02.805156 2570 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:58:02.854419 kubelet[2570]: E0909 00:58:02.854386 2570 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 00:58:02.975590 kubelet[2570]: I0909 00:58:02.975566 2570 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Sep 9 00:58:02.985464 kubelet[2570]: I0909 00:58:02.985099 2570 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:58:02.985464 kubelet[2570]: E0909 00:58:02.985124 2570 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:58:02.992389 kubelet[2570]: E0909 00:58:02.992364 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.092914 kubelet[2570]: E0909 00:58:03.092879 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.193823 kubelet[2570]: E0909 00:58:03.193792 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.222799 kubelet[2570]: E0909 00:58:03.222754 2570 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:58:03.294567 kubelet[2570]: E0909 00:58:03.294532 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.395524 kubelet[2570]: E0909 00:58:03.395435 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.496658 kubelet[2570]: E0909 00:58:03.496500 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.597328 kubelet[2570]: E0909 00:58:03.597297 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.698195 kubelet[2570]: E0909 00:58:03.698117 2570 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:58:03.782161 kubelet[2570]: I0909 00:58:03.782111 2570 apiserver.go:52] "Watching apiserver" Sep 9 00:58:03.800479 kubelet[2570]: I0909 00:58:03.799203 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:58:03.801534 kubelet[2570]: I0909 00:58:03.801111 2570 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:58:03.805527 kubelet[2570]: I0909 00:58:03.805246 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:03.809305 kubelet[2570]: I0909 00:58:03.809280 2570 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:04.211504 systemd[1]: Reload requested from client PID 2836 ('systemctl') (unit session-9.scope)... Sep 9 00:58:04.211720 systemd[1]: Reloading... Sep 9 00:58:04.282479 zram_generator::config[2886]: No configuration found. Sep 9 00:58:04.375641 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Sep 9 00:58:04.458749 systemd[1]: Reloading finished in 246 ms. Sep 9 00:58:04.479084 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:58:04.494731 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:58:04.494992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 00:58:04.495025 systemd[1]: kubelet.service: Consumed 523ms CPU time, 127.6M memory peak. Sep 9 00:58:04.497551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:58:05.314572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:58:05.322713 (kubelet)[2947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:58:05.382732 kubelet[2947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:58:05.382732 kubelet[2947]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:58:05.382732 kubelet[2947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:58:05.382732 kubelet[2947]: I0909 00:58:05.382607 2947 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:58:05.388313 kubelet[2947]: I0909 00:58:05.387622 2947 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:58:05.388313 kubelet[2947]: I0909 00:58:05.387641 2947 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:58:05.388313 kubelet[2947]: I0909 00:58:05.387809 2947 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:58:05.388815 kubelet[2947]: I0909 00:58:05.388805 2947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:58:05.395300 kubelet[2947]: I0909 00:58:05.395275 2947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:58:05.400344 kubelet[2947]: I0909 00:58:05.400325 2947 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:58:05.402145 kubelet[2947]: I0909 00:58:05.402134 2947 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:58:05.402401 kubelet[2947]: I0909 00:58:05.402384 2947 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:58:05.402549 kubelet[2947]: I0909 00:58:05.402434 2947 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:58:05.402630 kubelet[2947]: I0909 00:58:05.402623 2947 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:58:05.402667 kubelet[2947]: I0909 00:58:05.402662 2947 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:58:05.402720 kubelet[2947]: I0909 00:58:05.402715 2947 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:58:05.402868 kubelet[2947]: I0909 00:58:05.402863 2947 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:58:05.403294 kubelet[2947]: I0909 00:58:05.402908 2947 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:58:05.403364 kubelet[2947]: I0909 00:58:05.403349 2947 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:58:05.403403 kubelet[2947]: I0909 00:58:05.403398 2947 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:58:05.404103 sudo[2961]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 00:58:05.404290 sudo[2961]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 00:58:05.405485 kubelet[2947]: I0909 00:58:05.405194 2947 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 00:58:05.407397 kubelet[2947]: I0909 00:58:05.406913 2947 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:58:05.407397 kubelet[2947]: I0909 00:58:05.407187 2947 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:58:05.407397 kubelet[2947]: I0909 00:58:05.407203 2947 server.go:1287] "Started 
kubelet" Sep 9 00:58:05.417873 kubelet[2947]: I0909 00:58:05.417814 2947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:58:05.424515 kubelet[2947]: I0909 00:58:05.423668 2947 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:58:05.425918 kubelet[2947]: I0909 00:58:05.425900 2947 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:58:05.427768 kubelet[2947]: I0909 00:58:05.427722 2947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:58:05.427982 kubelet[2947]: I0909 00:58:05.427971 2947 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:58:05.431120 kubelet[2947]: I0909 00:58:05.431100 2947 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:58:05.437136 kubelet[2947]: I0909 00:58:05.437014 2947 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:58:05.438330 kubelet[2947]: I0909 00:58:05.438320 2947 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:58:05.438738 kubelet[2947]: I0909 00:58:05.438730 2947 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:58:05.446721 kubelet[2947]: I0909 00:58:05.446582 2947 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:58:05.449729 kubelet[2947]: E0909 00:58:05.449716 2947 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:58:05.450884 kubelet[2947]: I0909 00:58:05.450794 2947 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:58:05.451012 kubelet[2947]: I0909 00:58:05.450942 2947 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:58:05.452143 kubelet[2947]: I0909 00:58:05.452073 2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:58:05.454783 kubelet[2947]: I0909 00:58:05.454768 2947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:58:05.454856 kubelet[2947]: I0909 00:58:05.454851 2947 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:58:05.454899 kubelet[2947]: I0909 00:58:05.454894 2947 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:58:05.454936 kubelet[2947]: I0909 00:58:05.454930 2947 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:58:05.454993 kubelet[2947]: E0909 00:58:05.454984 2947 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488243 2947 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488254 2947 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488264 2947 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488358 2947 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488365 2947 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488376 2947 policy_none.go:49] "None policy: Start" Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488381 2947 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:58:05.488473 kubelet[2947]: I0909 00:58:05.488387 2947 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:58:05.488753 kubelet[2947]: I0909 00:58:05.488719 2947 state_mem.go:75] "Updated machine memory state" Sep 9 00:58:05.492912 kubelet[2947]: I0909 00:58:05.492870 2947 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:58:05.493197 kubelet[2947]: I0909 00:58:05.493145 2947 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:58:05.493352 kubelet[2947]: I0909 00:58:05.493242 2947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:58:05.493772 kubelet[2947]: I0909 00:58:05.493765 2947 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:58:05.494510 kubelet[2947]: E0909 00:58:05.494395 2947 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:58:05.556870 kubelet[2947]: I0909 00:58:05.556626 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:05.559625 kubelet[2947]: I0909 00:58:05.559557 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:05.560043 kubelet[2947]: I0909 00:58:05.560026 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:58:05.564343 kubelet[2947]: E0909 00:58:05.564202 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:05.568502 kubelet[2947]: E0909 00:58:05.565472 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:58:05.568502 kubelet[2947]: E0909 00:58:05.565592 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:05.598329 kubelet[2947]: I0909 00:58:05.598311 2947 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:58:05.639966 kubelet[2947]: I0909 00:58:05.639939 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:05.639966 kubelet[2947]: I0909 00:58:05.639964 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e07d6e0f6d210430fac523dc82465082-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e07d6e0f6d210430fac523dc82465082\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:05.640067 kubelet[2947]: I0909 00:58:05.639977 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:05.640067 kubelet[2947]: I0909 00:58:05.639989 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:05.640067 kubelet[2947]: I0909 00:58:05.640000 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:05.640067 kubelet[2947]: I0909 00:58:05.640010 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:58:05.640067 kubelet[2947]: I0909 00:58:05.640019 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:58:05.640158 kubelet[2947]: I0909 00:58:05.640028 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e07d6e0f6d210430fac523dc82465082-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07d6e0f6d210430fac523dc82465082\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:05.640158 kubelet[2947]: I0909 00:58:05.640037 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e07d6e0f6d210430fac523dc82465082-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e07d6e0f6d210430fac523dc82465082\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:05.667461 kubelet[2947]: I0909 00:58:05.667433 2947 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:58:05.667805 kubelet[2947]: I0909 00:58:05.667573 2947 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:58:05.844889 sudo[2961]: pam_unix(sudo:session): session closed for user root Sep 9 00:58:06.405520 kubelet[2947]: I0909 00:58:06.405496 2947 apiserver.go:52] "Watching apiserver" Sep 9 00:58:06.437597 kubelet[2947]: I0909 00:58:06.437564 2947 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:58:06.474842 kubelet[2947]: I0909 00:58:06.474823 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:06.475081 kubelet[2947]: I0909 00:58:06.475071 2947 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:58:06.481239 kubelet[2947]: E0909 00:58:06.481219 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:58:06.483551 kubelet[2947]: E0909 00:58:06.483538 2947 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:58:06.537828 kubelet[2947]: I0909 00:58:06.537541 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.537528093 podStartE2EDuration="3.537528093s" podCreationTimestamp="2025-09-09 00:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:58:06.522855982 +0000 UTC m=+1.184053194" watchObservedRunningTime="2025-09-09 00:58:06.537528093 +0000 UTC m=+1.198725296" Sep 9 00:58:06.549301 kubelet[2947]: I0909 00:58:06.549226 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.54920738 podStartE2EDuration="3.54920738s" 
podCreationTimestamp="2025-09-09 00:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:58:06.537708535 +0000 UTC m=+1.198905735" watchObservedRunningTime="2025-09-09 00:58:06.54920738 +0000 UTC m=+1.210404587" Sep 9 00:58:06.557525 kubelet[2947]: I0909 00:58:06.557435 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.557425008 podStartE2EDuration="3.557425008s" podCreationTimestamp="2025-09-09 00:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:58:06.549602967 +0000 UTC m=+1.210800179" watchObservedRunningTime="2025-09-09 00:58:06.557425008 +0000 UTC m=+1.218622215" Sep 9 00:58:08.153683 sudo[1975]: pam_unix(sudo:session): session closed for user root Sep 9 00:58:08.154789 sshd[1974]: Connection closed by 139.178.68.195 port 53486 Sep 9 00:58:08.155515 sshd-session[1971]: pam_unix(sshd:session): session closed for user core Sep 9 00:58:08.158193 systemd-logind[1590]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:58:08.158278 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:53486.service: Deactivated successfully. Sep 9 00:58:08.159921 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:58:08.160431 systemd[1]: session-9.scope: Consumed 3.195s CPU time, 209.4M memory peak. Sep 9 00:58:08.162481 systemd-logind[1590]: Removed session 9. Sep 9 00:58:08.556015 kubelet[2947]: I0909 00:58:08.556001 2947 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:58:08.557053 containerd[1638]: time="2025-09-09T00:58:08.556539826Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:58:08.557217 kubelet[2947]: I0909 00:58:08.556661 2947 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:58:08.680226 systemd[1]: Created slice kubepods-besteffort-podc08eada2_fac9_4fca_a987_ff56f2a82fb3.slice - libcontainer container kubepods-besteffort-podc08eada2_fac9_4fca_a987_ff56f2a82fb3.slice. Sep 9 00:58:08.695583 systemd[1]: Created slice kubepods-burstable-podaa5b4bb1_fa79_45cb_888e_d3826e4219db.slice - libcontainer container kubepods-burstable-podaa5b4bb1_fa79_45cb_888e_d3826e4219db.slice. 
Sep 9 00:58:08.761428 kubelet[2947]: I0909 00:58:08.761405 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-cgroup\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.761557 kubelet[2947]: I0909 00:58:08.761547 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c08eada2-fac9-4fca-a987-ff56f2a82fb3-lib-modules\") pod \"kube-proxy-qnf6v\" (UID: \"c08eada2-fac9-4fca-a987-ff56f2a82fb3\") " pod="kube-system/kube-proxy-qnf6v" Sep 9 00:58:08.761626 kubelet[2947]: I0909 00:58:08.761618 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-run\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.761702 kubelet[2947]: I0909 00:58:08.761694 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-bpf-maps\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.761741 kubelet[2947]: I0909 00:58:08.761735 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cni-path\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.761794 kubelet[2947]: I0909 00:58:08.761787 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa5b4bb1-fa79-45cb-888e-d3826e4219db-clustermesh-secrets\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.761835 kubelet[2947]: I0909 00:58:08.761826 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-kernel\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.761887 kubelet[2947]: I0909 00:58:08.761880 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c08eada2-fac9-4fca-a987-ff56f2a82fb3-kube-proxy\") pod \"kube-proxy-qnf6v\" (UID: \"c08eada2-fac9-4fca-a987-ff56f2a82fb3\") " pod="kube-system/kube-proxy-qnf6v" Sep 9 00:58:08.761929 kubelet[2947]: I0909 00:58:08.761923 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hostproc\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.761980 kubelet[2947]: I0909 00:58:08.761973 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/c08eada2-fac9-4fca-a987-ff56f2a82fb3-xtables-lock\") pod \"kube-proxy-qnf6v\" (UID: \"c08eada2-fac9-4fca-a987-ff56f2a82fb3\") " pod="kube-system/kube-proxy-qnf6v" Sep 9 00:58:08.762020 kubelet[2947]: I0909 00:58:08.762013 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6m48\" (UniqueName: \"kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-kube-api-access-q6m48\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.762111 kubelet[2947]: I0909 00:58:08.762063 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-etc-cni-netd\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.762111 kubelet[2947]: I0909 00:58:08.762076 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-lib-modules\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.762111 kubelet[2947]: I0909 00:58:08.762085 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-net\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.762111 kubelet[2947]: I0909 00:58:08.762101 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-xtables-lock\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.762206 kubelet[2947]: I0909 00:58:08.762118 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hubble-tls\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.762206 kubelet[2947]: I0909 00:58:08.762137 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9vc\" (UniqueName: \"kubernetes.io/projected/c08eada2-fac9-4fca-a987-ff56f2a82fb3-kube-api-access-gr9vc\") pod \"kube-proxy-qnf6v\" (UID: \"c08eada2-fac9-4fca-a987-ff56f2a82fb3\") " pod="kube-system/kube-proxy-qnf6v" Sep 9 00:58:08.762206 kubelet[2947]: I0909 00:58:08.762148 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-config-path\") pod \"cilium-6ml68\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " pod="kube-system/cilium-6ml68" Sep 9 00:58:08.884861 kubelet[2947]: E0909 00:58:08.883674 2947 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:58:08.884861 kubelet[2947]: E0909 00:58:08.883698 2947 projected.go:194] Error preparing data for projected volume kube-api-access-q6m48 for pod 
kube-system/cilium-6ml68: configmap "kube-root-ca.crt" not found Sep 9 00:58:08.884861 kubelet[2947]: E0909 00:58:08.883744 2947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-kube-api-access-q6m48 podName:aa5b4bb1-fa79-45cb-888e-d3826e4219db nodeName:}" failed. No retries permitted until 2025-09-09 00:58:09.383725949 +0000 UTC m=+4.044923156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q6m48" (UniqueName: "kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-kube-api-access-q6m48") pod "cilium-6ml68" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db") : configmap "kube-root-ca.crt" not found Sep 9 00:58:08.885066 kubelet[2947]: E0909 00:58:08.884993 2947 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:58:08.885066 kubelet[2947]: E0909 00:58:08.885013 2947 projected.go:194] Error preparing data for projected volume kube-api-access-gr9vc for pod kube-system/kube-proxy-qnf6v: configmap "kube-root-ca.crt" not found Sep 9 00:58:08.885066 kubelet[2947]: E0909 00:58:08.885048 2947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c08eada2-fac9-4fca-a987-ff56f2a82fb3-kube-api-access-gr9vc podName:c08eada2-fac9-4fca-a987-ff56f2a82fb3 nodeName:}" failed. No retries permitted until 2025-09-09 00:58:09.385033898 +0000 UTC m=+4.046231099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gr9vc" (UniqueName: "kubernetes.io/projected/c08eada2-fac9-4fca-a987-ff56f2a82fb3-kube-api-access-gr9vc") pod "kube-proxy-qnf6v" (UID: "c08eada2-fac9-4fca-a987-ff56f2a82fb3") : configmap "kube-root-ca.crt" not found Sep 9 00:58:09.590264 containerd[1638]: time="2025-09-09T00:58:09.590141883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnf6v,Uid:c08eada2-fac9-4fca-a987-ff56f2a82fb3,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:09.601125 containerd[1638]: time="2025-09-09T00:58:09.601027794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ml68,Uid:aa5b4bb1-fa79-45cb-888e-d3826e4219db,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:09.642005 systemd[1]: Created slice kubepods-besteffort-podf38568a6_588d_413d_b7a8_9f3d2da27f6a.slice - libcontainer container kubepods-besteffort-podf38568a6_588d_413d_b7a8_9f3d2da27f6a.slice. 
Sep 9 00:58:09.654433 containerd[1638]: time="2025-09-09T00:58:09.654266358Z" level=info msg="connecting to shim 189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978" address="unix:///run/containerd/s/ca7b112657331760c55896fd0d330260ee81bfafdf2f2300d02666205c696c08" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:58:09.658282 containerd[1638]: time="2025-09-09T00:58:09.658241283Z" level=info msg="connecting to shim b34ce11d85db4b9773255f32469a407700f1fc2949aa964bb6ed0475a3663180" address="unix:///run/containerd/s/c5c46692542784901a50d070a5ac6db74155d468b08ab2da5488308f6cd5d4a7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:58:09.668301 kubelet[2947]: I0909 00:58:09.668274 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f38568a6-588d-413d-b7a8-9f3d2da27f6a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h4ffv\" (UID: \"f38568a6-588d-413d-b7a8-9f3d2da27f6a\") " pod="kube-system/cilium-operator-6c4d7847fc-h4ffv" Sep 9 00:58:09.668301 kubelet[2947]: I0909 00:58:09.668307 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkkf\" (UniqueName: \"kubernetes.io/projected/f38568a6-588d-413d-b7a8-9f3d2da27f6a-kube-api-access-cdkkf\") pod \"cilium-operator-6c4d7847fc-h4ffv\" (UID: \"f38568a6-588d-413d-b7a8-9f3d2da27f6a\") " pod="kube-system/cilium-operator-6c4d7847fc-h4ffv" Sep 9 00:58:09.686666 systemd[1]: Started cri-containerd-189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978.scope - libcontainer container 189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978. Sep 9 00:58:09.692698 systemd[1]: Started cri-containerd-b34ce11d85db4b9773255f32469a407700f1fc2949aa964bb6ed0475a3663180.scope - libcontainer container b34ce11d85db4b9773255f32469a407700f1fc2949aa964bb6ed0475a3663180. 
Sep 9 00:58:09.718884 containerd[1638]: time="2025-09-09T00:58:09.718825863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6ml68,Uid:aa5b4bb1-fa79-45cb-888e-d3826e4219db,Namespace:kube-system,Attempt:0,} returns sandbox id \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\"" Sep 9 00:58:09.720083 containerd[1638]: time="2025-09-09T00:58:09.719993854Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:58:09.741328 containerd[1638]: time="2025-09-09T00:58:09.741249898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qnf6v,Uid:c08eada2-fac9-4fca-a987-ff56f2a82fb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b34ce11d85db4b9773255f32469a407700f1fc2949aa964bb6ed0475a3663180\"" Sep 9 00:58:09.745302 containerd[1638]: time="2025-09-09T00:58:09.744712646Z" level=info msg="CreateContainer within sandbox \"b34ce11d85db4b9773255f32469a407700f1fc2949aa964bb6ed0475a3663180\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:58:09.776063 containerd[1638]: time="2025-09-09T00:58:09.776009449Z" level=info msg="Container 9c9a03c439401bfbaca06828dea79b8c56377f8c62cfc08dfbf2f10462cbf176: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:09.780488 containerd[1638]: time="2025-09-09T00:58:09.780416955Z" level=info msg="CreateContainer within sandbox \"b34ce11d85db4b9773255f32469a407700f1fc2949aa964bb6ed0475a3663180\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c9a03c439401bfbaca06828dea79b8c56377f8c62cfc08dfbf2f10462cbf176\"" Sep 9 00:58:09.780832 containerd[1638]: time="2025-09-09T00:58:09.780816398Z" level=info msg="StartContainer for \"9c9a03c439401bfbaca06828dea79b8c56377f8c62cfc08dfbf2f10462cbf176\"" Sep 9 00:58:09.783285 containerd[1638]: time="2025-09-09T00:58:09.783266555Z" level=info msg="connecting to shim 9c9a03c439401bfbaca06828dea79b8c56377f8c62cfc08dfbf2f10462cbf176" address="unix:///run/containerd/s/c5c46692542784901a50d070a5ac6db74155d468b08ab2da5488308f6cd5d4a7" protocol=ttrpc version=3 Sep 9 00:58:09.803605 systemd[1]: Started cri-containerd-9c9a03c439401bfbaca06828dea79b8c56377f8c62cfc08dfbf2f10462cbf176.scope - libcontainer container 9c9a03c439401bfbaca06828dea79b8c56377f8c62cfc08dfbf2f10462cbf176. Sep 9 00:58:09.842794 containerd[1638]: time="2025-09-09T00:58:09.842098887Z" level=info msg="StartContainer for \"9c9a03c439401bfbaca06828dea79b8c56377f8c62cfc08dfbf2f10462cbf176\" returns successfully" Sep 9 00:58:09.948484 containerd[1638]: time="2025-09-09T00:58:09.948436355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h4ffv,Uid:f38568a6-588d-413d-b7a8-9f3d2da27f6a,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:09.961654 containerd[1638]: time="2025-09-09T00:58:09.961617737Z" level=info msg="connecting to shim ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596" address="unix:///run/containerd/s/851f81fb665cb4a4e7e955b2c0d16bfe7060c6ed2fab88fab076c0927693d0e5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:58:09.984571 systemd[1]: Started cri-containerd-ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596.scope - libcontainer container ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596. 
Sep 9 00:58:10.024918 containerd[1638]: time="2025-09-09T00:58:10.024895476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h4ffv,Uid:f38568a6-588d-413d-b7a8-9f3d2da27f6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\"" Sep 9 00:58:10.500077 kubelet[2947]: I0909 00:58:10.500034 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qnf6v" podStartSLOduration=2.50002015 podStartE2EDuration="2.50002015s" podCreationTimestamp="2025-09-09 00:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:58:10.49974107 +0000 UTC m=+5.160938283" watchObservedRunningTime="2025-09-09 00:58:10.50002015 +0000 UTC m=+5.161217363" Sep 9 00:58:14.426816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount672644211.mount: Deactivated successfully. Sep 9 00:58:18.391277 containerd[1638]: time="2025-09-09T00:58:18.391207993Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 00:58:18.399855 containerd[1638]: time="2025-09-09T00:58:18.399745026Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:58:18.421386 containerd[1638]: time="2025-09-09T00:58:18.420542911Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:58:18.421386 containerd[1638]: time="2025-09-09T00:58:18.421317208Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.701304446s" Sep 9 00:58:18.421386 containerd[1638]: time="2025-09-09T00:58:18.421334900Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 00:58:18.422340 containerd[1638]: time="2025-09-09T00:58:18.422319992Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:58:18.437231 containerd[1638]: time="2025-09-09T00:58:18.423068475Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:58:18.491564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842183747.mount: Deactivated successfully. Sep 9 00:58:18.497493 containerd[1638]: time="2025-09-09T00:58:18.495730545Z" level=info msg="Container 22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:18.497301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012946777.mount: Deactivated successfully. 
Sep 9 00:58:18.501105 containerd[1638]: time="2025-09-09T00:58:18.501064982Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\"" Sep 9 00:58:18.501686 containerd[1638]: time="2025-09-09T00:58:18.501665755Z" level=info msg="StartContainer for \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\"" Sep 9 00:58:18.503483 containerd[1638]: time="2025-09-09T00:58:18.503219445Z" level=info msg="connecting to shim 22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb" address="unix:///run/containerd/s/ca7b112657331760c55896fd0d330260ee81bfafdf2f2300d02666205c696c08" protocol=ttrpc version=3 Sep 9 00:58:18.529810 systemd[1]: Started cri-containerd-22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb.scope - libcontainer container 22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb. Sep 9 00:58:18.554989 containerd[1638]: time="2025-09-09T00:58:18.554932147Z" level=info msg="StartContainer for \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" returns successfully" Sep 9 00:58:18.565856 systemd[1]: cri-containerd-22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb.scope: Deactivated successfully. Sep 9 00:58:18.579388 containerd[1638]: time="2025-09-09T00:58:18.579274586Z" level=info msg="received exit event container_id:\"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" id:\"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" pid:3358 exited_at:{seconds:1757379498 nanos:568638806}" Sep 9 00:58:18.586589 containerd[1638]: time="2025-09-09T00:58:18.586559350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" id:\"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" pid:3358 exited_at:{seconds:1757379498 nanos:568638806}" Sep 9 00:58:19.490233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb-rootfs.mount: Deactivated successfully. 
Sep 9 00:58:19.542714 containerd[1638]: time="2025-09-09T00:58:19.542580100Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:58:19.649698 containerd[1638]: time="2025-09-09T00:58:19.649674292Z" level=info msg="Container 3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:19.693306 containerd[1638]: time="2025-09-09T00:58:19.693276315Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\"" Sep 9 00:58:19.694542 containerd[1638]: time="2025-09-09T00:58:19.694523762Z" level=info msg="StartContainer for \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\"" Sep 9 00:58:19.695138 containerd[1638]: time="2025-09-09T00:58:19.695116273Z" level=info msg="connecting to shim 3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c" address="unix:///run/containerd/s/ca7b112657331760c55896fd0d330260ee81bfafdf2f2300d02666205c696c08" protocol=ttrpc version=3 Sep 9 00:58:19.715572 systemd[1]: Started cri-containerd-3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c.scope - libcontainer container 3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c. Sep 9 00:58:19.753911 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:58:19.754354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:58:19.754881 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:58:19.756810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:58:19.769923 containerd[1638]: time="2025-09-09T00:58:19.758573906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" id:\"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" pid:3402 exited_at:{seconds:1757379499 nanos:756999905}" Sep 9 00:58:19.769923 containerd[1638]: time="2025-09-09T00:58:19.759578119Z" level=info msg="received exit event container_id:\"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" id:\"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" pid:3402 exited_at:{seconds:1757379499 nanos:756999905}" Sep 9 00:58:19.769923 containerd[1638]: time="2025-09-09T00:58:19.767472899Z" level=info msg="StartContainer for \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" returns successfully" Sep 9 00:58:19.759505 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 00:58:19.760509 systemd[1]: cri-containerd-3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c.scope: Deactivated successfully. Sep 9 00:58:19.807748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:58:20.490442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c-rootfs.mount: Deactivated successfully. 
Sep 9 00:58:20.548774 containerd[1638]: time="2025-09-09T00:58:20.548733697Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:58:20.619546 containerd[1638]: time="2025-09-09T00:58:20.619239002Z" level=info msg="Container 2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:20.704229 containerd[1638]: time="2025-09-09T00:58:20.704200662Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\"" Sep 9 00:58:20.708456 containerd[1638]: time="2025-09-09T00:58:20.708360488Z" level=info msg="StartContainer for \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\"" Sep 9 00:58:20.710410 containerd[1638]: time="2025-09-09T00:58:20.710352513Z" level=info msg="connecting to shim 2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a" address="unix:///run/containerd/s/ca7b112657331760c55896fd0d330260ee81bfafdf2f2300d02666205c696c08" protocol=ttrpc version=3 Sep 9 00:58:20.736556 systemd[1]: Started cri-containerd-2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a.scope - libcontainer container 2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a. Sep 9 00:58:20.799681 containerd[1638]: time="2025-09-09T00:58:20.799580995Z" level=info msg="StartContainer for \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" returns successfully" Sep 9 00:58:20.909556 systemd[1]: cri-containerd-2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a.scope: Deactivated successfully. Sep 9 00:58:20.909910 systemd[1]: cri-containerd-2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a.scope: Consumed 17ms CPU time, 5.5M memory peak, 1M read from disk. Sep 9 00:58:20.911014 containerd[1638]: time="2025-09-09T00:58:20.910944213Z" level=info msg="received exit event container_id:\"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" id:\"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" pid:3460 exited_at:{seconds:1757379500 nanos:910649524}" Sep 9 00:58:20.912197 containerd[1638]: time="2025-09-09T00:58:20.912185449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" id:\"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" pid:3460 exited_at:{seconds:1757379500 nanos:910649524}" Sep 9 00:58:20.932714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a-rootfs.mount: Deactivated successfully. 
Sep 9 00:58:21.502383 containerd[1638]: time="2025-09-09T00:58:21.502333671Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:58:21.519012 containerd[1638]: time="2025-09-09T00:58:21.518969492Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 00:58:21.533133 containerd[1638]: time="2025-09-09T00:58:21.533094367Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:58:21.533844 containerd[1638]: time="2025-09-09T00:58:21.533687195Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.111128818s" Sep 9 00:58:21.533844 containerd[1638]: time="2025-09-09T00:58:21.533707158Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 00:58:21.535423 containerd[1638]: time="2025-09-09T00:58:21.535410757Z" level=info msg="CreateContainer within sandbox \"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:58:21.588877 containerd[1638]: time="2025-09-09T00:58:21.588847365Z" level=info msg="Container f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:21.589591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399024743.mount: Deactivated successfully. Sep 9 00:58:21.656464 containerd[1638]: time="2025-09-09T00:58:21.656413821Z" level=info msg="CreateContainer within sandbox \"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\"" Sep 9 00:58:21.656789 containerd[1638]: time="2025-09-09T00:58:21.656773588Z" level=info msg="StartContainer for \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\"" Sep 9 00:58:21.657252 containerd[1638]: time="2025-09-09T00:58:21.657227462Z" level=info msg="connecting to shim f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3" address="unix:///run/containerd/s/851f81fb665cb4a4e7e955b2c0d16bfe7060c6ed2fab88fab076c0927693d0e5" protocol=ttrpc version=3 Sep 9 00:58:21.676560 systemd[1]: Started cri-containerd-f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3.scope - libcontainer container f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3. 
Sep 9 00:58:21.977688 containerd[1638]: time="2025-09-09T00:58:21.977510393Z" level=info msg="StartContainer for \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" returns successfully" Sep 9 00:58:22.566676 containerd[1638]: time="2025-09-09T00:58:22.566649657Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:58:22.623875 containerd[1638]: time="2025-09-09T00:58:22.621372734Z" level=info msg="Container af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:22.651500 containerd[1638]: time="2025-09-09T00:58:22.651461485Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\"" Sep 9 00:58:22.651967 containerd[1638]: time="2025-09-09T00:58:22.651941815Z" level=info msg="StartContainer for \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\"" Sep 9 00:58:22.652545 containerd[1638]: time="2025-09-09T00:58:22.652505755Z" level=info msg="connecting to shim af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda" address="unix:///run/containerd/s/ca7b112657331760c55896fd0d330260ee81bfafdf2f2300d02666205c696c08" protocol=ttrpc version=3 Sep 9 00:58:22.673592 systemd[1]: Started cri-containerd-af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda.scope - libcontainer container af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda. Sep 9 00:58:22.711334 systemd[1]: cri-containerd-af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda.scope: Deactivated successfully. Sep 9 00:58:22.716375 containerd[1638]: time="2025-09-09T00:58:22.712252331Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" id:\"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" pid:3537 exited_at:{seconds:1757379502 nanos:711872287}" Sep 9 00:58:22.716827 containerd[1638]: time="2025-09-09T00:58:22.716758335Z" level=info msg="received exit event container_id:\"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" id:\"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" pid:3537 exited_at:{seconds:1757379502 nanos:711872287}" Sep 9 00:58:22.721810 containerd[1638]: time="2025-09-09T00:58:22.721787640Z" level=info msg="StartContainer for \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" returns successfully" Sep 9 00:58:22.728913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda-rootfs.mount: Deactivated successfully. 
Sep 9 00:58:22.999522 kubelet[2947]: I0909 00:58:22.999248 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h4ffv" podStartSLOduration=2.490697656 podStartE2EDuration="13.999234946s" podCreationTimestamp="2025-09-09 00:58:09 +0000 UTC" firstStartedPulling="2025-09-09 00:58:10.025716809 +0000 UTC m=+4.686914019" lastFinishedPulling="2025-09-09 00:58:21.534254105 +0000 UTC m=+16.195451309" observedRunningTime="2025-09-09 00:58:22.848600446 +0000 UTC m=+17.509797658" watchObservedRunningTime="2025-09-09 00:58:22.999234946 +0000 UTC m=+17.660432157" Sep 9 00:58:23.580326 containerd[1638]: time="2025-09-09T00:58:23.580302187Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:58:23.669591 containerd[1638]: time="2025-09-09T00:58:23.669564766Z" level=info msg="Container 9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:23.738546 containerd[1638]: time="2025-09-09T00:58:23.738505683Z" level=info msg="CreateContainer within sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\"" Sep 9 00:58:23.739527 containerd[1638]: time="2025-09-09T00:58:23.739507402Z" level=info msg="StartContainer for \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\"" Sep 9 00:58:23.740862 containerd[1638]: time="2025-09-09T00:58:23.740815387Z" level=info msg="connecting to shim 9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b" address="unix:///run/containerd/s/ca7b112657331760c55896fd0d330260ee81bfafdf2f2300d02666205c696c08" protocol=ttrpc version=3 Sep 9 00:58:23.755694 systemd[1]: Started cri-containerd-9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b.scope - libcontainer container 9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b. Sep 9 00:58:23.807411 containerd[1638]: time="2025-09-09T00:58:23.807384294Z" level=info msg="StartContainer for \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" returns successfully" Sep 9 00:58:24.224911 containerd[1638]: time="2025-09-09T00:58:24.224137288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" id:\"0850b44b01678797d34c932346cdedd8c47e8ede08d0a7eb0eaf805dd596bded\" pid:3605 exited_at:{seconds:1757379504 nanos:223796601}" Sep 9 00:58:24.323415 kubelet[2947]: I0909 00:58:24.323379 2947 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:58:24.402892 systemd[1]: Created slice kubepods-burstable-pod5879948b_cc7a_4b88_85cf_7fcc781629af.slice - libcontainer container kubepods-burstable-pod5879948b_cc7a_4b88_85cf_7fcc781629af.slice. Sep 9 00:58:24.410907 systemd[1]: Created slice kubepods-burstable-pod2aad8ffb_77d4_4c21_8860_ab4a17d756b5.slice - libcontainer container kubepods-burstable-pod2aad8ffb_77d4_4c21_8860_ab4a17d756b5.slice. 
Sep 9 00:58:24.476441 kubelet[2947]: I0909 00:58:24.476274 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cs4n\" (UniqueName: \"kubernetes.io/projected/2aad8ffb-77d4-4c21-8860-ab4a17d756b5-kube-api-access-8cs4n\") pod \"coredns-668d6bf9bc-7lwx4\" (UID: \"2aad8ffb-77d4-4c21-8860-ab4a17d756b5\") " pod="kube-system/coredns-668d6bf9bc-7lwx4" Sep 9 00:58:24.476441 kubelet[2947]: I0909 00:58:24.476316 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5879948b-cc7a-4b88-85cf-7fcc781629af-config-volume\") pod \"coredns-668d6bf9bc-2stdn\" (UID: \"5879948b-cc7a-4b88-85cf-7fcc781629af\") " pod="kube-system/coredns-668d6bf9bc-2stdn" Sep 9 00:58:24.476441 kubelet[2947]: I0909 00:58:24.476338 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2aad8ffb-77d4-4c21-8860-ab4a17d756b5-config-volume\") pod \"coredns-668d6bf9bc-7lwx4\" (UID: \"2aad8ffb-77d4-4c21-8860-ab4a17d756b5\") " pod="kube-system/coredns-668d6bf9bc-7lwx4" Sep 9 00:58:24.476441 kubelet[2947]: I0909 00:58:24.476353 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4bn\" (UniqueName: \"kubernetes.io/projected/5879948b-cc7a-4b88-85cf-7fcc781629af-kube-api-access-wz4bn\") pod \"coredns-668d6bf9bc-2stdn\" (UID: \"5879948b-cc7a-4b88-85cf-7fcc781629af\") " pod="kube-system/coredns-668d6bf9bc-2stdn" Sep 9 00:58:24.643108 kubelet[2947]: I0909 00:58:24.643064 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6ml68" podStartSLOduration=7.940635542 podStartE2EDuration="16.643050437s" podCreationTimestamp="2025-09-09 00:58:08 +0000 UTC" firstStartedPulling="2025-09-09 00:58:09.719571675 +0000 UTC m=+4.380768881" lastFinishedPulling="2025-09-09 00:58:18.421986576 +0000 UTC m=+13.083183776" observedRunningTime="2025-09-09 00:58:24.642871901 +0000 UTC m=+19.304069121" watchObservedRunningTime="2025-09-09 00:58:24.643050437 +0000 UTC m=+19.304247650" Sep 9 00:58:24.714191 containerd[1638]: time="2025-09-09T00:58:24.713967503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2stdn,Uid:5879948b-cc7a-4b88-85cf-7fcc781629af,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:24.715218 containerd[1638]: time="2025-09-09T00:58:24.715202262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lwx4,Uid:2aad8ffb-77d4-4c21-8860-ab4a17d756b5,Namespace:kube-system,Attempt:0,}" Sep 9 00:58:27.094063 systemd-networkd[1545]: cilium_host: Link UP Sep 9 00:58:27.094167 systemd-networkd[1545]: cilium_net: Link UP Sep 9 00:58:27.094273 systemd-networkd[1545]: cilium_net: Gained carrier Sep 9 00:58:27.094361 systemd-networkd[1545]: cilium_host: Gained carrier Sep 9 00:58:27.135526 systemd-networkd[1545]: cilium_net: Gained IPv6LL Sep 9 00:58:27.135700 systemd-networkd[1545]: cilium_host: Gained IPv6LL Sep 9 00:58:27.272522 systemd-networkd[1545]: cilium_vxlan: Link UP Sep 9 00:58:27.272526 systemd-networkd[1545]: cilium_vxlan: Gained carrier Sep 9 00:58:27.943493 kernel: NET: Registered PF_ALG protocol family Sep 9 00:58:28.427853 systemd-networkd[1545]: lxc_health: Link UP Sep 9 00:58:28.431603 systemd-networkd[1545]: lxc_health: Gained carrier Sep 9 00:58:28.479882 systemd-networkd[1545]: cilium_vxlan: Gained IPv6LL Sep 9 
00:58:28.808328 systemd-networkd[1545]: lxc44f339c7565a: Link UP Sep 9 00:58:28.821596 systemd-networkd[1545]: lxc3a6706e3a732: Link UP Sep 9 00:58:28.823607 kernel: eth0: renamed from tmp82efa Sep 9 00:58:28.826117 systemd-networkd[1545]: lxc44f339c7565a: Gained carrier Sep 9 00:58:28.827460 kernel: eth0: renamed from tmp556b3 Sep 9 00:58:28.829340 systemd-networkd[1545]: lxc3a6706e3a732: Gained carrier Sep 9 00:58:29.951608 systemd-networkd[1545]: lxc_health: Gained IPv6LL Sep 9 00:58:30.015584 systemd-networkd[1545]: lxc44f339c7565a: Gained IPv6LL Sep 9 00:58:30.527618 systemd-networkd[1545]: lxc3a6706e3a732: Gained IPv6LL Sep 9 00:58:31.446002 containerd[1638]: time="2025-09-09T00:58:31.445932730Z" level=info msg="connecting to shim 82efaa9489419646566f0fc1478f01e465c17c1d3fcb97558ccb6be78ff47a6b" address="unix:///run/containerd/s/3ad72c0e9d3999e9c5632cfe1afceb242e563cd28116cc9831fa434da96bc116" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:58:31.467386 containerd[1638]: time="2025-09-09T00:58:31.466803382Z" level=info msg="connecting to shim 556b3d882dcecadcf4c7617ce1493fefef3be0c753254cc52eca3cd5d29a4b53" address="unix:///run/containerd/s/4d2345f3c8787565134c09d42973e60dddee67f8458462f09afe80d94409013d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:58:31.492216 systemd[1]: Started cri-containerd-82efaa9489419646566f0fc1478f01e465c17c1d3fcb97558ccb6be78ff47a6b.scope - libcontainer container 82efaa9489419646566f0fc1478f01e465c17c1d3fcb97558ccb6be78ff47a6b. Sep 9 00:58:31.506923 systemd[1]: Started cri-containerd-556b3d882dcecadcf4c7617ce1493fefef3be0c753254cc52eca3cd5d29a4b53.scope - libcontainer container 556b3d882dcecadcf4c7617ce1493fefef3be0c753254cc52eca3cd5d29a4b53. Sep 9 00:58:31.516059 systemd-resolved[1490]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:58:31.524523 systemd-resolved[1490]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:58:31.565210 containerd[1638]: time="2025-09-09T00:58:31.565150709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2stdn,Uid:5879948b-cc7a-4b88-85cf-7fcc781629af,Namespace:kube-system,Attempt:0,} returns sandbox id \"82efaa9489419646566f0fc1478f01e465c17c1d3fcb97558ccb6be78ff47a6b\"" Sep 9 00:58:31.567259 containerd[1638]: time="2025-09-09T00:58:31.567169014Z" level=info msg="CreateContainer within sandbox \"82efaa9489419646566f0fc1478f01e465c17c1d3fcb97558ccb6be78ff47a6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:58:31.573020 containerd[1638]: time="2025-09-09T00:58:31.572998855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7lwx4,Uid:2aad8ffb-77d4-4c21-8860-ab4a17d756b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"556b3d882dcecadcf4c7617ce1493fefef3be0c753254cc52eca3cd5d29a4b53\"" Sep 9 00:58:31.574953 containerd[1638]: time="2025-09-09T00:58:31.574932530Z" level=info msg="CreateContainer within sandbox \"556b3d882dcecadcf4c7617ce1493fefef3be0c753254cc52eca3cd5d29a4b53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:58:31.581410 containerd[1638]: time="2025-09-09T00:58:31.581384258Z" level=info msg="Container 614e1aac1fe7fa5e8af5080d697d9954c595eb4631ddec34fd52ec23491f760e: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:58:31.582107 containerd[1638]: time="2025-09-09T00:58:31.582090040Z" level=info msg="Container f64bb92d6562e9c00188989e4bb4e270fd8f0e835d26c629f9754034bdf8f383: CDI devices 
from CRI Config.CDIDevices: []" Sep 9 00:58:31.585271 containerd[1638]: time="2025-09-09T00:58:31.585242984Z" level=info msg="CreateContainer within sandbox \"82efaa9489419646566f0fc1478f01e465c17c1d3fcb97558ccb6be78ff47a6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"614e1aac1fe7fa5e8af5080d697d9954c595eb4631ddec34fd52ec23491f760e\"" Sep 9 00:58:31.585560 containerd[1638]: time="2025-09-09T00:58:31.585293606Z" level=info msg="CreateContainer within sandbox \"556b3d882dcecadcf4c7617ce1493fefef3be0c753254cc52eca3cd5d29a4b53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f64bb92d6562e9c00188989e4bb4e270fd8f0e835d26c629f9754034bdf8f383\"" Sep 9 00:58:31.586507 containerd[1638]: time="2025-09-09T00:58:31.586486587Z" level=info msg="StartContainer for \"614e1aac1fe7fa5e8af5080d697d9954c595eb4631ddec34fd52ec23491f760e\"" Sep 9 00:58:31.586743 containerd[1638]: time="2025-09-09T00:58:31.586718065Z" level=info msg="StartContainer for \"f64bb92d6562e9c00188989e4bb4e270fd8f0e835d26c629f9754034bdf8f383\"" Sep 9 00:58:31.588180 containerd[1638]: time="2025-09-09T00:58:31.588088481Z" level=info msg="connecting to shim f64bb92d6562e9c00188989e4bb4e270fd8f0e835d26c629f9754034bdf8f383" address="unix:///run/containerd/s/4d2345f3c8787565134c09d42973e60dddee67f8458462f09afe80d94409013d" protocol=ttrpc version=3 Sep 9 00:58:31.588180 containerd[1638]: time="2025-09-09T00:58:31.588144333Z" level=info msg="connecting to shim 614e1aac1fe7fa5e8af5080d697d9954c595eb4631ddec34fd52ec23491f760e" address="unix:///run/containerd/s/3ad72c0e9d3999e9c5632cfe1afceb242e563cd28116cc9831fa434da96bc116" protocol=ttrpc version=3 Sep 9 00:58:31.603730 systemd[1]: Started cri-containerd-614e1aac1fe7fa5e8af5080d697d9954c595eb4631ddec34fd52ec23491f760e.scope - libcontainer container 614e1aac1fe7fa5e8af5080d697d9954c595eb4631ddec34fd52ec23491f760e. Sep 9 00:58:31.612578 systemd[1]: Started cri-containerd-f64bb92d6562e9c00188989e4bb4e270fd8f0e835d26c629f9754034bdf8f383.scope - libcontainer container f64bb92d6562e9c00188989e4bb4e270fd8f0e835d26c629f9754034bdf8f383. Sep 9 00:58:31.638314 containerd[1638]: time="2025-09-09T00:58:31.638152684Z" level=info msg="StartContainer for \"614e1aac1fe7fa5e8af5080d697d9954c595eb4631ddec34fd52ec23491f760e\" returns successfully" Sep 9 00:58:31.643682 containerd[1638]: time="2025-09-09T00:58:31.643656124Z" level=info msg="StartContainer for \"f64bb92d6562e9c00188989e4bb4e270fd8f0e835d26c629f9754034bdf8f383\" returns successfully" Sep 9 00:58:32.434513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773121913.mount: Deactivated successfully. 
Sep 9 00:58:32.643026 kubelet[2947]: I0909 00:58:32.642591 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7lwx4" podStartSLOduration=23.642571369 podStartE2EDuration="23.642571369s" podCreationTimestamp="2025-09-09 00:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:58:32.64201028 +0000 UTC m=+27.303207506" watchObservedRunningTime="2025-09-09 00:58:32.642571369 +0000 UTC m=+27.303768581" Sep 9 00:58:32.652994 kubelet[2947]: I0909 00:58:32.652947 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2stdn" podStartSLOduration=23.652931564 podStartE2EDuration="23.652931564s" podCreationTimestamp="2025-09-09 00:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:58:32.652654113 +0000 UTC m=+27.313851324" watchObservedRunningTime="2025-09-09 00:58:32.652931564 +0000 UTC m=+27.314128773" Sep 9 00:58:38.690584 kubelet[2947]: I0909 00:58:38.690471 2947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:59:18.704523 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:56064.service - OpenSSH per-connection server daemon (139.178.68.195:56064). Sep 9 00:59:18.836136 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 56064 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:18.837209 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:18.844297 systemd-logind[1590]: New session 10 of user core. Sep 9 00:59:18.854540 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:59:19.384671 sshd[4269]: Connection closed by 139.178.68.195 port 56064 Sep 9 00:59:19.385110 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:19.390751 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:56064.service: Deactivated successfully. Sep 9 00:59:19.392142 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:59:19.393879 systemd-logind[1590]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:59:19.395565 systemd-logind[1590]: Removed session 10. Sep 9 00:59:24.395589 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:45052.service - OpenSSH per-connection server daemon (139.178.68.195:45052). Sep 9 00:59:24.490491 sshd[4281]: Accepted publickey for core from 139.178.68.195 port 45052 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:24.491309 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:24.493968 systemd-logind[1590]: New session 11 of user core. Sep 9 00:59:24.499729 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:59:24.785000 sshd[4284]: Connection closed by 139.178.68.195 port 45052 Sep 9 00:59:24.784546 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:24.786783 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:45052.service: Deactivated successfully. Sep 9 00:59:24.788046 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:59:24.788669 systemd-logind[1590]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:59:24.789748 systemd-logind[1590]: Removed session 11. 
Sep 9 00:59:29.795255 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:45068.service - OpenSSH per-connection server daemon (139.178.68.195:45068). Sep 9 00:59:29.831981 sshd[4298]: Accepted publickey for core from 139.178.68.195 port 45068 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:29.832819 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:29.835406 systemd-logind[1590]: New session 12 of user core. Sep 9 00:59:29.847612 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:59:29.944792 sshd[4301]: Connection closed by 139.178.68.195 port 45068 Sep 9 00:59:29.945230 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:29.948042 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:45068.service: Deactivated successfully. Sep 9 00:59:29.949291 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:59:29.949842 systemd-logind[1590]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:59:29.950709 systemd-logind[1590]: Removed session 12. Sep 9 00:59:34.954739 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:57530.service - OpenSSH per-connection server daemon (139.178.68.195:57530). Sep 9 00:59:34.993871 sshd[4313]: Accepted publickey for core from 139.178.68.195 port 57530 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:34.994756 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:34.998234 systemd-logind[1590]: New session 13 of user core. Sep 9 00:59:35.002601 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:59:35.090833 sshd[4316]: Connection closed by 139.178.68.195 port 57530 Sep 9 00:59:35.091224 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:35.098929 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:57530.service: Deactivated successfully. Sep 9 00:59:35.100519 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:59:35.101219 systemd-logind[1590]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:59:35.103773 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:57536.service - OpenSSH per-connection server daemon (139.178.68.195:57536). Sep 9 00:59:35.104951 systemd-logind[1590]: Removed session 13. Sep 9 00:59:35.141908 sshd[4328]: Accepted publickey for core from 139.178.68.195 port 57536 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:35.142828 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:35.145868 systemd-logind[1590]: New session 14 of user core. Sep 9 00:59:35.153568 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:59:35.343602 sshd[4331]: Connection closed by 139.178.68.195 port 57536 Sep 9 00:59:35.344218 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:35.354012 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:57536.service: Deactivated successfully. Sep 9 00:59:35.355858 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:59:35.356573 systemd-logind[1590]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:59:35.359872 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:57542.service - OpenSSH per-connection server daemon (139.178.68.195:57542). Sep 9 00:59:35.360973 systemd-logind[1590]: Removed session 14. 
Sep 9 00:59:35.399320 sshd[4340]: Accepted publickey for core from 139.178.68.195 port 57542 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:35.400531 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:35.404147 systemd-logind[1590]: New session 15 of user core. Sep 9 00:59:35.409608 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:59:35.538923 sshd[4344]: Connection closed by 139.178.68.195 port 57542 Sep 9 00:59:35.539277 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:35.541599 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:57542.service: Deactivated successfully. Sep 9 00:59:35.542949 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:59:35.543602 systemd-logind[1590]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:59:35.544398 systemd-logind[1590]: Removed session 15. Sep 9 00:59:40.552773 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:34092.service - OpenSSH per-connection server daemon (139.178.68.195:34092). Sep 9 00:59:40.595345 sshd[4356]: Accepted publickey for core from 139.178.68.195 port 34092 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:40.596216 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:40.598961 systemd-logind[1590]: New session 16 of user core. Sep 9 00:59:40.609805 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:59:40.699486 sshd[4359]: Connection closed by 139.178.68.195 port 34092 Sep 9 00:59:40.700673 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:40.702935 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:34092.service: Deactivated successfully. Sep 9 00:59:40.704108 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:59:40.705888 systemd-logind[1590]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:59:40.707077 systemd-logind[1590]: Removed session 16. Sep 9 00:59:45.710592 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.68.195:34108.service - OpenSSH per-connection server daemon (139.178.68.195:34108). Sep 9 00:59:45.749609 sshd[4372]: Accepted publickey for core from 139.178.68.195 port 34108 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:45.750528 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:45.753480 systemd-logind[1590]: New session 17 of user core. Sep 9 00:59:45.762599 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:59:45.853030 sshd[4375]: Connection closed by 139.178.68.195 port 34108 Sep 9 00:59:45.853933 sshd-session[4372]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:45.859568 systemd[1]: sshd@14-139.178.70.105:22-139.178.68.195:34108.service: Deactivated successfully. Sep 9 00:59:45.861000 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:59:45.861659 systemd-logind[1590]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:59:45.863660 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.68.195:34118.service - OpenSSH per-connection server daemon (139.178.68.195:34118). Sep 9 00:59:45.865876 systemd-logind[1590]: Removed session 17. 
Sep 9 00:59:45.896214 sshd[4387]: Accepted publickey for core from 139.178.68.195 port 34118 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:45.896753 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:45.899277 systemd-logind[1590]: New session 18 of user core. Sep 9 00:59:45.904539 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:59:46.602999 sshd[4390]: Connection closed by 139.178.68.195 port 34118 Sep 9 00:59:46.603761 sshd-session[4387]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:46.609411 systemd[1]: sshd@15-139.178.70.105:22-139.178.68.195:34118.service: Deactivated successfully. Sep 9 00:59:46.610787 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:59:46.611494 systemd-logind[1590]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:59:46.613987 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.68.195:34132.service - OpenSSH per-connection server daemon (139.178.68.195:34132). Sep 9 00:59:46.614774 systemd-logind[1590]: Removed session 18. Sep 9 00:59:46.760470 sshd[4401]: Accepted publickey for core from 139.178.68.195 port 34132 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:46.761350 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:46.764282 systemd-logind[1590]: New session 19 of user core. Sep 9 00:59:46.772582 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:59:47.762366 sshd[4404]: Connection closed by 139.178.68.195 port 34132 Sep 9 00:59:47.761782 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:47.774368 systemd[1]: sshd@16-139.178.70.105:22-139.178.68.195:34132.service: Deactivated successfully. Sep 9 00:59:47.776392 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:59:47.777293 systemd-logind[1590]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:59:47.780670 systemd[1]: Started sshd@17-139.178.70.105:22-139.178.68.195:34146.service - OpenSSH per-connection server daemon (139.178.68.195:34146). Sep 9 00:59:47.782812 systemd-logind[1590]: Removed session 19. Sep 9 00:59:47.824264 sshd[4422]: Accepted publickey for core from 139.178.68.195 port 34146 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:47.824910 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:47.827985 systemd-logind[1590]: New session 20 of user core. Sep 9 00:59:47.833609 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:59:48.027590 sshd[4425]: Connection closed by 139.178.68.195 port 34146 Sep 9 00:59:48.030223 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:48.036688 systemd[1]: sshd@17-139.178.70.105:22-139.178.68.195:34146.service: Deactivated successfully. Sep 9 00:59:48.038236 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:59:48.039006 systemd-logind[1590]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:59:48.042815 systemd[1]: Started sshd@18-139.178.70.105:22-139.178.68.195:34152.service - OpenSSH per-connection server daemon (139.178.68.195:34152). Sep 9 00:59:48.045822 systemd-logind[1590]: Removed session 20. 
Sep 9 00:59:48.079138 sshd[4435]: Accepted publickey for core from 139.178.68.195 port 34152 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:48.080138 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:48.083569 systemd-logind[1590]: New session 21 of user core. Sep 9 00:59:48.092627 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:59:48.190477 sshd[4438]: Connection closed by 139.178.68.195 port 34152 Sep 9 00:59:48.190979 sshd-session[4435]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:48.193717 systemd-logind[1590]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:59:48.193939 systemd[1]: sshd@18-139.178.70.105:22-139.178.68.195:34152.service: Deactivated successfully. Sep 9 00:59:48.195596 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:59:48.196735 systemd-logind[1590]: Removed session 21. Sep 9 00:59:53.200355 systemd[1]: Started sshd@19-139.178.70.105:22-139.178.68.195:52350.service - OpenSSH per-connection server daemon (139.178.68.195:52350). Sep 9 00:59:53.244480 sshd[4449]: Accepted publickey for core from 139.178.68.195 port 52350 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:53.245588 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:53.248713 systemd-logind[1590]: New session 22 of user core. Sep 9 00:59:53.252528 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:59:53.377376 sshd[4452]: Connection closed by 139.178.68.195 port 52350 Sep 9 00:59:53.377741 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:53.380604 systemd[1]: sshd@19-139.178.70.105:22-139.178.68.195:52350.service: Deactivated successfully. Sep 9 00:59:53.382187 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:59:53.382860 systemd-logind[1590]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:59:53.383683 systemd-logind[1590]: Removed session 22. Sep 9 00:59:58.388105 systemd[1]: Started sshd@20-139.178.70.105:22-139.178.68.195:52366.service - OpenSSH per-connection server daemon (139.178.68.195:52366). Sep 9 00:59:58.427486 sshd[4466]: Accepted publickey for core from 139.178.68.195 port 52366 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 00:59:58.428644 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:59:58.432591 systemd-logind[1590]: New session 23 of user core. Sep 9 00:59:58.439583 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:59:58.544023 sshd[4469]: Connection closed by 139.178.68.195 port 52366 Sep 9 00:59:58.544465 sshd-session[4466]: pam_unix(sshd:session): session closed for user core Sep 9 00:59:58.546547 systemd-logind[1590]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:59:58.546735 systemd[1]: sshd@20-139.178.70.105:22-139.178.68.195:52366.service: Deactivated successfully. Sep 9 00:59:58.548017 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:59:58.549314 systemd-logind[1590]: Removed session 23. Sep 9 01:00:03.555612 systemd[1]: Started sshd@21-139.178.70.105:22-139.178.68.195:46028.service - OpenSSH per-connection server daemon (139.178.68.195:46028). 
Sep 9 01:00:03.597368 sshd[4483]: Accepted publickey for core from 139.178.68.195 port 46028 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 01:00:03.598307 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 01:00:03.602010 systemd-logind[1590]: New session 24 of user core. Sep 9 01:00:03.604523 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 01:00:03.691405 sshd[4486]: Connection closed by 139.178.68.195 port 46028 Sep 9 01:00:03.691742 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Sep 9 01:00:03.694493 systemd-logind[1590]: Session 24 logged out. Waiting for processes to exit. Sep 9 01:00:03.694673 systemd[1]: sshd@21-139.178.70.105:22-139.178.68.195:46028.service: Deactivated successfully. Sep 9 01:00:03.695723 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 01:00:03.696732 systemd-logind[1590]: Removed session 24. Sep 9 01:00:08.704228 systemd[1]: Started sshd@22-139.178.70.105:22-139.178.68.195:46042.service - OpenSSH per-connection server daemon (139.178.68.195:46042). Sep 9 01:00:08.735596 sshd[4499]: Accepted publickey for core from 139.178.68.195 port 46042 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 01:00:08.736468 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 01:00:08.739036 systemd-logind[1590]: New session 25 of user core. Sep 9 01:00:08.746767 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 01:00:08.857475 sshd[4502]: Connection closed by 139.178.68.195 port 46042 Sep 9 01:00:08.856782 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Sep 9 01:00:08.865153 systemd[1]: sshd@22-139.178.70.105:22-139.178.68.195:46042.service: Deactivated successfully. Sep 9 01:00:08.866405 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 01:00:08.867245 systemd-logind[1590]: Session 25 logged out. Waiting for processes to exit. Sep 9 01:00:08.868530 systemd-logind[1590]: Removed session 25. Sep 9 01:00:08.869819 systemd[1]: Started sshd@23-139.178.70.105:22-139.178.68.195:46052.service - OpenSSH per-connection server daemon (139.178.68.195:46052). Sep 9 01:00:08.905486 sshd[4514]: Accepted publickey for core from 139.178.68.195 port 46052 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 01:00:08.906359 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 01:00:08.909482 systemd-logind[1590]: New session 26 of user core. Sep 9 01:00:08.916602 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 01:00:10.247997 containerd[1638]: time="2025-09-09T01:00:10.247829071Z" level=info msg="StopContainer for \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" with timeout 30 (s)" Sep 9 01:00:10.253790 containerd[1638]: time="2025-09-09T01:00:10.253618861Z" level=info msg="Stop container \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" with signal terminated" Sep 9 01:00:10.272937 systemd[1]: cri-containerd-f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3.scope: Deactivated successfully. 
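The StopContainer entry just above ("with timeout 30 (s)" followed by "with signal terminated") is the CRI graceful-stop path: the runtime delivers SIGTERM and escalates to SIGKILL only if the container is still running when the grace period expires. A minimal sketch of issuing that call directly, with the socket path and container ID as placeholders (StopPodSandbox, seen further below, follows the same request pattern):

// Sketch of the CRI StopContainer call that produces the
// "StopContainer ... with timeout 30 (s)" / "with signal terminated" entries above.
// Socket path and container ID are placeholders.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	// Timeout is the grace period in seconds before the runtime force-kills the task,
	// matching the "with timeout 30 (s)" wording in the log.
	if _, err := rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "<container-id>",
		Timeout:     30,
	}); err != nil {
		log.Fatal(err)
	}
}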
Sep 9 01:00:10.275803 containerd[1638]: time="2025-09-09T01:00:10.275764764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" id:\"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" pid:3502 exited_at:{seconds:1757379610 nanos:274993822}" Sep 9 01:00:10.276032 containerd[1638]: time="2025-09-09T01:00:10.276004306Z" level=info msg="received exit event container_id:\"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" id:\"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" pid:3502 exited_at:{seconds:1757379610 nanos:274993822}" Sep 9 01:00:10.287631 containerd[1638]: time="2025-09-09T01:00:10.287602173Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 01:00:10.296833 containerd[1638]: time="2025-09-09T01:00:10.295871871Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" id:\"14f373cf8b9c79d7dc120a90f951c665d200cc1284f5d2fc3824a461bcc18d7e\" pid:4536 exited_at:{seconds:1757379610 nanos:295646236}" Sep 9 01:00:10.296154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3-rootfs.mount: Deactivated successfully. Sep 9 01:00:10.300087 containerd[1638]: time="2025-09-09T01:00:10.300050285Z" level=info msg="StopContainer for \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" with timeout 2 (s)" Sep 9 01:00:10.300600 containerd[1638]: time="2025-09-09T01:00:10.300478949Z" level=info msg="Stop container \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" with signal terminated" Sep 9 01:00:10.301720 containerd[1638]: time="2025-09-09T01:00:10.301697100Z" level=info msg="StopContainer for \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" returns successfully" Sep 9 01:00:10.302238 containerd[1638]: time="2025-09-09T01:00:10.302045682Z" level=info msg="StopPodSandbox for \"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\"" Sep 9 01:00:10.302238 containerd[1638]: time="2025-09-09T01:00:10.302089619Z" level=info msg="Container to stop \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 01:00:10.307204 systemd-networkd[1545]: lxc_health: Link DOWN Sep 9 01:00:10.307642 systemd-networkd[1545]: lxc_health: Lost carrier Sep 9 01:00:10.310743 systemd[1]: cri-containerd-ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596.scope: Deactivated successfully. Sep 9 01:00:10.317142 containerd[1638]: time="2025-09-09T01:00:10.317118020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\" id:\"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\" pid:3174 exit_status:137 exited_at:{seconds:1757379610 nanos:316754535}" Sep 9 01:00:10.326074 systemd[1]: cri-containerd-9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b.scope: Deactivated successfully. 
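The "failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error above comes from containerd watching the CNI configuration directory and finding no usable network config left once the Cilium conf file is deleted during teardown. The following is an illustrative sketch of that kind of directory watch using the fsnotify library; the path and the "reload" reaction here are assumptions for demonstration, not containerd's actual implementation.

// Watch a CNI configuration directory and react to REMOVE events,
// loosely mirroring the fs change event mentioned in the log above.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Assumed CNI conf directory, matching the path quoted in the log entry.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return
			}
			// Removing the only conf file leaves no network config, which is what the
			// "cni plugin not initialized" error above is ultimately complaining about.
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("cni conf removed: %s; attempting reload", ev.Name)
			}
		case werr, ok := <-w.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", werr)
		}
	}
}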
Sep 9 01:00:10.326298 systemd[1]: cri-containerd-9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b.scope: Consumed 4.602s CPU time, 228.7M memory peak, 108.3M read from disk, 13.3M written to disk. Sep 9 01:00:10.331231 containerd[1638]: time="2025-09-09T01:00:10.328660427Z" level=info msg="received exit event container_id:\"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" id:\"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" pid:3574 exited_at:{seconds:1757379610 nanos:328227369}" Sep 9 01:00:10.350644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b-rootfs.mount: Deactivated successfully. Sep 9 01:00:10.358233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596-rootfs.mount: Deactivated successfully. Sep 9 01:00:10.364070 containerd[1638]: time="2025-09-09T01:00:10.364024272Z" level=info msg="shim disconnected" id=ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596 namespace=k8s.io Sep 9 01:00:10.364070 containerd[1638]: time="2025-09-09T01:00:10.364059368Z" level=warning msg="cleaning up after shim disconnected" id=ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596 namespace=k8s.io Sep 9 01:00:10.375119 containerd[1638]: time="2025-09-09T01:00:10.364064730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 01:00:10.392281 containerd[1638]: time="2025-09-09T01:00:10.367822593Z" level=info msg="StopContainer for \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" returns successfully" Sep 9 01:00:10.392281 containerd[1638]: time="2025-09-09T01:00:10.375701547Z" level=info msg="StopPodSandbox for \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\"" Sep 9 01:00:10.392281 containerd[1638]: time="2025-09-09T01:00:10.375754606Z" level=info msg="Container to stop \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 01:00:10.392281 containerd[1638]: time="2025-09-09T01:00:10.375762066Z" level=info msg="Container to stop \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 01:00:10.392281 containerd[1638]: time="2025-09-09T01:00:10.375768062Z" level=info msg="Container to stop \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 01:00:10.392281 containerd[1638]: time="2025-09-09T01:00:10.375774071Z" level=info msg="Container to stop \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 01:00:10.392281 containerd[1638]: time="2025-09-09T01:00:10.375781590Z" level=info msg="Container to stop \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 01:00:10.380332 systemd[1]: cri-containerd-189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978.scope: Deactivated successfully. Sep 9 01:00:10.398036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978-rootfs.mount: Deactivated successfully. 
Sep 9 01:00:10.413501 containerd[1638]: time="2025-09-09T01:00:10.413473643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" id:\"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" pid:3574 exited_at:{seconds:1757379610 nanos:328227369}" Sep 9 01:00:10.414916 containerd[1638]: time="2025-09-09T01:00:10.414656462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" id:\"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" pid:3084 exit_status:137 exited_at:{seconds:1757379610 nanos:386581029}" Sep 9 01:00:10.414662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596-shm.mount: Deactivated successfully. Sep 9 01:00:10.415534 containerd[1638]: time="2025-09-09T01:00:10.415212404Z" level=info msg="TearDown network for sandbox \"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\" successfully" Sep 9 01:00:10.415534 containerd[1638]: time="2025-09-09T01:00:10.415228554Z" level=info msg="StopPodSandbox for \"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\" returns successfully" Sep 9 01:00:10.418520 containerd[1638]: time="2025-09-09T01:00:10.418501614Z" level=info msg="received exit event sandbox_id:\"ede3596b58af7193d91d1c66b6176ded23b8bade47b63df3315cd47ada5ce596\" exit_status:137 exited_at:{seconds:1757379610 nanos:316754535}" Sep 9 01:00:10.441490 containerd[1638]: time="2025-09-09T01:00:10.441465445Z" level=info msg="received exit event sandbox_id:\"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" exit_status:137 exited_at:{seconds:1757379610 nanos:386581029}" Sep 9 01:00:10.443648 containerd[1638]: time="2025-09-09T01:00:10.441887221Z" level=info msg="shim disconnected" id=189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978 namespace=k8s.io Sep 9 01:00:10.443894 containerd[1638]: time="2025-09-09T01:00:10.441933669Z" level=info msg="TearDown network for sandbox \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" successfully" Sep 9 01:00:10.444040 containerd[1638]: time="2025-09-09T01:00:10.443985589Z" level=info msg="StopPodSandbox for \"189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978\" returns successfully" Sep 9 01:00:10.444114 containerd[1638]: time="2025-09-09T01:00:10.444086586Z" level=warning msg="cleaning up after shim disconnected" id=189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978 namespace=k8s.io Sep 9 01:00:10.444141 containerd[1638]: time="2025-09-09T01:00:10.444099672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 01:00:10.559911 kubelet[2947]: E0909 01:00:10.559867 2947 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 01:00:10.577263 kubelet[2947]: I0909 01:00:10.577229 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-cgroup\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577263 kubelet[2947]: I0909 01:00:10.577263 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-bpf-maps\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577376 kubelet[2947]: I0909 01:00:10.577278 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6m48\" (UniqueName: \"kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-kube-api-access-q6m48\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577376 kubelet[2947]: I0909 01:00:10.577291 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f38568a6-588d-413d-b7a8-9f3d2da27f6a-cilium-config-path\") pod \"f38568a6-588d-413d-b7a8-9f3d2da27f6a\" (UID: \"f38568a6-588d-413d-b7a8-9f3d2da27f6a\") " Sep 9 01:00:10.577376 kubelet[2947]: I0909 01:00:10.577314 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa5b4bb1-fa79-45cb-888e-d3826e4219db-clustermesh-secrets\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577376 kubelet[2947]: I0909 01:00:10.577325 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hubble-tls\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577376 kubelet[2947]: I0909 01:00:10.577339 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-etc-cni-netd\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577376 kubelet[2947]: I0909 01:00:10.577347 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-xtables-lock\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577520 kubelet[2947]: I0909 01:00:10.577357 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-run\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577520 kubelet[2947]: I0909 01:00:10.577366 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-lib-modules\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577520 kubelet[2947]: I0909 01:00:10.577388 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-net\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577520 kubelet[2947]: I0909 01:00:10.577396 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hostproc\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577520 kubelet[2947]: I0909 01:00:10.577404 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-kernel\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577520 kubelet[2947]: I0909 01:00:10.577417 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-config-path\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577616 kubelet[2947]: I0909 01:00:10.577426 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdkkf\" (UniqueName: \"kubernetes.io/projected/f38568a6-588d-413d-b7a8-9f3d2da27f6a-kube-api-access-cdkkf\") pod \"f38568a6-588d-413d-b7a8-9f3d2da27f6a\" (UID: \"f38568a6-588d-413d-b7a8-9f3d2da27f6a\") " Sep 9 01:00:10.577616 kubelet[2947]: I0909 01:00:10.577437 2947 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cni-path\") pod \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\" (UID: \"aa5b4bb1-fa79-45cb-888e-d3826e4219db\") " Sep 9 01:00:10.577616 kubelet[2947]: I0909 01:00:10.577508 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cni-path" (OuterVolumeSpecName: "cni-path") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.577616 kubelet[2947]: I0909 01:00:10.577545 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.577616 kubelet[2947]: I0909 01:00:10.577557 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.578458 kubelet[2947]: I0909 01:00:10.577716 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.578847 kubelet[2947]: I0909 01:00:10.578832 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38568a6-588d-413d-b7a8-9f3d2da27f6a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f38568a6-588d-413d-b7a8-9f3d2da27f6a" (UID: "f38568a6-588d-413d-b7a8-9f3d2da27f6a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 01:00:10.582497 kubelet[2947]: I0909 01:00:10.578900 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.582596 kubelet[2947]: I0909 01:00:10.578909 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.582629 kubelet[2947]: I0909 01:00:10.578916 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hostproc" (OuterVolumeSpecName: "hostproc") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.582665 kubelet[2947]: I0909 01:00:10.578922 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.582694 kubelet[2947]: I0909 01:00:10.579982 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 01:00:10.582722 kubelet[2947]: I0909 01:00:10.581788 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.583842 kubelet[2947]: I0909 01:00:10.583734 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38568a6-588d-413d-b7a8-9f3d2da27f6a-kube-api-access-cdkkf" (OuterVolumeSpecName: "kube-api-access-cdkkf") pod "f38568a6-588d-413d-b7a8-9f3d2da27f6a" (UID: "f38568a6-588d-413d-b7a8-9f3d2da27f6a"). InnerVolumeSpecName "kube-api-access-cdkkf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 01:00:10.583842 kubelet[2947]: I0909 01:00:10.583787 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5b4bb1-fa79-45cb-888e-d3826e4219db-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 01:00:10.583842 kubelet[2947]: I0909 01:00:10.583828 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-kube-api-access-q6m48" (OuterVolumeSpecName: "kube-api-access-q6m48") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "kube-api-access-q6m48". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 01:00:10.584898 kubelet[2947]: I0909 01:00:10.584888 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 01:00:10.586308 kubelet[2947]: I0909 01:00:10.581803 2947 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa5b4bb1-fa79-45cb-888e-d3826e4219db" (UID: "aa5b4bb1-fa79-45cb-888e-d3826e4219db"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 01:00:10.678467 kubelet[2947]: I0909 01:00:10.678405 2947 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678476 2947 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678486 2947 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cdkkf\" (UniqueName: \"kubernetes.io/projected/f38568a6-588d-413d-b7a8-9f3d2da27f6a-kube-api-access-cdkkf\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678492 2947 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678498 2947 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678504 2947 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678510 2947 reconciler_common.go:299] "Volume detached for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678516 2947 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678550 kubelet[2947]: I0909 01:00:10.678521 2947 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f38568a6-588d-413d-b7a8-9f3d2da27f6a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678704 kubelet[2947]: I0909 01:00:10.678539 2947 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678704 kubelet[2947]: I0909 01:00:10.678545 2947 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678704 kubelet[2947]: I0909 01:00:10.678551 2947 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q6m48\" (UniqueName: \"kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-kube-api-access-q6m48\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678704 kubelet[2947]: I0909 01:00:10.678557 2947 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa5b4bb1-fa79-45cb-888e-d3826e4219db-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678704 kubelet[2947]: I0909 01:00:10.678563 2947 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa5b4bb1-fa79-45cb-888e-d3826e4219db-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678704 kubelet[2947]: I0909 01:00:10.678568 2947 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.678704 kubelet[2947]: I0909 01:00:10.678572 2947 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa5b4bb1-fa79-45cb-888e-d3826e4219db-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 01:00:10.771656 kubelet[2947]: I0909 01:00:10.771622 2947 scope.go:117] "RemoveContainer" containerID="9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b" Sep 9 01:00:10.774275 containerd[1638]: time="2025-09-09T01:00:10.774204499Z" level=info msg="RemoveContainer for \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\"" Sep 9 01:00:10.777544 systemd[1]: Removed slice kubepods-burstable-podaa5b4bb1_fa79_45cb_888e_d3826e4219db.slice - libcontainer container kubepods-burstable-podaa5b4bb1_fa79_45cb_888e_d3826e4219db.slice. Sep 9 01:00:10.777613 systemd[1]: kubepods-burstable-podaa5b4bb1_fa79_45cb_888e_d3826e4219db.slice: Consumed 4.663s CPU time, 229.6M memory peak, 109.4M read from disk, 13.3M written to disk. Sep 9 01:00:10.779836 systemd[1]: Removed slice kubepods-besteffort-podf38568a6_588d_413d_b7a8_9f3d2da27f6a.slice - libcontainer container kubepods-besteffort-podf38568a6_588d_413d_b7a8_9f3d2da27f6a.slice. 
Sep 9 01:00:10.780460 containerd[1638]: time="2025-09-09T01:00:10.780427038Z" level=info msg="RemoveContainer for \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" returns successfully" Sep 9 01:00:10.781189 kubelet[2947]: I0909 01:00:10.781172 2947 scope.go:117] "RemoveContainer" containerID="af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda" Sep 9 01:00:10.783299 containerd[1638]: time="2025-09-09T01:00:10.783278847Z" level=info msg="RemoveContainer for \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\"" Sep 9 01:00:10.786165 containerd[1638]: time="2025-09-09T01:00:10.786112369Z" level=info msg="RemoveContainer for \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" returns successfully" Sep 9 01:00:10.786419 kubelet[2947]: I0909 01:00:10.786399 2947 scope.go:117] "RemoveContainer" containerID="2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a" Sep 9 01:00:10.789664 containerd[1638]: time="2025-09-09T01:00:10.789521607Z" level=info msg="RemoveContainer for \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\"" Sep 9 01:00:10.793364 containerd[1638]: time="2025-09-09T01:00:10.793324429Z" level=info msg="RemoveContainer for \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" returns successfully" Sep 9 01:00:10.793703 kubelet[2947]: I0909 01:00:10.793671 2947 scope.go:117] "RemoveContainer" containerID="3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c" Sep 9 01:00:10.795733 containerd[1638]: time="2025-09-09T01:00:10.795674605Z" level=info msg="RemoveContainer for \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\"" Sep 9 01:00:10.798431 containerd[1638]: time="2025-09-09T01:00:10.798404318Z" level=info msg="RemoveContainer for \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" returns successfully" Sep 9 01:00:10.798975 kubelet[2947]: I0909 01:00:10.798714 2947 scope.go:117] "RemoveContainer" containerID="22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb" Sep 9 01:00:10.800608 containerd[1638]: time="2025-09-09T01:00:10.800567258Z" level=info msg="RemoveContainer for \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\"" Sep 9 01:00:10.802022 containerd[1638]: time="2025-09-09T01:00:10.802004368Z" level=info msg="RemoveContainer for \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" returns successfully" Sep 9 01:00:10.802148 kubelet[2947]: I0909 01:00:10.802136 2947 scope.go:117] "RemoveContainer" containerID="9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b" Sep 9 01:00:10.802255 containerd[1638]: time="2025-09-09T01:00:10.802224729Z" level=error msg="ContainerStatus for \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\": not found" Sep 9 01:00:10.802373 kubelet[2947]: E0909 01:00:10.802316 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\": not found" containerID="9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b" Sep 9 01:00:10.802522 kubelet[2947]: I0909 01:00:10.802334 2947 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b"} err="failed to get container status \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d1ae07599608ee0c2b15c7cf1823d6ffc71711c864ff2cf3fa547cc7f9b836b\": not found" Sep 9 01:00:10.802522 kubelet[2947]: I0909 01:00:10.802484 2947 scope.go:117] "RemoveContainer" containerID="af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda" Sep 9 01:00:10.802683 containerd[1638]: time="2025-09-09T01:00:10.802625491Z" level=error msg="ContainerStatus for \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\": not found" Sep 9 01:00:10.810118 kubelet[2947]: E0909 01:00:10.809500 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\": not found" containerID="af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda" Sep 9 01:00:10.810118 kubelet[2947]: I0909 01:00:10.809516 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda"} err="failed to get container status \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\": rpc error: code = NotFound desc = an error occurred when try to find container \"af152aad96b22a2de98651dd1e6d068ef653cc493cc0677f91fa3d358a4d4cda\": not found" Sep 9 01:00:10.810118 kubelet[2947]: I0909 01:00:10.809527 2947 scope.go:117] "RemoveContainer" containerID="2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a" Sep 9 01:00:10.810118 kubelet[2947]: E0909 01:00:10.809660 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\": not found" containerID="2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a" Sep 9 01:00:10.810118 kubelet[2947]: I0909 01:00:10.809670 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a"} err="failed to get container status \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\": not found" Sep 9 01:00:10.810118 kubelet[2947]: I0909 01:00:10.809678 2947 scope.go:117] "RemoveContainer" containerID="3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c" Sep 9 01:00:10.810237 containerd[1638]: time="2025-09-09T01:00:10.809610251Z" level=error msg="ContainerStatus for \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2366c98ec46442a034a2dfc2957eedbcd8cc86a017214f26479c76cf4de5b13a\": not found" Sep 9 01:00:10.810237 containerd[1638]: time="2025-09-09T01:00:10.809740153Z" level=error msg="ContainerStatus for \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\": not found" Sep 9 01:00:10.810237 containerd[1638]: time="2025-09-09T01:00:10.809864089Z" level=error msg="ContainerStatus for \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\": not found" Sep 9 01:00:10.810293 kubelet[2947]: E0909 01:00:10.809785 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\": not found" containerID="3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c" Sep 9 01:00:10.810293 kubelet[2947]: I0909 01:00:10.809795 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c"} err="failed to get container status \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"3346a83d69e61fed4424d1edd7f0b19fe04a55b8b8d9a4c137b022581f3bba4c\": not found" Sep 9 01:00:10.810293 kubelet[2947]: I0909 01:00:10.809803 2947 scope.go:117] "RemoveContainer" containerID="22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb" Sep 9 01:00:10.810293 kubelet[2947]: E0909 01:00:10.809909 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\": not found" containerID="22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb" Sep 9 01:00:10.810293 kubelet[2947]: I0909 01:00:10.809918 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb"} err="failed to get container status \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"22316139513ff270f41b2c218a1d420f3a3b5095fe4b20addc2603e5e1f4b9eb\": not found" Sep 9 01:00:10.810293 kubelet[2947]: I0909 01:00:10.809927 2947 scope.go:117] "RemoveContainer" containerID="f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3" Sep 9 01:00:10.811252 containerd[1638]: time="2025-09-09T01:00:10.811005084Z" level=info msg="RemoveContainer for \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\"" Sep 9 01:00:10.813855 containerd[1638]: time="2025-09-09T01:00:10.813843616Z" level=info msg="RemoveContainer for \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" returns successfully" Sep 9 01:00:10.813994 kubelet[2947]: I0909 01:00:10.813981 2947 scope.go:117] "RemoveContainer" containerID="f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3" Sep 9 01:00:10.814088 containerd[1638]: time="2025-09-09T01:00:10.814070572Z" level=error msg="ContainerStatus for \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\": not found" Sep 9 01:00:10.814179 kubelet[2947]: E0909 
01:00:10.814166 2947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\": not found" containerID="f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3" Sep 9 01:00:10.814208 kubelet[2947]: I0909 01:00:10.814179 2947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3"} err="failed to get container status \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f186228d8ec9f725b4443851a0748c4c87709aa5d891e70402c4cda6a3ad3cf3\": not found" Sep 9 01:00:11.295819 systemd[1]: var-lib-kubelet-pods-f38568a6\x2d588d\x2d413d\x2db7a8\x2d9f3d2da27f6a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcdkkf.mount: Deactivated successfully. Sep 9 01:00:11.295933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-189adff7cb5524c20e2b626131b173ee2a657b8a6cfeb7deae2dbde54c2dc978-shm.mount: Deactivated successfully. Sep 9 01:00:11.296017 systemd[1]: var-lib-kubelet-pods-aa5b4bb1\x2dfa79\x2d45cb\x2d888e\x2dd3826e4219db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6m48.mount: Deactivated successfully. Sep 9 01:00:11.296072 systemd[1]: var-lib-kubelet-pods-aa5b4bb1\x2dfa79\x2d45cb\x2d888e\x2dd3826e4219db-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 01:00:11.296123 systemd[1]: var-lib-kubelet-pods-aa5b4bb1\x2dfa79\x2d45cb\x2d888e\x2dd3826e4219db-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 01:00:11.458669 kubelet[2947]: I0909 01:00:11.458385 2947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5b4bb1-fa79-45cb-888e-d3826e4219db" path="/var/lib/kubelet/pods/aa5b4bb1-fa79-45cb-888e-d3826e4219db/volumes" Sep 9 01:00:11.458827 kubelet[2947]: I0909 01:00:11.458815 2947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f38568a6-588d-413d-b7a8-9f3d2da27f6a" path="/var/lib/kubelet/pods/f38568a6-588d-413d-b7a8-9f3d2da27f6a/volumes" Sep 9 01:00:12.202809 sshd[4517]: Connection closed by 139.178.68.195 port 46052 Sep 9 01:00:12.204228 sshd-session[4514]: pam_unix(sshd:session): session closed for user core Sep 9 01:00:12.210781 systemd[1]: sshd@23-139.178.70.105:22-139.178.68.195:46052.service: Deactivated successfully. Sep 9 01:00:12.212093 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 01:00:12.212855 systemd-logind[1590]: Session 26 logged out. Waiting for processes to exit. Sep 9 01:00:12.215160 systemd[1]: Started sshd@24-139.178.70.105:22-139.178.68.195:55408.service - OpenSSH per-connection server daemon (139.178.68.195:55408). Sep 9 01:00:12.216214 systemd-logind[1590]: Removed session 26. Sep 9 01:00:12.258375 sshd[4670]: Accepted publickey for core from 139.178.68.195 port 55408 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 01:00:12.259262 sshd-session[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 01:00:12.263067 systemd-logind[1590]: New session 27 of user core. Sep 9 01:00:12.274601 systemd[1]: Started session-27.scope - Session 27 of User core. 
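The RemoveContainer and ContainerStatus entries above illustrate a common CRI pattern: once a container has been removed, asking the runtime for its status returns gRPC NotFound, which kubelet records as the "DeleteContainer returned error ... not found" messages rather than treating the deletion as a failure. A minimal sketch of that sequence against the CRI API, with the socket path and container ID as placeholders:

// Remove a container, then show that a follow-up ContainerStatus query fails with
// gRPC NotFound, as in the kubelet/containerd entries above.
// Socket path and container ID are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	id := "<container-id>"
	if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id}); err != nil {
		log.Fatal(err)
	}

	// Querying a removed container is expected to fail with NotFound; treat it as "already gone".
	_, err = rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		fmt.Println("container already removed:", id)
	} else if err != nil {
		log.Fatal(err)
	}
}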
Sep 9 01:00:12.580405 sshd[4673]: Connection closed by 139.178.68.195 port 55408 Sep 9 01:00:12.580249 sshd-session[4670]: pam_unix(sshd:session): session closed for user core Sep 9 01:00:12.589180 systemd[1]: sshd@24-139.178.70.105:22-139.178.68.195:55408.service: Deactivated successfully. Sep 9 01:00:12.590959 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 01:00:12.593563 systemd-logind[1590]: Session 27 logged out. Waiting for processes to exit. Sep 9 01:00:12.596965 systemd[1]: Started sshd@25-139.178.70.105:22-139.178.68.195:55418.service - OpenSSH per-connection server daemon (139.178.68.195:55418). Sep 9 01:00:12.599051 systemd-logind[1590]: Removed session 27. Sep 9 01:00:12.600045 kubelet[2947]: I0909 01:00:12.599789 2947 memory_manager.go:355] "RemoveStaleState removing state" podUID="f38568a6-588d-413d-b7a8-9f3d2da27f6a" containerName="cilium-operator" Sep 9 01:00:12.600045 kubelet[2947]: I0909 01:00:12.599806 2947 memory_manager.go:355] "RemoveStaleState removing state" podUID="aa5b4bb1-fa79-45cb-888e-d3826e4219db" containerName="cilium-agent" Sep 9 01:00:12.613131 systemd[1]: Created slice kubepods-burstable-pod4832127e_68c0_49f3_bf71_83cc822f0e80.slice - libcontainer container kubepods-burstable-pod4832127e_68c0_49f3_bf71_83cc822f0e80.slice. Sep 9 01:00:12.647245 sshd[4683]: Accepted publickey for core from 139.178.68.195 port 55418 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 01:00:12.649063 sshd-session[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 01:00:12.654713 systemd-logind[1590]: New session 28 of user core. Sep 9 01:00:12.659647 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 01:00:12.689472 kubelet[2947]: I0909 01:00:12.689406 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-lib-modules\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689472 kubelet[2947]: I0909 01:00:12.689441 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4832127e-68c0-49f3-bf71-83cc822f0e80-clustermesh-secrets\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689472 kubelet[2947]: I0909 01:00:12.689467 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-cilium-run\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689612 kubelet[2947]: I0909 01:00:12.689484 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9szwq\" (UniqueName: \"kubernetes.io/projected/4832127e-68c0-49f3-bf71-83cc822f0e80-kube-api-access-9szwq\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689612 kubelet[2947]: I0909 01:00:12.689498 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-cni-path\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") 
" pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689612 kubelet[2947]: I0909 01:00:12.689509 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-etc-cni-netd\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689612 kubelet[2947]: I0909 01:00:12.689525 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-host-proc-sys-net\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689612 kubelet[2947]: I0909 01:00:12.689537 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-bpf-maps\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689612 kubelet[2947]: I0909 01:00:12.689546 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4832127e-68c0-49f3-bf71-83cc822f0e80-hubble-tls\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689721 kubelet[2947]: I0909 01:00:12.689557 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-cilium-cgroup\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689721 kubelet[2947]: I0909 01:00:12.689566 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4832127e-68c0-49f3-bf71-83cc822f0e80-cilium-config-path\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689721 kubelet[2947]: I0909 01:00:12.689576 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-host-proc-sys-kernel\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689721 kubelet[2947]: I0909 01:00:12.689586 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-hostproc\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689721 kubelet[2947]: I0909 01:00:12.689593 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4832127e-68c0-49f3-bf71-83cc822f0e80-xtables-lock\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.689721 kubelet[2947]: I0909 01:00:12.689602 2947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4832127e-68c0-49f3-bf71-83cc822f0e80-cilium-ipsec-secrets\") pod \"cilium-kvbkk\" (UID: \"4832127e-68c0-49f3-bf71-83cc822f0e80\") " pod="kube-system/cilium-kvbkk" Sep 9 01:00:12.707492 sshd[4686]: Connection closed by 139.178.68.195 port 55418 Sep 9 01:00:12.709046 sshd-session[4683]: pam_unix(sshd:session): session closed for user core Sep 9 01:00:12.715713 systemd[1]: sshd@25-139.178.70.105:22-139.178.68.195:55418.service: Deactivated successfully. Sep 9 01:00:12.717812 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 01:00:12.719365 systemd-logind[1590]: Session 28 logged out. Waiting for processes to exit. Sep 9 01:00:12.722247 systemd[1]: Started sshd@26-139.178.70.105:22-139.178.68.195:55422.service - OpenSSH per-connection server daemon (139.178.68.195:55422). Sep 9 01:00:12.723976 systemd-logind[1590]: Removed session 28. Sep 9 01:00:12.765224 sshd[4693]: Accepted publickey for core from 139.178.68.195 port 55422 ssh2: RSA SHA256:di4PNdyPvpfAB0WOT8AEsUYj4AxD4pouXbu16YJnSLk Sep 9 01:00:12.767930 sshd-session[4693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 01:00:12.773163 systemd-logind[1590]: New session 29 of user core. Sep 9 01:00:12.778842 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 9 01:00:12.919672 containerd[1638]: time="2025-09-09T01:00:12.919586570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvbkk,Uid:4832127e-68c0-49f3-bf71-83cc822f0e80,Namespace:kube-system,Attempt:0,}" Sep 9 01:00:12.932636 containerd[1638]: time="2025-09-09T01:00:12.932572525Z" level=info msg="connecting to shim f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf" address="unix:///run/containerd/s/cebac6bee0db507c0e051d7dfccaef0f6ae39b106df4eab5bfaeb4dd96f0e2b0" namespace=k8s.io protocol=ttrpc version=3 Sep 9 01:00:12.960581 systemd[1]: Started cri-containerd-f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf.scope - libcontainer container f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf. 
Sep 9 01:00:12.979199 containerd[1638]: time="2025-09-09T01:00:12.979165442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvbkk,Uid:4832127e-68c0-49f3-bf71-83cc822f0e80,Namespace:kube-system,Attempt:0,} returns sandbox id \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\"" Sep 9 01:00:12.981845 containerd[1638]: time="2025-09-09T01:00:12.981818454Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 01:00:12.999907 containerd[1638]: time="2025-09-09T01:00:12.999430661Z" level=info msg="Container 9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061: CDI devices from CRI Config.CDIDevices: []" Sep 9 01:00:13.003267 containerd[1638]: time="2025-09-09T01:00:13.003236398Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061\"" Sep 9 01:00:13.004057 containerd[1638]: time="2025-09-09T01:00:13.004031179Z" level=info msg="StartContainer for \"9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061\"" Sep 9 01:00:13.005034 containerd[1638]: time="2025-09-09T01:00:13.004990150Z" level=info msg="connecting to shim 9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061" address="unix:///run/containerd/s/cebac6bee0db507c0e051d7dfccaef0f6ae39b106df4eab5bfaeb4dd96f0e2b0" protocol=ttrpc version=3 Sep 9 01:00:13.024646 systemd[1]: Started cri-containerd-9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061.scope - libcontainer container 9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061. Sep 9 01:00:13.046941 containerd[1638]: time="2025-09-09T01:00:13.046577175Z" level=info msg="StartContainer for \"9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061\" returns successfully" Sep 9 01:00:13.061530 systemd[1]: cri-containerd-9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061.scope: Deactivated successfully. Sep 9 01:00:13.061930 systemd[1]: cri-containerd-9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061.scope: Consumed 15ms CPU time, 9.6M memory peak, 3.3M read from disk. 
Sep 9 01:00:13.062696 containerd[1638]: time="2025-09-09T01:00:13.062672368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061\" id:\"9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061\" pid:4767 exited_at:{seconds:1757379613 nanos:62321417}" Sep 9 01:00:13.063464 containerd[1638]: time="2025-09-09T01:00:13.062763217Z" level=info msg="received exit event container_id:\"9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061\" id:\"9cc8d45ede679bd94777b22d2016a31e69d906ec5a450f76ecf442e4bde1c061\" pid:4767 exited_at:{seconds:1757379613 nanos:62321417}" Sep 9 01:00:13.786877 containerd[1638]: time="2025-09-09T01:00:13.786771406Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 01:00:13.793532 containerd[1638]: time="2025-09-09T01:00:13.793092010Z" level=info msg="Container ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170: CDI devices from CRI Config.CDIDevices: []" Sep 9 01:00:13.802008 containerd[1638]: time="2025-09-09T01:00:13.801958478Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170\"" Sep 9 01:00:13.807160 containerd[1638]: time="2025-09-09T01:00:13.807123420Z" level=info msg="StartContainer for \"ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170\"" Sep 9 01:00:13.807767 containerd[1638]: time="2025-09-09T01:00:13.807748151Z" level=info msg="connecting to shim ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170" address="unix:///run/containerd/s/cebac6bee0db507c0e051d7dfccaef0f6ae39b106df4eab5bfaeb4dd96f0e2b0" protocol=ttrpc version=3 Sep 9 01:00:13.826600 systemd[1]: Started cri-containerd-ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170.scope - libcontainer container ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170. Sep 9 01:00:13.847538 containerd[1638]: time="2025-09-09T01:00:13.847503598Z" level=info msg="StartContainer for \"ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170\" returns successfully" Sep 9 01:00:13.858894 systemd[1]: cri-containerd-ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170.scope: Deactivated successfully. Sep 9 01:00:13.859091 systemd[1]: cri-containerd-ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170.scope: Consumed 13ms CPU time, 7.3M memory peak, 2.1M read from disk. 
Sep 9 01:00:13.859645 containerd[1638]: time="2025-09-09T01:00:13.859614755Z" level=info msg="received exit event container_id:\"ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170\" id:\"ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170\" pid:4810 exited_at:{seconds:1757379613 nanos:859133554}" Sep 9 01:00:13.859929 containerd[1638]: time="2025-09-09T01:00:13.859896577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170\" id:\"ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170\" pid:4810 exited_at:{seconds:1757379613 nanos:859133554}" Sep 9 01:00:13.872104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebe8a00dd38b60781c657a4cb4500fa994959e9a2124b0df92265282035cb170-rootfs.mount: Deactivated successfully. Sep 9 01:00:14.788254 containerd[1638]: time="2025-09-09T01:00:14.788209368Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 01:00:14.820204 containerd[1638]: time="2025-09-09T01:00:14.820177849Z" level=info msg="Container 90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6: CDI devices from CRI Config.CDIDevices: []" Sep 9 01:00:14.835294 containerd[1638]: time="2025-09-09T01:00:14.835260531Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6\"" Sep 9 01:00:14.835748 containerd[1638]: time="2025-09-09T01:00:14.835655207Z" level=info msg="StartContainer for \"90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6\"" Sep 9 01:00:14.837048 containerd[1638]: time="2025-09-09T01:00:14.837001044Z" level=info msg="connecting to shim 90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6" address="unix:///run/containerd/s/cebac6bee0db507c0e051d7dfccaef0f6ae39b106df4eab5bfaeb4dd96f0e2b0" protocol=ttrpc version=3 Sep 9 01:00:14.857541 systemd[1]: Started cri-containerd-90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6.scope - libcontainer container 90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6. Sep 9 01:00:14.887786 containerd[1638]: time="2025-09-09T01:00:14.887732002Z" level=info msg="StartContainer for \"90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6\" returns successfully" Sep 9 01:00:14.914079 systemd[1]: cri-containerd-90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6.scope: Deactivated successfully. Sep 9 01:00:14.914309 systemd[1]: cri-containerd-90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6.scope: Consumed 14ms CPU time, 5.9M memory peak, 1M read from disk. 
Sep 9 01:00:14.915414 containerd[1638]: time="2025-09-09T01:00:14.915359014Z" level=info msg="received exit event container_id:\"90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6\" id:\"90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6\" pid:4856 exited_at:{seconds:1757379614 nanos:915192442}" Sep 9 01:00:14.915414 containerd[1638]: time="2025-09-09T01:00:14.915388428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6\" id:\"90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6\" pid:4856 exited_at:{seconds:1757379614 nanos:915192442}" Sep 9 01:00:14.927745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d8e903059c23e15fac181760c2ac7ddd5e6092b150725ccf9efd71c4c308d6-rootfs.mount: Deactivated successfully. Sep 9 01:00:15.561305 kubelet[2947]: E0909 01:00:15.561228 2947 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 01:00:15.794981 containerd[1638]: time="2025-09-09T01:00:15.794906540Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 01:00:15.826190 containerd[1638]: time="2025-09-09T01:00:15.826123419Z" level=info msg="Container cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947: CDI devices from CRI Config.CDIDevices: []" Sep 9 01:00:15.831864 containerd[1638]: time="2025-09-09T01:00:15.831837336Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947\"" Sep 9 01:00:15.832112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290620882.mount: Deactivated successfully. Sep 9 01:00:15.833648 containerd[1638]: time="2025-09-09T01:00:15.833305726Z" level=info msg="StartContainer for \"cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947\"" Sep 9 01:00:15.834225 containerd[1638]: time="2025-09-09T01:00:15.834210737Z" level=info msg="connecting to shim cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947" address="unix:///run/containerd/s/cebac6bee0db507c0e051d7dfccaef0f6ae39b106df4eab5bfaeb4dd96f0e2b0" protocol=ttrpc version=3 Sep 9 01:00:15.851543 systemd[1]: Started cri-containerd-cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947.scope - libcontainer container cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947. Sep 9 01:00:15.869792 systemd[1]: cri-containerd-cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947.scope: Deactivated successfully. 
Sep 9 01:00:15.870745 containerd[1638]: time="2025-09-09T01:00:15.870724929Z" level=info msg="received exit event container_id:\"cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947\" id:\"cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947\" pid:4896 exited_at:{seconds:1757379615 nanos:870071384}" Sep 9 01:00:15.871101 containerd[1638]: time="2025-09-09T01:00:15.871063885Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947\" id:\"cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947\" pid:4896 exited_at:{seconds:1757379615 nanos:870071384}" Sep 9 01:00:15.877330 containerd[1638]: time="2025-09-09T01:00:15.877305439Z" level=info msg="StartContainer for \"cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947\" returns successfully" Sep 9 01:00:15.891257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdb453e48e49d4276531d4b5aa3141b7669d394db32c23a0b44422ca621ca947-rootfs.mount: Deactivated successfully. Sep 9 01:00:16.798367 containerd[1638]: time="2025-09-09T01:00:16.798340774Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 01:00:16.807804 containerd[1638]: time="2025-09-09T01:00:16.807776888Z" level=info msg="Container 0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c: CDI devices from CRI Config.CDIDevices: []" Sep 9 01:00:16.811928 containerd[1638]: time="2025-09-09T01:00:16.811885809Z" level=info msg="CreateContainer within sandbox \"f835f55d5f3b9a81edfd8f477099bcca47110dd266f5425760115e10920b7eaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\"" Sep 9 01:00:16.813468 containerd[1638]: time="2025-09-09T01:00:16.812520332Z" level=info msg="StartContainer for \"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\"" Sep 9 01:00:16.813468 containerd[1638]: time="2025-09-09T01:00:16.812949018Z" level=info msg="connecting to shim 0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c" address="unix:///run/containerd/s/cebac6bee0db507c0e051d7dfccaef0f6ae39b106df4eab5bfaeb4dd96f0e2b0" protocol=ttrpc version=3 Sep 9 01:00:16.828542 systemd[1]: Started cri-containerd-0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c.scope - libcontainer container 0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c. 
Sep 9 01:00:16.852070 containerd[1638]: time="2025-09-09T01:00:16.852042942Z" level=info msg="StartContainer for \"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\" returns successfully" Sep 9 01:00:16.947996 containerd[1638]: time="2025-09-09T01:00:16.947741994Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\" id:\"52ffd90c05293b2284b87930eb7a1bfbcbdfe78c97d344be4894187ae5ef94e4\" pid:4962 exited_at:{seconds:1757379616 nanos:947510242}" Sep 9 01:00:17.438132 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 01:00:17.584748 kubelet[2947]: I0909 01:00:17.584676 2947 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T01:00:17Z","lastTransitionTime":"2025-09-09T01:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 01:00:17.811790 kubelet[2947]: I0909 01:00:17.811667 2947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kvbkk" podStartSLOduration=5.81164708 podStartE2EDuration="5.81164708s" podCreationTimestamp="2025-09-09 01:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 01:00:17.809971838 +0000 UTC m=+132.471169059" watchObservedRunningTime="2025-09-09 01:00:17.81164708 +0000 UTC m=+132.472844293" Sep 9 01:00:19.263275 containerd[1638]: time="2025-09-09T01:00:19.263235609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\" id:\"bbe46d51c9feeb6238a9cc38021a980384c7d31b3c6ccb5cdb15bd7f4dc834a1\" pid:5121 exit_status:1 exited_at:{seconds:1757379619 nanos:259349304}" Sep 9 01:00:20.058375 systemd-networkd[1545]: lxc_health: Link UP Sep 9 01:00:20.061223 systemd-networkd[1545]: lxc_health: Gained carrier Sep 9 01:00:21.119573 systemd-networkd[1545]: lxc_health: Gained IPv6LL Sep 9 01:00:21.348755 containerd[1638]: time="2025-09-09T01:00:21.348723746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\" id:\"1a237276489502eb78ccfe08753d58c760a5e04d457b5d1047f392048f01c0ab\" pid:5487 exited_at:{seconds:1757379621 nanos:347934311}" Sep 9 01:00:21.351594 kubelet[2947]: E0909 01:00:21.351564 2947 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60570->127.0.0.1:34725: write tcp 127.0.0.1:60570->127.0.0.1:34725: write: broken pipe Sep 9 01:00:23.464989 containerd[1638]: time="2025-09-09T01:00:23.464938022Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\" id:\"568dd8cc29abaf1dad947e2e328e3c90d622d51f27e748f7024255b788188032\" pid:5529 exited_at:{seconds:1757379623 nanos:464614698}" Sep 9 01:00:23.466202 kubelet[2947]: E0909 01:00:23.466105 2947 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60582->127.0.0.1:34725: write tcp 127.0.0.1:60582->127.0.0.1:34725: write: connection reset by peer Sep 9 01:00:25.532428 containerd[1638]: time="2025-09-09T01:00:25.532402740Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"0225966f17823b59eb7ac4957e8839b1630e8195aa175e69434b1b68645ebf4c\" id:\"564d2db5530e966220c1f548bccb56cda2924fa3c320228d2cc87d42672f36b3\" pid:5554 exited_at:{seconds:1757379625 nanos:532170913}" Sep 9 01:00:25.537347 sshd[4696]: Connection closed by 139.178.68.195 port 55422 Sep 9 01:00:25.537933 sshd-session[4693]: pam_unix(sshd:session): session closed for user core Sep 9 01:00:25.546969 systemd-logind[1590]: Session 29 logged out. Waiting for processes to exit. Sep 9 01:00:25.547354 systemd[1]: sshd@26-139.178.70.105:22-139.178.68.195:55422.service: Deactivated successfully. Sep 9 01:00:25.548776 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 01:00:25.549999 systemd-logind[1590]: Removed session 29.