Jun 20 19:30:39.730253 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025 Jun 20 19:30:39.730270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:30:39.730277 kernel: Disabled fast string operations Jun 20 19:30:39.730281 kernel: BIOS-provided physical RAM map: Jun 20 19:30:39.730285 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jun 20 19:30:39.730289 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jun 20 19:30:39.730295 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jun 20 19:30:39.730300 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jun 20 19:30:39.730304 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jun 20 19:30:39.730308 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jun 20 19:30:39.730313 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jun 20 19:30:39.730317 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jun 20 19:30:39.730321 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jun 20 19:30:39.730326 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jun 20 19:30:39.730332 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jun 20 19:30:39.730337 kernel: NX (Execute Disable) protection: active Jun 20 19:30:39.730342 kernel: APIC: Static calls initialized Jun 20 19:30:39.730347 kernel: SMBIOS 2.7 present. Jun 20 19:30:39.730352 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jun 20 19:30:39.730357 kernel: DMI: Memory slots populated: 1/128 Jun 20 19:30:39.730363 kernel: vmware: hypercall mode: 0x00 Jun 20 19:30:39.730367 kernel: Hypervisor detected: VMware Jun 20 19:30:39.730372 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jun 20 19:30:39.730377 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jun 20 19:30:39.730382 kernel: vmware: using clock offset of 3283401509 ns Jun 20 19:30:39.730387 kernel: tsc: Detected 3408.000 MHz processor Jun 20 19:30:39.730393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 19:30:39.730398 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 19:30:39.730403 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jun 20 19:30:39.730408 kernel: total RAM covered: 3072M Jun 20 19:30:39.730414 kernel: Found optimal setting for mtrr clean up Jun 20 19:30:39.730420 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jun 20 19:30:39.730425 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jun 20 19:30:39.730430 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 19:30:39.730434 kernel: Using GB pages for direct mapping Jun 20 19:30:39.730439 kernel: ACPI: Early table checksum verification disabled Jun 20 19:30:39.730444 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jun 20 19:30:39.730449 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jun 20 19:30:39.730454 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jun 20 19:30:39.730460 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jun 20 19:30:39.730467 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 20 19:30:39.730472 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jun 20 19:30:39.730477 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jun 20 19:30:39.730482 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jun 20 19:30:39.730487 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jun 20 19:30:39.730494 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jun 20 19:30:39.730499 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jun 20 19:30:39.730504 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jun 20 19:30:39.730509 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jun 20 19:30:39.730514 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jun 20 19:30:39.730519 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 20 19:30:39.730524 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jun 20 19:30:39.730529 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jun 20 19:30:39.730534 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jun 20 19:30:39.730540 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jun 20 19:30:39.730546 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jun 20 19:30:39.730551 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jun 20 19:30:39.730556 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jun 20 19:30:39.730561 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jun 20 19:30:39.730566 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jun 20 19:30:39.730571 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jun 20 19:30:39.730576 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] Jun 20 19:30:39.730582 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] Jun 20 19:30:39.730588 kernel: Zone ranges: Jun 20 19:30:39.730593 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 19:30:39.730598 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jun 20 19:30:39.730603 kernel: Normal empty Jun 20 19:30:39.730608 kernel: Device empty Jun 20 19:30:39.730614 kernel: Movable zone start for each node Jun 20 19:30:39.730619 kernel: Early memory node ranges Jun 20 19:30:39.730624 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jun 20 19:30:39.730629 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jun 20 19:30:39.730634 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jun 20 19:30:39.730640 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jun 20 19:30:39.730645 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 19:30:39.730651 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jun 20 19:30:39.730656 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jun 20 19:30:39.730661 kernel: ACPI: PM-Timer IO Port: 0x1008 Jun 20 19:30:39.730666 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jun 20 19:30:39.730671 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jun 20 19:30:39.730676 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jun 20 19:30:39.730681 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jun 20 19:30:39.730687 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jun 20 19:30:39.730693 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jun 20 19:30:39.730698 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) Jun 20 19:30:39.730703 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jun 20 19:30:39.730708 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jun 20 19:30:39.730713 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jun 20 19:30:39.730718 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jun 20 19:30:39.730722 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jun 20 19:30:39.730728 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jun 20 19:30:39.730732 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jun 20 19:30:39.730739 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jun 20 19:30:39.730744 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jun 20 19:30:39.730749 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jun 20 19:30:39.730754 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jun 20 19:30:39.730758 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jun 20 19:30:39.730764 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jun 20 19:30:39.730768 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jun 20 19:30:39.730774 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jun 20 19:30:39.730779 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jun 20 19:30:39.730784 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jun 20 19:30:39.730790 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jun 20 19:30:39.730795 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jun 20 19:30:39.730800 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jun 20 19:30:39.730805 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jun 20 19:30:39.730810 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jun 20 19:30:39.730815 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jun 20 19:30:39.730820 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jun 20 19:30:39.730825 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jun 20 19:30:39.730830 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jun 20 19:30:39.730835 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jun 20 19:30:39.730841 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jun 20 19:30:39.730846 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jun 20 19:30:39.730851 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jun 20 19:30:39.730862 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jun 20 19:30:39.730876 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jun 20 19:30:39.730883 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jun 20 19:30:39.730894 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jun 20 19:30:39.730899 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jun 20 19:30:39.730904 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jun 20 19:30:39.730910 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jun 20 19:30:39.730916 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jun 20 19:30:39.730921 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jun 20 19:30:39.730927 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jun 20 19:30:39.730932 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jun 20 19:30:39.730938 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jun 20 19:30:39.730943 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) Jun 20 19:30:39.730948 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jun 20 19:30:39.730955 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jun 20 19:30:39.730960 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jun 20 19:30:39.730965 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jun 20 19:30:39.730971 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jun 20 19:30:39.730976 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jun 20 19:30:39.730982 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jun 20 19:30:39.730987 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jun 20 19:30:39.730992 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jun 20 19:30:39.730998 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jun 20 19:30:39.731003 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jun 20 19:30:39.731009 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jun 20 19:30:39.731015 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jun 20 19:30:39.731020 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jun 20 19:30:39.731025 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jun 20 19:30:39.731031 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jun 20 19:30:39.731036 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jun 20 19:30:39.731041 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jun 20 19:30:39.731047 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jun 20 19:30:39.731052 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jun 20 19:30:39.731058 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jun 20 19:30:39.731064 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jun 20 19:30:39.731069 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jun 20 19:30:39.731075 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jun 20 19:30:39.731080 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jun 20 19:30:39.731085 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jun 20 19:30:39.731091 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jun 20 19:30:39.731096 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jun 20 19:30:39.731101 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jun 20 19:30:39.731107 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jun 20 19:30:39.731112 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jun 20 19:30:39.731118 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jun 20 19:30:39.731124 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jun 20 19:30:39.731129 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jun 20 19:30:39.731134 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jun 20 19:30:39.731139 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jun 20 19:30:39.731145 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jun 20 19:30:39.731150 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jun 20 19:30:39.731156 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jun 20 19:30:39.731161 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jun 20 19:30:39.731167 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jun 20 19:30:39.731172 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jun 20 19:30:39.731178 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jun 20 19:30:39.731183 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jun 20 19:30:39.731188 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jun 20 19:30:39.731194 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jun 20 19:30:39.731199 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jun 20 19:30:39.731204 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jun 20 19:30:39.731210 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jun 20 19:30:39.731215 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jun 20 19:30:39.731222 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jun 20 19:30:39.731227 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jun 20 19:30:39.731232 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jun 20 19:30:39.731238 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jun 20 19:30:39.731243 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jun 20 19:30:39.731248 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jun 20 19:30:39.731254 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jun 20 19:30:39.731259 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jun 20 19:30:39.731264 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jun 20 19:30:39.731270 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jun 20 19:30:39.731276 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jun 20 19:30:39.731281 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jun 20 19:30:39.731287 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jun 20 19:30:39.731292 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jun 20 19:30:39.731297 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jun 20 19:30:39.731303 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jun 20 19:30:39.731308 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jun 20 19:30:39.731313 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jun 20 19:30:39.731319 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jun 20 19:30:39.731324 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jun 20 19:30:39.731330 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jun 20 19:30:39.731336 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jun 20 19:30:39.731341 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jun 20 19:30:39.731346 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jun 20 19:30:39.731352 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jun 20 19:30:39.731357 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jun 20 19:30:39.731362 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jun 20 19:30:39.731368 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jun 20 19:30:39.731373 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jun 20 19:30:39.731378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jun 20 19:30:39.731385 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 19:30:39.731391 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jun 20 19:30:39.731396 kernel: TSC deadline timer available Jun 20 19:30:39.731401 kernel: CPU topo: Max. logical packages: 128 Jun 20 19:30:39.731407 kernel: CPU topo: Max. logical dies: 128 Jun 20 19:30:39.731412 kernel: CPU topo: Max. 
dies per package: 1 Jun 20 19:30:39.731417 kernel: CPU topo: Max. threads per core: 1 Jun 20 19:30:39.731423 kernel: CPU topo: Num. cores per package: 1 Jun 20 19:30:39.731428 kernel: CPU topo: Num. threads per package: 1 Jun 20 19:30:39.731435 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs Jun 20 19:30:39.731440 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jun 20 19:30:39.731446 kernel: Booting paravirtualized kernel on VMware hypervisor Jun 20 19:30:39.731452 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 19:30:39.731457 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jun 20 19:30:39.731463 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jun 20 19:30:39.731468 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jun 20 19:30:39.731474 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jun 20 19:30:39.731479 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jun 20 19:30:39.731485 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jun 20 19:30:39.731491 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jun 20 19:30:39.731496 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jun 20 19:30:39.731501 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jun 20 19:30:39.731506 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jun 20 19:30:39.731512 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jun 20 19:30:39.731517 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jun 20 19:30:39.731522 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jun 20 19:30:39.731527 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jun 20 19:30:39.731534 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jun 20 19:30:39.731539 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jun 20 19:30:39.731545 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jun 20 19:30:39.731550 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jun 20 19:30:39.731555 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jun 20 19:30:39.731561 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:30:39.731567 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 19:30:39.731573 kernel: random: crng init done Jun 20 19:30:39.731579 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jun 20 19:30:39.731585 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jun 20 19:30:39.731590 kernel: printk: log_buf_len min size: 262144 bytes Jun 20 19:30:39.731595 kernel: printk: log_buf_len: 1048576 bytes Jun 20 19:30:39.731601 kernel: printk: early log buf free: 245576(93%) Jun 20 19:30:39.731606 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 19:30:39.731612 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 19:30:39.731617 kernel: Fallback order for Node 0: 0 Jun 20 19:30:39.731623 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 Jun 20 19:30:39.731629 kernel: Policy zone: DMA32 Jun 20 19:30:39.731635 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 19:30:39.731640 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jun 20 19:30:39.731646 kernel: ftrace: allocating 40093 entries in 157 pages Jun 20 19:30:39.731651 kernel: ftrace: allocated 157 pages with 5 groups Jun 20 19:30:39.731657 kernel: Dynamic Preempt: voluntary Jun 20 19:30:39.731662 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 19:30:39.731668 kernel: rcu: RCU event tracing is enabled. Jun 20 19:30:39.731674 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jun 20 19:30:39.731680 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 19:30:39.731686 kernel: Rude variant of Tasks RCU enabled. Jun 20 19:30:39.731691 kernel: Tracing variant of Tasks RCU enabled. Jun 20 19:30:39.731697 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 19:30:39.731702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jun 20 19:30:39.731707 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:30:39.731713 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:30:39.731719 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jun 20 19:30:39.731724 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jun 20 19:30:39.731731 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jun 20 19:30:39.731736 kernel: Console: colour VGA+ 80x25 Jun 20 19:30:39.731741 kernel: printk: legacy console [tty0] enabled Jun 20 19:30:39.731747 kernel: printk: legacy console [ttyS0] enabled Jun 20 19:30:39.731752 kernel: ACPI: Core revision 20240827 Jun 20 19:30:39.731758 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jun 20 19:30:39.731763 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 19:30:39.731769 kernel: x2apic enabled Jun 20 19:30:39.731774 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 19:30:39.731780 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 19:30:39.731786 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 20 19:30:39.731792 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jun 20 19:30:39.731797 kernel: Disabled fast string operations Jun 20 19:30:39.731803 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 20 19:30:39.731808 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jun 20 19:30:39.731814 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 19:30:39.731819 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jun 20 19:30:39.731825 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jun 20 19:30:39.731831 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jun 20 19:30:39.731837 kernel: RETBleed: Mitigation: Enhanced IBRS Jun 20 19:30:39.731842 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jun 20 19:30:39.731848 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 19:30:39.731862 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 20 19:30:39.731868 kernel: SRBDS: Unknown: Dependent on hypervisor status Jun 20 19:30:39.731882 kernel: GDS: Unknown: Dependent on hypervisor status Jun 20 19:30:39.731888 kernel: ITS: Mitigation: Aligned branch/return thunks Jun 20 19:30:39.731893 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 19:30:39.731901 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 19:30:39.731907 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 19:30:39.731912 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 19:30:39.731918 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 20 19:30:39.731923 kernel: Freeing SMP alternatives memory: 32K Jun 20 19:30:39.731929 kernel: pid_max: default: 131072 minimum: 1024 Jun 20 19:30:39.731935 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 20 19:30:39.731940 kernel: landlock: Up and running. Jun 20 19:30:39.731946 kernel: SELinux: Initializing. Jun 20 19:30:39.731952 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:30:39.731958 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 19:30:39.731963 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jun 20 19:30:39.731969 kernel: Performance Events: Skylake events, core PMU driver. Jun 20 19:30:39.731974 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jun 20 19:30:39.731980 kernel: core: CPUID marked event: 'instructions' unavailable Jun 20 19:30:39.731986 kernel: core: CPUID marked event: 'bus cycles' unavailable Jun 20 19:30:39.731991 kernel: core: CPUID marked event: 'cache references' unavailable Jun 20 19:30:39.731996 kernel: core: CPUID marked event: 'cache misses' unavailable Jun 20 19:30:39.732003 kernel: core: CPUID marked event: 'branch instructions' unavailable Jun 20 19:30:39.732008 kernel: core: CPUID marked event: 'branch misses' unavailable Jun 20 19:30:39.732014 kernel: ... version: 1 Jun 20 19:30:39.732019 kernel: ... bit width: 48 Jun 20 19:30:39.732024 kernel: ... generic registers: 4 Jun 20 19:30:39.732030 kernel: ... value mask: 0000ffffffffffff Jun 20 19:30:39.732035 kernel: ... max period: 000000007fffffff Jun 20 19:30:39.732041 kernel: ... fixed-purpose events: 0 Jun 20 19:30:39.732046 kernel: ... 
event mask: 000000000000000f Jun 20 19:30:39.732053 kernel: signal: max sigframe size: 1776 Jun 20 19:30:39.732058 kernel: rcu: Hierarchical SRCU implementation. Jun 20 19:30:39.732064 kernel: rcu: Max phase no-delay instances is 400. Jun 20 19:30:39.732070 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level Jun 20 19:30:39.732075 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 20 19:30:39.732081 kernel: smp: Bringing up secondary CPUs ... Jun 20 19:30:39.732086 kernel: smpboot: x86: Booting SMP configuration: Jun 20 19:30:39.732092 kernel: .... node #0, CPUs: #1 Jun 20 19:30:39.732097 kernel: Disabled fast string operations Jun 20 19:30:39.732103 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 19:30:39.732109 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jun 20 19:30:39.732115 kernel: Memory: 1924252K/2096628K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 160996K reserved, 0K cma-reserved) Jun 20 19:30:39.732121 kernel: devtmpfs: initialized Jun 20 19:30:39.732126 kernel: x86/mm: Memory block size: 128MB Jun 20 19:30:39.732132 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jun 20 19:30:39.732138 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 19:30:39.732143 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jun 20 19:30:39.732149 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 19:30:39.732155 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 19:30:39.732161 kernel: audit: initializing netlink subsys (disabled) Jun 20 19:30:39.732166 kernel: audit: type=2000 audit(1750447836.274:1): state=initialized audit_enabled=0 res=1 Jun 20 19:30:39.732172 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 19:30:39.732177 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 19:30:39.732183 kernel: cpuidle: using governor menu Jun 20 19:30:39.732188 kernel: Simple Boot Flag at 0x36 set to 0x80 Jun 20 19:30:39.732194 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 19:30:39.732199 kernel: dca service started, version 1.12.1 Jun 20 19:30:39.732206 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] Jun 20 19:30:39.732218 kernel: PCI: Using configuration type 1 for base access Jun 20 19:30:39.732224 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 20 19:30:39.732230 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 19:30:39.732236 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 19:30:39.732242 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 19:30:39.732248 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 19:30:39.732254 kernel: ACPI: Added _OSI(Module Device) Jun 20 19:30:39.732259 kernel: ACPI: Added _OSI(Processor Device) Jun 20 19:30:39.732266 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 19:30:39.732272 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 19:30:39.732278 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jun 20 19:30:39.732283 kernel: ACPI: Interpreter enabled Jun 20 19:30:39.732289 kernel: ACPI: PM: (supports S0 S1 S5) Jun 20 19:30:39.732295 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 19:30:39.732304 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 19:30:39.732310 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 19:30:39.732316 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jun 20 19:30:39.732323 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jun 20 19:30:39.732400 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 19:30:39.732454 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jun 20 19:30:39.732503 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jun 20 19:30:39.732512 kernel: PCI host bridge to bus 0000:00 Jun 20 19:30:39.732564 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 20 19:30:39.732612 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jun 20 19:30:39.732656 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 20 19:30:39.732700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 19:30:39.732743 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jun 20 19:30:39.732786 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jun 20 19:30:39.732846 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint Jun 20 19:30:39.732922 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge Jun 20 19:30:39.732977 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:39.733033 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Jun 20 19:30:39.733089 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint Jun 20 19:30:39.733143 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] Jun 20 19:30:39.733193 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jun 20 19:30:39.733243 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jun 20 19:30:39.733292 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jun 20 19:30:39.733355 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk Jun 20 19:30:39.733412 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 20 19:30:39.733464 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jun 20 19:30:39.733517 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jun 20 19:30:39.733572 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 
0x088000 conventional PCI endpoint Jun 20 19:30:39.733623 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] Jun 20 19:30:39.733673 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] Jun 20 19:30:39.733729 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint Jun 20 19:30:39.733780 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] Jun 20 19:30:39.733831 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] Jun 20 19:30:39.733893 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] Jun 20 19:30:39.733943 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] Jun 20 19:30:39.733999 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 19:30:39.734057 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge Jun 20 19:30:39.734107 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jun 20 19:30:39.734157 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 20 19:30:39.734206 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 20 19:30:39.734259 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:30:39.734313 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.734364 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 20 19:30:39.734414 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 20 19:30:39.734465 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 20 19:30:39.734515 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.734570 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.734630 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 20 19:30:39.734682 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 20 19:30:39.734732 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 20 19:30:39.734783 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jun 20 19:30:39.734833 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.734895 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.734960 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 20 19:30:39.735018 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 20 19:30:39.735069 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 20 19:30:39.735120 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:30:39.735171 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.735228 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.735280 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 20 19:30:39.735333 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 20 19:30:39.735384 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:30:39.735442 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.735499 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.735551 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 20 19:30:39.735601 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 20 19:30:39.735652 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:30:39.735704 
kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.735759 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.735809 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 20 19:30:39.735868 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 20 19:30:39.735930 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:30:39.735982 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.736037 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.736092 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 20 19:30:39.736143 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 20 19:30:39.736193 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:30:39.736243 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.736300 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.736351 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 20 19:30:39.736402 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 20 19:30:39.736462 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:30:39.736514 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.736568 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.736619 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 20 19:30:39.736669 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 20 19:30:39.736719 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 20 19:30:39.736769 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.736824 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.736898 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 20 19:30:39.736951 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 20 19:30:39.737002 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 20 19:30:39.737056 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 20 19:30:39.737107 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.737161 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.737215 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 20 19:30:39.737266 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 20 19:30:39.737316 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 20 19:30:39.737367 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:30:39.737417 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.737474 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.737525 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 20 19:30:39.737578 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 20 19:30:39.737629 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:30:39.737679 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.737733 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.737784 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 20 19:30:39.737835 kernel: pci 
0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 20 19:30:39.737909 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:30:39.737961 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.738019 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.738070 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 20 19:30:39.738122 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 20 19:30:39.738172 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:30:39.738222 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.738277 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.738329 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 20 19:30:39.738382 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 20 19:30:39.738432 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:30:39.738483 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.738538 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.738590 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 20 19:30:39.738641 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 20 19:30:39.738691 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:30:39.738745 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.738803 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.738870 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 20 19:30:39.738929 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 20 19:30:39.738980 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 20 19:30:39.739032 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:30:39.739082 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.739141 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.739194 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 20 19:30:39.739245 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 20 19:30:39.739294 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 20 19:30:39.739351 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:30:39.739401 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.739456 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.739507 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 20 19:30:39.739557 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 20 19:30:39.739607 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 20 19:30:39.739658 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 19:30:39.739710 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.739774 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.739825 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 20 19:30:39.739886 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 20 19:30:39.739937 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 
19:30:39.739987 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.740043 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.740096 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 20 19:30:39.740147 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 20 19:30:39.740197 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:30:39.740247 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.740301 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.740352 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 20 19:30:39.740403 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 20 19:30:39.740455 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:30:39.740505 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.740559 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.740613 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 20 19:30:39.740664 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 20 19:30:39.740714 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 20 19:30:39.740764 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.740822 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.740895 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 20 19:30:39.740948 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 20 19:30:39.740999 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:30:39.741050 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.741116 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.741168 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 20 19:30:39.741222 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 20 19:30:39.741274 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 20 19:30:39.741324 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:30:39.741374 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.741429 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.741481 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 20 19:30:39.741531 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 20 19:30:39.741762 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jun 20 19:30:39.741813 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:30:39.741873 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.741930 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.741982 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 20 19:30:39.742032 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 20 19:30:39.742082 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:30:39.742135 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.742190 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.742241 kernel: pci 0000:00:18.3: PCI bridge to [bus 
1e] Jun 20 19:30:39.742292 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 20 19:30:39.742342 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:30:39.742392 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.742447 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.742499 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 20 19:30:39.742551 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 20 19:30:39.742602 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 20 19:30:39.742652 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.742709 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.742761 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 20 19:30:39.742811 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 20 19:30:39.742881 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:30:39.742938 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.743000 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.743053 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 20 19:30:39.743104 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 20 19:30:39.743155 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:30:39.743204 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.743258 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port Jun 20 19:30:39.743542 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 20 19:30:39.743596 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 20 19:30:39.743649 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:30:39.743700 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.743761 kernel: pci_bus 0000:01: extended config space not accessible Jun 20 19:30:39.743813 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:39.743877 kernel: pci_bus 0000:02: extended config space not accessible Jun 20 19:30:39.743889 kernel: acpiphp: Slot [32] registered Jun 20 19:30:39.743903 kernel: acpiphp: Slot [33] registered Jun 20 19:30:39.743918 kernel: acpiphp: Slot [34] registered Jun 20 19:30:39.743924 kernel: acpiphp: Slot [35] registered Jun 20 19:30:39.743930 kernel: acpiphp: Slot [36] registered Jun 20 19:30:39.743936 kernel: acpiphp: Slot [37] registered Jun 20 19:30:39.743942 kernel: acpiphp: Slot [38] registered Jun 20 19:30:39.743947 kernel: acpiphp: Slot [39] registered Jun 20 19:30:39.743953 kernel: acpiphp: Slot [40] registered Jun 20 19:30:39.743959 kernel: acpiphp: Slot [41] registered Jun 20 19:30:39.743967 kernel: acpiphp: Slot [42] registered Jun 20 19:30:39.743973 kernel: acpiphp: Slot [43] registered Jun 20 19:30:39.743978 kernel: acpiphp: Slot [44] registered Jun 20 19:30:39.743984 kernel: acpiphp: Slot [45] registered Jun 20 19:30:39.743990 kernel: acpiphp: Slot [46] registered Jun 20 19:30:39.743996 kernel: acpiphp: Slot [47] registered Jun 20 19:30:39.744002 kernel: acpiphp: Slot [48] registered Jun 20 19:30:39.744008 kernel: acpiphp: Slot [49] registered Jun 20 19:30:39.744014 kernel: acpiphp: Slot [50] registered Jun 20 19:30:39.744021 kernel: acpiphp: Slot [51] registered Jun 20 
19:30:39.744027 kernel: acpiphp: Slot [52] registered Jun 20 19:30:39.744032 kernel: acpiphp: Slot [53] registered Jun 20 19:30:39.745866 kernel: acpiphp: Slot [54] registered Jun 20 19:30:39.745876 kernel: acpiphp: Slot [55] registered Jun 20 19:30:39.745882 kernel: acpiphp: Slot [56] registered Jun 20 19:30:39.745888 kernel: acpiphp: Slot [57] registered Jun 20 19:30:39.745894 kernel: acpiphp: Slot [58] registered Jun 20 19:30:39.745900 kernel: acpiphp: Slot [59] registered Jun 20 19:30:39.745905 kernel: acpiphp: Slot [60] registered Jun 20 19:30:39.745914 kernel: acpiphp: Slot [61] registered Jun 20 19:30:39.745919 kernel: acpiphp: Slot [62] registered Jun 20 19:30:39.745925 kernel: acpiphp: Slot [63] registered Jun 20 19:30:39.745989 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jun 20 19:30:39.746044 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jun 20 19:30:39.746097 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jun 20 19:30:39.746147 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jun 20 19:30:39.746197 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jun 20 19:30:39.746250 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jun 20 19:30:39.746312 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint Jun 20 19:30:39.746366 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] Jun 20 19:30:39.746417 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] Jun 20 19:30:39.746468 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jun 20 19:30:39.746519 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jun 20 19:30:39.746571 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Jun 20 19:30:39.746625 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 20 19:30:39.746678 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 20 19:30:39.746728 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 20 19:30:39.746780 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 20 19:30:39.746833 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 20 19:30:39.746903 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 20 19:30:39.746957 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 20 19:30:39.747012 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 20 19:30:39.747070 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint Jun 20 19:30:39.747123 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] Jun 20 19:30:39.747174 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] Jun 20 19:30:39.747226 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] Jun 20 19:30:39.747277 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] Jun 20 19:30:39.747329 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] Jun 20 19:30:39.747382 kernel: pci 0000:0b:00.0: supports D1 D2 Jun 20 19:30:39.747433 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 20 19:30:39.747484 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jun 20 19:30:39.747536 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 20 19:30:39.747588 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 20 19:30:39.747666 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 20 19:30:39.747730 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 20 19:30:39.747783 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 20 19:30:39.747844 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 20 19:30:39.747917 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 20 19:30:39.747999 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 20 19:30:39.748060 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 20 19:30:39.748112 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 20 19:30:39.748165 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 20 19:30:39.748217 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 20 19:30:39.748268 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 20 19:30:39.748323 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 20 19:30:39.748375 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 20 19:30:39.748426 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 20 19:30:39.748477 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 20 19:30:39.748528 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 20 19:30:39.748580 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 20 19:30:39.748632 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 20 19:30:39.748685 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 20 19:30:39.748737 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 20 19:30:39.748794 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 20 19:30:39.748845 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 20 19:30:39.749603 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jun 20 19:30:39.749613 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jun 20 19:30:39.749620 kernel: ACPI: PCI: Interrupt link LNKB disabled Jun 20 19:30:39.749628 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 19:30:39.749634 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jun 20 19:30:39.749645 kernel: iommu: Default domain type: Translated Jun 20 19:30:39.749651 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 19:30:39.749657 kernel: PCI: Using ACPI for IRQ routing Jun 20 19:30:39.749663 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 19:30:39.749669 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jun 20 19:30:39.749675 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jun 20 19:30:39.749747 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jun 20 19:30:39.749805 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jun 20 19:30:39.749891 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 19:30:39.749902 kernel: vgaarb: loaded Jun 20 19:30:39.749908 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jun 20 19:30:39.749915 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jun 20 19:30:39.749920 kernel: clocksource: Switched to clocksource tsc-early Jun 20 19:30:39.749926 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 19:30:39.749933 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 19:30:39.749938 kernel: pnp: PnP ACPI init Jun 20 19:30:39.750034 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jun 20 19:30:39.750083 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved Jun 20 19:30:39.750129 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jun 20 19:30:39.750179 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jun 20 19:30:39.750228 kernel: pnp 00:06: [dma 2] Jun 20 19:30:39.750280 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jun 20 19:30:39.750329 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jun 20 19:30:39.750375 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jun 20 19:30:39.750384 kernel: pnp: PnP ACPI: found 8 devices Jun 20 19:30:39.750390 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 19:30:39.750396 kernel: NET: Registered PF_INET protocol family Jun 20 19:30:39.750402 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 20 19:30:39.750408 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 19:30:39.750414 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 19:30:39.750420 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 19:30:39.750428 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 19:30:39.750433 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 19:30:39.750440 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:30:39.750445 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 19:30:39.750451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 19:30:39.750457 kernel: NET: Registered PF_XDP protocol family Jun 20 19:30:39.750510 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jun 20 19:30:39.750563 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jun 20 19:30:39.750618 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 20 19:30:39.750670 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 20 19:30:39.750722 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 20 19:30:39.750773 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jun 20 19:30:39.750825 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jun 20 19:30:39.751931 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jun 20 19:30:39.752008 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jun 20 19:30:39.752073 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jun 20 19:30:39.752132 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jun 20 19:30:39.752187 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jun 20 19:30:39.752240 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jun 20 19:30:39.752293 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jun 20 19:30:39.752352 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jun 20 19:30:39.752405 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jun 20 
19:30:39.752457 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jun 20 19:30:39.752511 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jun 20 19:30:39.752566 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jun 20 19:30:39.752619 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jun 20 19:30:39.752672 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jun 20 19:30:39.752724 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jun 20 19:30:39.752778 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jun 20 19:30:39.752830 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned Jun 20 19:30:39.754580 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned Jun 20 19:30:39.754642 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.754696 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.754749 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.754801 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.754853 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.754916 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.754967 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755018 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755072 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755123 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755174 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755225 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755276 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755327 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755377 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755428 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755481 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755532 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755583 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755632 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755683 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755734 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755785 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755835 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.755900 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.755952 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756004 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756056 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756106 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756157 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756208 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756258 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756316 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756368 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756419 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756481 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756535 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756586 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756636 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756689 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756740 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756790 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756841 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.756907 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.756960 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757010 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757061 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757112 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757165 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757217 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757267 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757319 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757369 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757420 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757470 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757520 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757571 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757621 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757675 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.757725 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.757776 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space Jun 20 19:30:39.757827 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.759219 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.759281 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.759645 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.759701 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.759754 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.759809 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.759872 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.759927 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.759978 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.760029 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.760080 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.760132 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.760186 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.760237 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.760302 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.760359 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.760431 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.760493 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.760559 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.760624 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.760696 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space Jun 20 19:30:39.760766 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign Jun 20 19:30:39.760820 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jun 20 19:30:39.762899 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jun 20 19:30:39.762962 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jun 20 19:30:39.763015 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jun 20 19:30:39.763067 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:30:39.763123 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned Jun 20 19:30:39.763176 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jun 20 19:30:39.763230 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jun 20 19:30:39.763281 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jun 20 19:30:39.763338 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jun 20 19:30:39.763407 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jun 20 19:30:39.763456 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jun 20 19:30:39.763506 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jun 20 19:30:39.763556 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] Jun 20 19:30:39.763607 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jun 20 19:30:39.763657 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jun 20 19:30:39.763707 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jun 20 19:30:39.763759 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:30:39.763810 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jun 20 19:30:39.765905 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jun 20 19:30:39.765974 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:30:39.766037 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jun 20 19:30:39.766089 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jun 20 19:30:39.766138 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:30:39.766191 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jun 20 19:30:39.766241 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jun 20 19:30:39.766290 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:30:39.766361 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jun 20 19:30:39.766412 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jun 20 19:30:39.766472 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:30:39.766523 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jun 20 19:30:39.766573 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jun 20 19:30:39.766627 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:30:39.766682 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned Jun 20 19:30:39.766734 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jun 20 19:30:39.766783 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jun 20 19:30:39.766832 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jun 20 19:30:39.766891 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jun 20 19:30:39.766958 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jun 20 19:30:39.767010 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jun 20 19:30:39.767062 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jun 20 19:30:39.767112 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jun 20 19:30:39.767163 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jun 20 19:30:39.767212 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jun 20 19:30:39.767261 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jun 20 19:30:39.767310 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:30:39.767358 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jun 20 19:30:39.767408 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jun 20 19:30:39.767458 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:30:39.767509 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jun 20 19:30:39.767559 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jun 20 19:30:39.767608 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:30:39.767657 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jun 20 19:30:39.767706 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jun 20 19:30:39.767755 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:30:39.767805 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jun 20 19:30:39.767861 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jun 20 19:30:39.767923 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:30:39.767992 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jun 20 19:30:39.768043 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jun 20 19:30:39.768093 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:30:39.768145 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jun 20 19:30:39.768195 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jun 20 19:30:39.768260 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jun 20 19:30:39.768311 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:30:39.768362 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jun 20 19:30:39.768411 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jun 20 19:30:39.768460 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jun 20 19:30:39.768508 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:30:39.768558 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jun 20 19:30:39.768622 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jun 20 19:30:39.768670 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jun 20 19:30:39.768718 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 19:30:39.768784 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jun 20 19:30:39.768851 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jun 20 19:30:39.768908 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 19:30:39.768958 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jun 20 19:30:39.769006 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jun 20 19:30:39.769073 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:30:39.769122 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jun 20 19:30:39.769172 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jun 20 19:30:39.769220 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:30:39.769287 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jun 20 19:30:39.769374 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jun 20 19:30:39.769423 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jun 20 19:30:39.769472 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jun 20 19:30:39.769520 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jun 20 19:30:39.769569 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:30:39.769621 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jun 20 19:30:39.769670 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jun 20 19:30:39.769718 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jun 20 19:30:39.769766 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:30:39.769815 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jun 20 19:30:39.769904 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jun 20 19:30:39.769969 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
Jun 20 19:30:39.770018 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:30:39.770067 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jun 20 19:30:39.770115 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jun 20 19:30:39.770167 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:30:39.770216 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jun 20 19:30:39.770264 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jun 20 19:30:39.770329 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:30:39.770395 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jun 20 19:30:39.770444 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jun 20 19:30:39.770492 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jun 20 19:30:39.770544 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jun 20 19:30:39.770592 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jun 20 19:30:39.770640 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:30:39.770688 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jun 20 19:30:39.770737 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jun 20 19:30:39.770785 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:30:39.770833 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jun 20 19:30:39.770894 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jun 20 19:30:39.770943 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:30:39.770994 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jun 20 19:30:39.771037 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 20 19:30:39.771080 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 20 19:30:39.771122 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jun 20 19:30:39.771182 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jun 20 19:30:39.771230 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jun 20 19:30:39.771278 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jun 20 19:30:39.771323 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jun 20 19:30:39.771368 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jun 20 19:30:39.771432 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jun 20 19:30:39.771496 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jun 20 19:30:39.771542 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jun 20 19:30:39.771587 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jun 20 19:30:39.771639 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jun 20 19:30:39.771685 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jun 20 19:30:39.771730 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jun 20 19:30:39.771779 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jun 20 19:30:39.771824 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jun 20 19:30:39.771882 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jun 20 19:30:39.771932 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jun 20 19:30:39.771981 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jun 
20 19:30:39.772041 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jun 20 19:30:39.772097 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jun 20 19:30:39.772144 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jun 20 19:30:39.772194 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jun 20 19:30:39.772258 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jun 20 19:30:39.772315 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jun 20 19:30:39.772378 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jun 20 19:30:39.772428 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jun 20 19:30:39.772473 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jun 20 19:30:39.772521 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jun 20 19:30:39.772567 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jun 20 19:30:39.772620 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jun 20 19:30:39.772666 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jun 20 19:30:39.772711 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jun 20 19:30:39.772759 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jun 20 19:30:39.772812 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jun 20 19:30:39.772880 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jun 20 19:30:39.772936 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jun 20 19:30:39.772982 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jun 20 19:30:39.773046 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jun 20 19:30:39.773097 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jun 20 19:30:39.773143 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jun 20 19:30:39.773195 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jun 20 19:30:39.773244 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jun 20 19:30:39.773294 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jun 20 19:30:39.773347 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jun 20 19:30:39.773397 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jun 20 19:30:39.773443 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jun 20 19:30:39.773493 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jun 20 19:30:39.773541 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jun 20 19:30:39.773593 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jun 20 19:30:39.773640 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jun 20 19:30:39.773685 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jun 20 19:30:39.773737 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jun 20 19:30:39.773784 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jun 20 19:30:39.773830 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jun 20 19:30:39.773904 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jun 20 19:30:39.773952 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jun 20 19:30:39.773998 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jun 20 19:30:39.774047 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jun 20 19:30:39.774095 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jun 20 19:30:39.774144 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jun 20 19:30:39.774191 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jun 20 19:30:39.774246 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jun 20 19:30:39.774293 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jun 20 19:30:39.774343 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jun 20 19:30:39.774390 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jun 20 19:30:39.774440 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jun 20 19:30:39.774486 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jun 20 19:30:39.774538 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jun 20 19:30:39.774585 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jun 20 19:30:39.774631 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jun 20 19:30:39.774682 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jun 20 19:30:39.774728 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jun 20 19:30:39.774774 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jun 20 19:30:39.774826 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jun 20 19:30:39.774889 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jun 20 19:30:39.774946 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jun 20 19:30:39.774993 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jun 20 19:30:39.775043 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jun 20 19:30:39.775090 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jun 20 19:30:39.775140 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jun 20 19:30:39.775189 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jun 20 19:30:39.775240 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jun 20 19:30:39.775287 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jun 20 19:30:39.775347 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jun 20 19:30:39.775404 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jun 20 19:30:39.775467 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 20 19:30:39.775479 kernel: PCI: CLS 32 bytes, default 64 Jun 20 19:30:39.775485 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 20 19:30:39.775491 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jun 20 19:30:39.775498 kernel: clocksource: Switched to clocksource tsc Jun 20 19:30:39.775503 kernel: Initialise system trusted keyrings Jun 20 19:30:39.775510 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 19:30:39.775516 kernel: Key type asymmetric registered Jun 20 19:30:39.775522 kernel: Asymmetric key parser 'x509' registered Jun 20 19:30:39.775528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 20 19:30:39.775535 kernel: io scheduler mq-deadline registered Jun 20 19:30:39.775541 kernel: io scheduler kyber registered Jun 20 19:30:39.775547 kernel: io scheduler bfq 
registered Jun 20 19:30:39.775599 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jun 20 19:30:39.775651 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.775703 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jun 20 19:30:39.775754 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.775807 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jun 20 19:30:39.775867 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.775921 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jun 20 19:30:39.775973 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776025 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jun 20 19:30:39.776076 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776127 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jun 20 19:30:39.776179 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776233 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jun 20 19:30:39.776285 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776337 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jun 20 19:30:39.776389 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776440 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jun 20 19:30:39.776498 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776552 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jun 20 19:30:39.776605 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776657 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jun 20 19:30:39.776708 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776759 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jun 20 19:30:39.776810 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776881 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jun 20 19:30:39.776936 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.776990 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jun 20 19:30:39.777042 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
Jun 20 19:30:39.777094 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jun 20 19:30:39.777145 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777195 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jun 20 19:30:39.777246 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777298 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jun 20 19:30:39.777349 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777403 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jun 20 19:30:39.777454 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777505 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jun 20 19:30:39.777555 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777607 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jun 20 19:30:39.777659 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777710 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jun 20 19:30:39.777764 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777817 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jun 20 19:30:39.777887 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.777941 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jun 20 19:30:39.777992 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778044 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jun 20 19:30:39.778095 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778148 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jun 20 19:30:39.778202 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778253 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jun 20 19:30:39.778306 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778357 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jun 20 19:30:39.778407 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778459 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jun 20 19:30:39.778511 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 
19:30:39.778565 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jun 20 19:30:39.778616 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778671 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jun 20 19:30:39.778729 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778790 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jun 20 19:30:39.778841 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778903 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jun 20 19:30:39.778954 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jun 20 19:30:39.778967 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 19:30:39.778974 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 19:30:39.778984 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 19:30:39.778990 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jun 20 19:30:39.778997 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 19:30:39.779003 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 19:30:39.779062 kernel: rtc_cmos 00:01: registered as rtc0 Jun 20 19:30:39.779130 kernel: rtc_cmos 00:01: setting system clock to 2025-06-20T19:30:39 UTC (1750447839) Jun 20 19:30:39.779179 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jun 20 19:30:39.779188 kernel: intel_pstate: CPU model not supported Jun 20 19:30:39.779195 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 19:30:39.779201 kernel: NET: Registered PF_INET6 protocol family Jun 20 19:30:39.779208 kernel: Segment Routing with IPv6 Jun 20 19:30:39.779214 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 19:30:39.779221 kernel: NET: Registered PF_PACKET protocol family Jun 20 19:30:39.779229 kernel: Key type dns_resolver registered Jun 20 19:30:39.779236 kernel: IPI shorthand broadcast: enabled Jun 20 19:30:39.779242 kernel: sched_clock: Marking stable (2703004444, 171349399)->(2887440001, -13086158) Jun 20 19:30:39.779249 kernel: registered taskstats version 1 Jun 20 19:30:39.779255 kernel: Loading compiled-in X.509 certificates Jun 20 19:30:39.779261 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94' Jun 20 19:30:39.779267 kernel: Demotion targets for Node 0: null Jun 20 19:30:39.779274 kernel: Key type .fscrypt registered Jun 20 19:30:39.779280 kernel: Key type fscrypt-provisioning registered Jun 20 19:30:39.779287 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 19:30:39.779294 kernel: ima: Allocated hash algorithm: sha1 Jun 20 19:30:39.779303 kernel: ima: No architecture policies found Jun 20 19:30:39.779310 kernel: clk: Disabling unused clocks Jun 20 19:30:39.779316 kernel: Warning: unable to open an initial console. 
Jun 20 19:30:39.779323 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 20 19:30:39.779329 kernel: Write protecting the kernel read-only data: 24576k Jun 20 19:30:39.779335 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 20 19:30:39.779342 kernel: Run /init as init process Jun 20 19:30:39.779350 kernel: with arguments: Jun 20 19:30:39.779357 kernel: /init Jun 20 19:30:39.779363 kernel: with environment: Jun 20 19:30:39.779369 kernel: HOME=/ Jun 20 19:30:39.779375 kernel: TERM=linux Jun 20 19:30:39.779381 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 19:30:39.779388 systemd[1]: Successfully made /usr/ read-only. Jun 20 19:30:39.779397 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:30:39.779405 systemd[1]: Detected virtualization vmware. Jun 20 19:30:39.779411 systemd[1]: Detected architecture x86-64. Jun 20 19:30:39.779417 systemd[1]: Running in initrd. Jun 20 19:30:39.779424 systemd[1]: No hostname configured, using default hostname. Jun 20 19:30:39.779430 systemd[1]: Hostname set to . Jun 20 19:30:39.779437 systemd[1]: Initializing machine ID from random generator. Jun 20 19:30:39.779443 systemd[1]: Queued start job for default target initrd.target. Jun 20 19:30:39.779449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:30:39.779457 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:30:39.779465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 19:30:39.779472 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:30:39.779478 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:30:39.779485 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:30:39.779492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:30:39.779498 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:30:39.779506 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:30:39.779513 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:30:39.779519 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:30:39.779526 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:30:39.779532 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:30:39.779538 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:30:39.779545 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:30:39.779551 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:30:39.779558 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:30:39.779565 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:30:39.779572 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jun 20 19:30:39.779578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:30:39.779585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:30:39.779591 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:30:39.779598 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:30:39.779604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:30:39.779611 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:30:39.779618 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:30:39.779625 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:30:39.779632 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:30:39.779638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:30:39.779645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:39.779651 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:30:39.779672 systemd-journald[244]: Collecting audit messages is disabled. Jun 20 19:30:39.779689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:30:39.779697 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:30:39.779704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:30:39.779710 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:30:39.779717 kernel: Bridge firewalling registered Jun 20 19:30:39.779724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:30:39.779730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:30:39.779737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:39.779743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:30:39.779751 systemd-journald[244]: Journal started Jun 20 19:30:39.779767 systemd-journald[244]: Runtime Journal (/run/log/journal/2ac454ea9f544386b938553b2597e296) is 4.8M, max 38.8M, 34M free. Jun 20 19:30:39.746869 systemd-modules-load[245]: Inserted module 'overlay' Jun 20 19:30:39.766042 systemd-modules-load[245]: Inserted module 'br_netfilter' Jun 20 19:30:39.783896 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:30:39.786874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:30:39.789428 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:30:39.792493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:30:39.793644 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:30:39.795181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:30:39.800035 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:30:39.801946 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jun 20 19:30:39.802742 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:30:39.804330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:30:39.807273 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:30:39.813157 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:30:39.838104 systemd-resolved[284]: Positive Trust Anchors: Jun 20 19:30:39.838113 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:30:39.838136 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:30:39.840203 systemd-resolved[284]: Defaulting to hostname 'linux'. Jun 20 19:30:39.840791 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:30:39.841105 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:30:39.868875 kernel: SCSI subsystem initialized Jun 20 19:30:39.885908 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:30:39.894877 kernel: iscsi: registered transport (tcp) Jun 20 19:30:39.918876 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:30:39.918920 kernel: QLogic iSCSI HBA Driver Jun 20 19:30:39.929383 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:30:39.940211 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:30:39.941457 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:30:39.966552 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:30:39.967728 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:30:40.015870 kernel: raid6: avx2x4 gen() 47456 MB/s Jun 20 19:30:40.032864 kernel: raid6: avx2x2 gen() 52787 MB/s Jun 20 19:30:40.050422 kernel: raid6: avx2x1 gen() 44695 MB/s Jun 20 19:30:40.050454 kernel: raid6: using algorithm avx2x2 gen() 52787 MB/s Jun 20 19:30:40.068069 kernel: raid6: .... xor() 31668 MB/s, rmw enabled Jun 20 19:30:40.068114 kernel: raid6: using avx2x2 recovery algorithm Jun 20 19:30:40.082875 kernel: xor: automatically using best checksumming function avx Jun 20 19:30:40.188877 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:30:40.192275 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:30:40.193573 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 20 19:30:40.210902 systemd-udevd[493]: Using default interface naming scheme 'v255'. Jun 20 19:30:40.214374 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:30:40.215776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:30:40.239736 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Jun 20 19:30:40.255059 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:30:40.256152 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:30:40.330026 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:30:40.332014 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 19:30:40.407198 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jun 20 19:30:40.407240 kernel: vmw_pvscsi: using 64bit dma Jun 20 19:30:40.409064 kernel: vmw_pvscsi: max_id: 16 Jun 20 19:30:40.409087 kernel: vmw_pvscsi: setting ring_pages to 8 Jun 20 19:30:40.418711 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI Jun 20 19:30:40.418748 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jun 20 19:30:40.422882 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jun 20 19:30:40.425915 kernel: vmw_pvscsi: enabling reqCallThreshold Jun 20 19:30:40.425940 kernel: vmw_pvscsi: driver-based request coalescing enabled Jun 20 19:30:40.425950 kernel: vmw_pvscsi: using MSI-X Jun 20 19:30:40.428392 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jun 20 19:30:40.428496 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jun 20 19:30:40.429975 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jun 20 19:30:40.438869 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:30:40.442868 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 Jun 20 19:30:40.453890 kernel: AES CTR mode by8 optimization enabled Jun 20 19:30:40.453931 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jun 20 19:30:40.456432 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jun 20 19:30:40.456535 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 19:30:40.456627 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jun 20 19:30:40.456706 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jun 20 19:30:40.456785 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jun 20 19:30:40.459070 (udev-worker)[546]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 20 19:30:40.461644 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:30:40.461896 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:40.462936 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:40.467492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:40.468929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:30:40.472327 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 19:30:40.482120 kernel: libata version 3.00 loaded. 
Jun 20 19:30:40.486111 kernel: ata_piix 0000:00:07.1: version 2.13 Jun 20 19:30:40.490998 kernel: scsi host1: ata_piix Jun 20 19:30:40.492874 kernel: scsi host2: ata_piix Jun 20 19:30:40.492962 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 Jun 20 19:30:40.492972 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 Jun 20 19:30:40.494016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:40.571083 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jun 20 19:30:40.576386 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jun 20 19:30:40.581831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 20 19:30:40.586244 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jun 20 19:30:40.586393 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jun 20 19:30:40.587170 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:30:40.666877 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jun 20 19:30:40.672890 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jun 20 19:30:40.676878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:30:40.685873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:30:40.721882 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jun 20 19:30:40.722034 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:30:40.733881 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 20 19:30:41.037514 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:30:41.037884 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:30:41.038026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:30:41.038232 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:30:41.039087 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:30:41.053717 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:30:41.749901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 19:30:41.750374 disk-uuid[646]: The operation has completed successfully. Jun 20 19:30:42.054721 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:30:42.054789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:30:42.055629 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:30:42.066697 sh[678]: Success Jun 20 19:30:42.091986 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:30:42.092029 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:30:42.093169 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:30:42.099873 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jun 20 19:30:42.264265 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:30:42.265651 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:30:42.275467 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 19:30:42.287875 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:30:42.289897 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (690) Jun 20 19:30:42.292874 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:30:42.292901 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:30:42.292913 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:30:42.343527 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:30:42.343954 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:30:42.344652 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jun 20 19:30:42.345921 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:30:42.386875 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (713) Jun 20 19:30:42.386917 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:30:42.389103 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:30:42.389132 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:30:42.398869 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:30:42.399160 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:30:42.400259 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 19:30:42.476048 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 20 19:30:42.476772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:30:42.540017 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:30:42.541558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:30:42.575723 ignition[732]: Ignition 2.21.0 Jun 20 19:30:42.575734 ignition[732]: Stage: fetch-offline Jun 20 19:30:42.575758 ignition[732]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:42.575763 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:30:42.575820 ignition[732]: parsed url from cmdline: "" Jun 20 19:30:42.575822 ignition[732]: no config URL provided Jun 20 19:30:42.575826 ignition[732]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:30:42.575833 ignition[732]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:30:42.576286 ignition[732]: config successfully fetched Jun 20 19:30:42.576309 ignition[732]: parsing config with SHA512: 1b7243ae1789f0e95be9c7993adb66c41c17ebc0bf7a4cd3632ceeea1e405a1c6d3d1594eceaa863fbe4d4b10f1630044a4c560d16a96cb08ed8b0f1ca61ef88 Jun 20 19:30:42.582265 unknown[732]: fetched base config from "system" Jun 20 19:30:42.582275 unknown[732]: fetched user config from "vmware" Jun 20 19:30:42.583387 ignition[732]: fetch-offline: fetch-offline passed Jun 20 19:30:42.583551 ignition[732]: Ignition finished successfully Jun 20 19:30:42.584759 systemd-networkd[864]: lo: Link UP Jun 20 19:30:42.584764 systemd-networkd[864]: lo: Gained carrier Jun 20 19:30:42.585261 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 20 19:30:42.585555 systemd-networkd[864]: Enumeration completed Jun 20 19:30:42.585602 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:30:42.585847 systemd-networkd[864]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jun 20 19:30:42.586204 systemd[1]: Reached target network.target - Network. Jun 20 19:30:42.588908 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 20 19:30:42.589051 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 20 19:30:42.586473 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 20 19:30:42.587107 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:30:42.589318 systemd-networkd[864]: ens192: Link UP Jun 20 19:30:42.589321 systemd-networkd[864]: ens192: Gained carrier Jun 20 19:30:42.606345 ignition[873]: Ignition 2.21.0 Jun 20 19:30:42.606631 ignition[873]: Stage: kargs Jun 20 19:30:42.606846 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:42.607009 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:30:42.607769 ignition[873]: kargs: kargs passed Jun 20 19:30:42.607808 ignition[873]: Ignition finished successfully Jun 20 19:30:42.609793 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:30:42.610587 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 19:30:42.626507 ignition[880]: Ignition 2.21.0 Jun 20 19:30:42.626776 ignition[880]: Stage: disks Jun 20 19:30:42.626979 ignition[880]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:42.627108 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:30:42.627780 ignition[880]: disks: disks passed Jun 20 19:30:42.627924 ignition[880]: Ignition finished successfully Jun 20 19:30:42.628644 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:30:42.629000 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:30:42.629120 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:30:42.629316 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:30:42.629538 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:30:42.629711 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:30:42.630406 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:30:42.671090 systemd-fsck[888]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jun 20 19:30:42.672093 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:30:42.673113 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:30:42.963895 kernel: EXT4-fs (sda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:30:42.964962 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:30:42.965575 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:30:42.967913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:30:42.968989 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:30:42.969503 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jun 20 19:30:42.969541 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:30:42.969562 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:30:42.975572 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:30:42.976422 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:30:42.981879 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (896) Jun 20 19:30:42.985565 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:30:42.985593 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:30:42.985607 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:30:42.989697 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:30:43.013498 initrd-setup-root[920]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:30:43.015823 initrd-setup-root[927]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:30:43.018709 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:30:43.020775 initrd-setup-root[941]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:30:43.136836 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:30:43.138162 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:30:43.139994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:30:43.153875 kernel: BTRFS info (device sda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:30:43.171844 ignition[1011]: INFO : Ignition 2.21.0 Jun 20 19:30:43.171844 ignition[1011]: INFO : Stage: mount Jun 20 19:30:43.172199 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:43.172199 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:30:43.173173 ignition[1011]: INFO : mount: mount passed Jun 20 19:30:43.173173 ignition[1011]: INFO : Ignition finished successfully Jun 20 19:30:43.174334 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:30:43.175911 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:30:43.178307 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 19:30:43.287504 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:30:43.288572 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:30:43.308402 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1024) Jun 20 19:30:43.308442 kernel: BTRFS info (device sda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:30:43.309899 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:30:43.309919 kernel: BTRFS info (device sda6): using free-space-tree Jun 20 19:30:43.316551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:30:43.338335 ignition[1040]: INFO : Ignition 2.21.0 Jun 20 19:30:43.338335 ignition[1040]: INFO : Stage: files Jun 20 19:30:43.338746 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:43.338746 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:30:43.339252 ignition[1040]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:30:43.340113 ignition[1040]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:30:43.340113 ignition[1040]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:30:43.342214 ignition[1040]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:30:43.342507 ignition[1040]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:30:43.342736 unknown[1040]: wrote ssh authorized keys file for user: core Jun 20 19:30:43.343058 ignition[1040]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:30:43.344630 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:30:43.344630 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 20 19:30:43.384711 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:30:43.641934 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 20 19:30:43.641934 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:30:43.642330 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 19:30:44.135060 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:30:44.184666 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 
19:30:44.196652 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:30:44.196891 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:30:44.196891 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:30:44.200744 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:30:44.200987 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:30:44.200987 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 20 19:30:44.417959 systemd-networkd[864]: ens192: Gained IPv6LL Jun 20 19:30:44.864589 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 19:30:45.253898 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 20 19:30:45.254276 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 20 19:30:45.260112 ignition[1040]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jun 20 19:30:45.260377 ignition[1040]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Jun 20 19:30:45.283706 ignition[1040]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:30:45.291155 ignition[1040]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:30:45.291155 ignition[1040]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Jun 20 19:30:45.291155 ignition[1040]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" Jun 20 19:30:45.291643 ignition[1040]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 20 19:30:45.291643 ignition[1040]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 20 19:30:45.291643 ignition[1040]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" Jun 20 19:30:45.291643 ignition[1040]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Jun 20 19:30:45.453088 ignition[1040]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 20 19:30:45.455329 ignition[1040]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 20 19:30:45.455617 ignition[1040]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Jun 20 19:30:45.455617 ignition[1040]: INFO : files: 
op(13): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:30:45.455617 ignition[1040]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:30:45.455617 ignition[1040]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:30:45.455617 ignition[1040]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:30:45.455617 ignition[1040]: INFO : files: files passed Jun 20 19:30:45.457278 ignition[1040]: INFO : Ignition finished successfully Jun 20 19:30:45.456735 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:30:45.457933 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:30:45.458939 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:30:45.468776 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:30:45.468850 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 19:30:45.471806 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:30:45.472098 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:30:45.473032 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:30:45.473917 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:30:45.474387 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:30:45.475302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:30:45.513138 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:30:45.513198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:30:45.513507 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:30:45.513646 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:30:45.513879 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:30:45.514343 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:30:45.529317 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:30:45.530116 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:30:45.544682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:30:45.545072 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:30:45.545418 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 19:30:45.545577 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:30:45.545657 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:30:45.546293 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:30:45.546456 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:30:45.546700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:30:45.546999 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jun 20 19:30:45.547319 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:30:45.547628 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:30:45.547942 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:30:45.548383 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:30:45.548732 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:30:45.549036 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:30:45.549313 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:30:45.549565 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:30:45.549738 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:30:45.550137 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:30:45.550440 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:30:45.550719 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:30:45.550900 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:30:45.551151 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:30:45.551226 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 19:30:45.551692 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:30:45.551875 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:30:45.552185 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:30:45.552424 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:30:45.555883 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:30:45.556053 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:30:45.556338 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:30:45.556523 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:30:45.556583 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:30:45.556749 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:30:45.556797 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:30:45.556990 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:30:45.557083 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:30:45.557331 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:30:45.557393 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:30:45.558951 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:30:45.559477 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:30:45.559581 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:30:45.559646 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:30:45.559800 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:30:45.559867 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:30:45.563125 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:30:45.566446 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jun 20 19:30:45.572440 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:30:45.575975 ignition[1097]: INFO : Ignition 2.21.0 Jun 20 19:30:45.575975 ignition[1097]: INFO : Stage: umount Jun 20 19:30:45.577160 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:30:45.577160 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jun 20 19:30:45.577160 ignition[1097]: INFO : umount: umount passed Jun 20 19:30:45.577160 ignition[1097]: INFO : Ignition finished successfully Jun 20 19:30:45.578424 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:30:45.578502 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:30:45.578731 systemd[1]: Stopped target network.target - Network. Jun 20 19:30:45.578813 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:30:45.578839 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:30:45.579007 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:30:45.579028 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:30:45.579173 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:30:45.579193 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:30:45.579341 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:30:45.579361 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:30:45.579565 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:30:45.579983 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 19:30:45.581641 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:30:45.581709 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:30:45.583085 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:30:45.583232 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:30:45.583256 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:30:45.584091 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:30:45.587668 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:30:45.587770 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:30:45.588513 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:30:45.588602 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 19:30:45.588920 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:30:45.588939 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:30:45.589502 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:30:45.589589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:30:45.589614 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:30:45.589729 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jun 20 19:30:45.589753 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jun 20 19:30:45.589942 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jun 20 19:30:45.589965 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:30:45.591823 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:30:45.591847 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:30:45.592111 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:30:45.593589 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:30:45.600229 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:30:45.600304 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:30:45.600702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:30:45.600739 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:30:45.601312 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:30:45.601328 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:30:45.601485 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:30:45.601508 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:30:45.601788 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:30:45.601813 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:30:45.602130 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:30:45.602155 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:30:45.602885 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:30:45.602980 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:30:45.603007 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:30:45.605719 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 19:30:45.605746 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:30:45.606070 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 20 19:30:45.606092 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:30:45.606410 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:30:45.606432 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:30:45.606630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:30:45.606651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:45.611917 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:30:45.612222 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:30:45.613094 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:30:45.613287 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:30:45.766834 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:30:45.767096 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:30:45.767497 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:30:45.767736 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jun 20 19:30:45.767886 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:30:45.768649 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:30:45.792222 systemd[1]: Switching root. Jun 20 19:30:45.823350 systemd-journald[244]: Journal stopped Jun 20 19:30:47.437909 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Jun 20 19:30:47.437940 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:30:47.437949 kernel: SELinux: policy capability open_perms=1 Jun 20 19:30:47.437955 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:30:47.437961 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:30:47.437968 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:30:47.437974 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:30:47.437980 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:30:47.437986 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:30:47.437991 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:30:47.437997 kernel: audit: type=1403 audit(1750447846.684:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:30:47.438004 systemd[1]: Successfully loaded SELinux policy in 36.334ms. Jun 20 19:30:47.438012 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.447ms. Jun 20 19:30:47.438020 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:30:47.438031 systemd[1]: Detected virtualization vmware. Jun 20 19:30:47.438039 systemd[1]: Detected architecture x86-64. Jun 20 19:30:47.438046 systemd[1]: Detected first boot. Jun 20 19:30:47.438053 systemd[1]: Initializing machine ID from random generator. Jun 20 19:30:47.438060 zram_generator::config[1140]: No configuration found. Jun 20 19:30:47.438157 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jun 20 19:30:47.438169 kernel: Guest personality initialized and is active Jun 20 19:30:47.438176 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 19:30:47.438182 kernel: Initialized host personality Jun 20 19:30:47.438190 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:30:47.438197 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:30:47.438205 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:30:47.438212 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jun 20 19:30:47.438222 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:30:47.438229 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:30:47.438235 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:30:47.438243 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:30:47.438253 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:30:47.438262 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jun 20 19:30:47.438269 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:30:47.438276 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:30:47.438283 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:30:47.438289 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:30:47.438297 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:30:47.438311 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:30:47.438319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:30:47.438329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:30:47.438336 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 19:30:47.438343 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:30:47.438349 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:30:47.438356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:30:47.438365 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:30:47.438372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:30:47.438383 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:30:47.438390 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:30:47.438397 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:30:47.438404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:30:47.438411 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:30:47.438420 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:30:47.438433 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:30:47.438443 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:30:47.438451 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:30:47.438465 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:30:47.438475 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:30:47.438485 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:30:47.438492 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:30:47.438503 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:30:47.438510 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:30:47.438517 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:30:47.438525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:30:47.438537 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:30:47.438546 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:30:47.438556 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 19:30:47.438569 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:30:47.438578 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:30:47.438585 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:30:47.438596 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:30:47.438604 systemd[1]: Reached target machines.target - Containers. Jun 20 19:30:47.438611 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:30:47.438618 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jun 20 19:30:47.438631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:30:47.438639 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:30:47.438645 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:30:47.438653 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:30:47.438665 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:30:47.438675 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 19:30:47.438682 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:30:47.438690 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:30:47.438702 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:30:47.438710 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:30:47.438717 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:30:47.438727 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:30:47.438736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:30:47.438743 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:30:47.438750 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:30:47.438762 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:30:47.438771 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:30:47.438780 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:30:47.438791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:30:47.438799 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:30:47.438806 systemd[1]: Stopped verity-setup.service. Jun 20 19:30:47.438813 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:30:47.438841 systemd-journald[1223]: Collecting audit messages is disabled. Jun 20 19:30:47.438880 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:30:47.438889 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jun 20 19:30:47.438897 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:30:47.441871 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:30:47.441907 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:30:47.441922 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:30:47.441939 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:30:47.441953 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:30:47.441965 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:30:47.441978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:30:47.441989 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:30:47.441996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:30:47.442003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:30:47.442012 systemd-journald[1223]: Journal started Jun 20 19:30:47.442035 systemd-journald[1223]: Runtime Journal (/run/log/journal/5bed03b11982432b8c947a1d3ddc8222) is 4.8M, max 38.8M, 34M free. Jun 20 19:30:47.252378 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:30:47.272524 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 19:30:47.272827 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:30:47.443931 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:30:47.444050 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:30:47.444166 jq[1210]: true Jun 20 19:30:47.452097 kernel: fuse: init (API version 7.41) Jun 20 19:30:47.452001 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:30:47.452328 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:30:47.452938 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:30:47.453776 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:30:47.459397 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 19:30:47.459639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:30:47.464033 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:30:47.467868 kernel: loop: module loaded Jun 20 19:30:47.468021 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:30:47.468192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:30:47.473038 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:30:47.476932 jq[1241]: true Jun 20 19:30:47.477456 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:30:47.479035 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:30:47.482508 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:30:47.482641 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jun 20 19:30:47.482978 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:30:47.483101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:30:47.483878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:30:47.484186 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:30:47.492265 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:30:47.495050 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:30:47.498002 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:30:47.498473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:30:47.500740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:30:47.502084 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:30:47.502877 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:30:47.528679 systemd-journald[1223]: Time spent on flushing to /var/log/journal/5bed03b11982432b8c947a1d3ddc8222 is 33.661ms for 1757 entries. Jun 20 19:30:47.528679 systemd-journald[1223]: System Journal (/var/log/journal/5bed03b11982432b8c947a1d3ddc8222) is 8M, max 584.8M, 576.8M free. Jun 20 19:30:47.571934 systemd-journald[1223]: Received client request to flush runtime journal. Jun 20 19:30:47.571960 kernel: loop0: detected capacity change from 0 to 146240 Jun 20 19:30:47.527535 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:30:47.551598 ignition[1269]: Ignition 2.21.0 Jun 20 19:30:47.527760 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 19:30:47.551797 ignition[1269]: deleting config from guestinfo properties Jun 20 19:30:47.530265 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:30:47.531033 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:30:47.596015 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jun 20 19:30:47.596569 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jun 20 19:30:47.612521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:30:47.618673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 19:30:47.625371 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:30:47.625906 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:30:47.630053 ignition[1269]: Successfully deleted config Jun 20 19:30:47.637869 kernel: ACPI: bus type drm_connector registered Jun 20 19:30:47.643724 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:30:47.642331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:30:47.642829 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:30:47.642995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:30:47.644909 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jun 20 19:30:47.650003 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jun 20 19:30:47.663485 kernel: loop1: detected capacity change from 0 to 113872 Jun 20 19:30:47.689034 kernel: loop2: detected capacity change from 0 to 224512 Jun 20 19:30:47.690030 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:30:47.692037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:30:47.716200 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jun 20 19:30:47.716431 systemd-tmpfiles[1315]: ACLs are not supported, ignoring. Jun 20 19:30:47.720414 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:30:47.794881 kernel: loop3: detected capacity change from 0 to 2960 Jun 20 19:30:47.823880 kernel: loop4: detected capacity change from 0 to 146240 Jun 20 19:30:48.274467 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:30:48.340881 kernel: loop5: detected capacity change from 0 to 113872 Jun 20 19:30:48.631877 kernel: loop6: detected capacity change from 0 to 224512 Jun 20 19:30:49.005876 kernel: loop7: detected capacity change from 0 to 2960 Jun 20 19:30:49.271482 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jun 20 19:30:49.271786 (sd-merge)[1320]: Merged extensions into '/usr'. Jun 20 19:30:49.279505 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:30:49.279514 systemd[1]: Reloading... Jun 20 19:30:49.329868 zram_generator::config[1342]: No configuration found. Jun 20 19:30:49.411884 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:30:49.421301 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:30:49.466429 systemd[1]: Reloading finished in 186 ms. Jun 20 19:30:49.487499 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:30:49.487847 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:30:49.498898 systemd[1]: Starting ensure-sysext.service... Jun 20 19:30:49.499933 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:30:49.503876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:30:49.516740 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:30:49.516761 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:30:49.516975 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:30:49.517131 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:30:49.517606 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:30:49.517771 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Jun 20 19:30:49.517807 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. 
Jun 20 19:30:49.523162 systemd[1]: Reload requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:30:49.523172 systemd[1]: Reloading... Jun 20 19:30:49.525446 systemd-udevd[1404]: Using default interface naming scheme 'v255'. Jun 20 19:30:49.540972 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:30:49.541078 systemd-tmpfiles[1403]: Skipping /boot Jun 20 19:30:49.548951 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:30:49.549046 systemd-tmpfiles[1403]: Skipping /boot Jun 20 19:30:49.566871 zram_generator::config[1432]: No configuration found. Jun 20 19:30:49.648517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:30:49.657380 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:30:49.704133 systemd[1]: Reloading finished in 180 ms. Jun 20 19:30:49.715979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:30:49.726676 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:30:49.736380 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:30:49.738927 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:30:49.745137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:30:49.746363 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:30:49.749521 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:30:49.750264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:30:49.752209 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:30:49.753986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:30:49.754163 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:30:49.754234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:30:49.754301 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:30:49.755663 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:30:49.755762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:30:49.755824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jun 20 19:30:49.755895 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:30:49.758380 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:30:49.759724 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:30:49.759952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:30:49.760019 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:30:49.760115 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:30:49.760464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:30:49.761597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:30:49.762841 systemd[1]: Finished ensure-sysext.service. Jun 20 19:30:49.769110 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:30:49.771709 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:30:49.775227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:30:49.775369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:30:49.775645 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:30:49.775761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:30:49.776112 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:30:49.776214 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:30:49.777152 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:30:49.777301 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:30:49.780244 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:30:49.824173 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:30:49.849422 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:30:49.849852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:30:49.853642 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:30:49.866149 augenrules[1543]: No rules Jun 20 19:30:49.867575 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:30:49.868964 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:30:49.937956 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:30:49.938168 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:30:49.947902 systemd-networkd[1531]: lo: Link UP Jun 20 19:30:49.947907 systemd-networkd[1531]: lo: Gained carrier Jun 20 19:30:49.948317 systemd-networkd[1531]: Enumeration completed Jun 20 19:30:49.948365 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 20 19:30:49.949933 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:30:49.951727 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:30:49.955125 systemd-resolved[1493]: Positive Trust Anchors: Jun 20 19:30:49.955313 systemd-resolved[1493]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:30:49.955367 systemd-resolved[1493]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:30:49.984294 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:30:49.992968 systemd-resolved[1493]: Defaulting to hostname 'linux'. Jun 20 19:30:49.994098 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:30:49.994286 systemd[1]: Reached target network.target - Network. Jun 20 19:30:49.994382 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:30:49.999168 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:30:50.020096 systemd-networkd[1531]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jun 20 19:30:50.022110 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jun 20 19:30:50.022250 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jun 20 19:30:50.024110 systemd-networkd[1531]: ens192: Link UP Jun 20 19:30:50.024206 systemd-networkd[1531]: ens192: Gained carrier Jun 20 19:30:50.028495 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Jun 20 19:30:50.040884 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 19:30:50.044871 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:30:50.048876 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:30:50.119876 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jun 20 19:30:50.172588 (udev-worker)[1562]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jun 20 19:30:50.215684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jun 20 19:30:50.217872 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 19:30:50.228790 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:30:50.244060 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:30:50.534157 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:30:50.534404 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
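Above, systemd-networkd configures ens192 from /etc/systemd/network/00-vmware.network. The actual contents of that file are not shown in the log; the sketch below is only an illustration of the kind of .network unit that produces this behavior, with the match name and DHCP setting assumed rather than taken from this host:

    # Hypothetical contents for a file like /etc/systemd/network/00-vmware.network
    # (illustrative only; the real file on this machine is not reproduced in the log).
    cat <<'EOF' >/etc/systemd/network/00-vmware.network
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    EOF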
Jun 20 19:30:50.600407 ldconfig[1262]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:30:50.622701 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:30:50.624232 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:30:50.624622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:30:50.656990 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:30:50.657367 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:30:50.657628 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:30:50.657809 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:30:50.657988 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:30:50.658239 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:30:50.658423 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:30:50.658580 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:30:50.658709 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:30:50.658733 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:30:50.658834 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:30:50.662371 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 19:30:50.663536 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:30:50.665486 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:30:50.665716 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:30:50.665909 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:30:50.668188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:30:50.668565 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:30:50.669106 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:30:50.669634 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:30:50.669737 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:30:50.669875 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:30:50.669894 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:30:50.670676 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:30:50.672925 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 19:30:50.674439 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:30:50.675958 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:30:50.677570 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jun 20 19:30:50.677678 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:30:50.678976 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:30:50.681002 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:30:50.682800 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:30:50.691333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:30:50.692725 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:30:50.695136 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:30:50.695700 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:30:50.696145 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 19:30:50.698551 jq[1614]: false Jun 20 19:30:50.698853 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:30:50.707210 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:30:50.708273 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jun 20 19:30:50.711105 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:30:50.711546 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:30:50.711665 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:30:50.716685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:30:50.719316 oslogin_cache_refresh[1616]: Refreshing passwd entry cache Jun 20 19:30:50.720112 google_oslogin_nss_cache[1616]: oslogin_cache_refresh[1616]: Refreshing passwd entry cache Jun 20 19:30:50.716822 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:30:50.723473 extend-filesystems[1615]: Found /dev/sda6 Jun 20 19:30:50.723906 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:30:50.725351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:30:50.725711 google_oslogin_nss_cache[1616]: oslogin_cache_refresh[1616]: Failure getting users, quitting Jun 20 19:30:50.725706 oslogin_cache_refresh[1616]: Failure getting users, quitting Jun 20 19:30:50.725843 google_oslogin_nss_cache[1616]: oslogin_cache_refresh[1616]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:30:50.725717 oslogin_cache_refresh[1616]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:30:50.730769 jq[1628]: true Jun 20 19:30:50.733869 google_oslogin_nss_cache[1616]: oslogin_cache_refresh[1616]: Refreshing group entry cache Jun 20 19:30:50.733669 oslogin_cache_refresh[1616]: Refreshing group entry cache Jun 20 19:30:50.738976 google_oslogin_nss_cache[1616]: oslogin_cache_refresh[1616]: Failure getting groups, quitting Jun 20 19:30:50.738976 google_oslogin_nss_cache[1616]: oslogin_cache_refresh[1616]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jun 20 19:30:50.738927 oslogin_cache_refresh[1616]: Failure getting groups, quitting Jun 20 19:30:50.738936 oslogin_cache_refresh[1616]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:30:50.741503 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:30:50.741894 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 20 19:30:50.744325 extend-filesystems[1615]: Found /dev/sda9 Jun 20 19:30:50.746270 extend-filesystems[1615]: Checking size of /dev/sda9 Jun 20 19:30:50.749988 (ntainerd)[1645]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:30:50.750937 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jun 20 19:30:50.753286 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jun 20 19:30:50.754913 update_engine[1623]: I20250620 19:30:50.753688 1623 main.cc:92] Flatcar Update Engine starting Jun 20 19:30:50.761772 jq[1646]: true Jun 20 19:30:50.766306 tar[1632]: linux-amd64/LICENSE Jun 20 19:30:50.766744 tar[1632]: linux-amd64/helm Jun 20 19:30:50.786901 extend-filesystems[1615]: Old size kept for /dev/sda9 Jun 20 19:30:50.786496 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:30:50.786652 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 19:30:50.804559 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jun 20 19:30:50.811461 systemd-logind[1622]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:30:50.812352 systemd-logind[1622]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:30:50.814238 systemd-logind[1622]: New seat seat0. Jun 20 19:30:50.816423 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:30:50.820312 unknown[1653]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jun 20 19:30:50.825959 unknown[1653]: Core dump limit set to -1 Jun 20 19:30:50.866708 bash[1680]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:30:50.869153 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:30:50.869580 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:30:50.881729 dbus-daemon[1612]: [system] SELinux support is enabled Jun 20 19:30:50.882124 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:30:50.885318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:30:50.885340 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:30:50.885902 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:30:50.885914 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jun 20 19:30:50.904813 update_engine[1623]: I20250620 19:30:50.896878 1623 update_check_scheduler.cc:74] Next update check in 4m51s Jun 20 19:30:50.898450 dbus-daemon[1612]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 20 19:30:50.897288 systemd[1]: Started update-engine.service - Update Engine. Jun 20 19:30:50.903338 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:30:50.909886 sshd_keygen[1655]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:30:50.945181 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:30:50.948048 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:30:50.966353 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:30:50.966489 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:30:50.968139 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:30:50.993445 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:30:50.996127 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:30:50.997049 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:30:50.998023 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:30:51.024401 locksmithd[1686]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:30:51.107271 containerd[1645]: time="2025-06-20T19:30:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:30:51.109619 containerd[1645]: time="2025-06-20T19:30:51.109598705Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:30:51.122326 containerd[1645]: time="2025-06-20T19:30:51.122295066Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.98µs" Jun 20 19:30:51.122326 containerd[1645]: time="2025-06-20T19:30:51.122319504Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:30:51.122326 containerd[1645]: time="2025-06-20T19:30:51.122331688Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:30:51.122439 containerd[1645]: time="2025-06-20T19:30:51.122424956Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:30:51.122439 containerd[1645]: time="2025-06-20T19:30:51.122437960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:30:51.122481 containerd[1645]: time="2025-06-20T19:30:51.122453227Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122498 containerd[1645]: time="2025-06-20T19:30:51.122487034Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122498 containerd[1645]: time="2025-06-20T19:30:51.122494260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122644 containerd[1645]: time="2025-06-20T19:30:51.122628714Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122644 containerd[1645]: time="2025-06-20T19:30:51.122640794Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122675 containerd[1645]: time="2025-06-20T19:30:51.122647401Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122675 containerd[1645]: time="2025-06-20T19:30:51.122652097Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122701 containerd[1645]: time="2025-06-20T19:30:51.122694670Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122829 containerd[1645]: time="2025-06-20T19:30:51.122816104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122846 containerd[1645]: time="2025-06-20T19:30:51.122838939Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:30:51.122888 containerd[1645]: time="2025-06-20T19:30:51.122847021Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:30:51.124934 containerd[1645]: time="2025-06-20T19:30:51.124917484Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:30:51.125251 containerd[1645]: time="2025-06-20T19:30:51.125234767Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:30:51.125290 containerd[1645]: time="2025-06-20T19:30:51.125283320Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:30:51.128238 containerd[1645]: time="2025-06-20T19:30:51.128213853Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:30:51.128295 containerd[1645]: time="2025-06-20T19:30:51.128246286Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:30:51.128295 containerd[1645]: time="2025-06-20T19:30:51.128255941Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:30:51.128295 containerd[1645]: time="2025-06-20T19:30:51.128262933Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:30:51.128295 containerd[1645]: time="2025-06-20T19:30:51.128270355Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 20 19:30:51.128295 containerd[1645]: time="2025-06-20T19:30:51.128276925Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:30:51.128295 containerd[1645]: time="2025-06-20T19:30:51.128286266Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:30:51.128295 containerd[1645]: time="2025-06-20T19:30:51.128292967Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:30:51.128390 containerd[1645]: time="2025-06-20T19:30:51.128298865Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:30:51.128390 containerd[1645]: time="2025-06-20T19:30:51.128304466Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:30:51.128390 containerd[1645]: time="2025-06-20T19:30:51.128309117Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:30:51.128390 containerd[1645]: time="2025-06-20T19:30:51.128316284Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:30:51.128390 containerd[1645]: time="2025-06-20T19:30:51.128383341Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128395525Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128403732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128411704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128419315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128425294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128433264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128439058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128445717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:30:51.128452 containerd[1645]: time="2025-06-20T19:30:51.128453019Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:30:51.128568 containerd[1645]: time="2025-06-20T19:30:51.128459312Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:30:51.129878 containerd[1645]: time="2025-06-20T19:30:51.129210208Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:30:51.129878 containerd[1645]: time="2025-06-20T19:30:51.129231930Z" level=info msg="Start snapshots syncer" Jun 20 19:30:51.129878 containerd[1645]: time="2025-06-20T19:30:51.129350978Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:30:51.129946 containerd[1645]: time="2025-06-20T19:30:51.129607947Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 20 19:30:51.129946 containerd[1645]: time="2025-06-20T19:30:51.129647300Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 20 19:30:51.130361 containerd[1645]: time="2025-06-20T19:30:51.130345976Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 20 19:30:51.130510 containerd[1645]: time="2025-06-20T19:30:51.130491097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 20 19:30:51.130579 containerd[1645]: time="2025-06-20T19:30:51.130564100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 20 19:30:51.130626 containerd[1645]: time="2025-06-20T19:30:51.130614325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 20 19:30:51.130663 containerd[1645]: time="2025-06-20T19:30:51.130655609Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 20 19:30:51.130710 containerd[1645]: time="2025-06-20T19:30:51.130699851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 20 19:30:51.130746 containerd[1645]: time="2025-06-20T19:30:51.130737437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 20 19:30:51.130783 containerd[1645]: time="2025-06-20T19:30:51.130773418Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 20 19:30:51.130849 containerd[1645]: time="2025-06-20T19:30:51.130834777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 20 19:30:51.130914 containerd[1645]: 
time="2025-06-20T19:30:51.130905364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 20 19:30:51.130950 containerd[1645]: time="2025-06-20T19:30:51.130943169Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 20 19:30:51.131549 containerd[1645]: time="2025-06-20T19:30:51.131533229Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:30:51.131616 containerd[1645]: time="2025-06-20T19:30:51.131599108Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 20 19:30:51.131663 containerd[1645]: time="2025-06-20T19:30:51.131654865Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:30:51.131707 containerd[1645]: time="2025-06-20T19:30:51.131699173Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 20 19:30:51.131745 containerd[1645]: time="2025-06-20T19:30:51.131735409Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 20 19:30:51.131786 containerd[1645]: time="2025-06-20T19:30:51.131778492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 20 19:30:51.132002 containerd[1645]: time="2025-06-20T19:30:51.131989002Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 20 19:30:51.132062 containerd[1645]: time="2025-06-20T19:30:51.132051941Z" level=info msg="runtime interface created" Jun 20 19:30:51.132098 containerd[1645]: time="2025-06-20T19:30:51.132092176Z" level=info msg="created NRI interface" Jun 20 19:30:51.132133 containerd[1645]: time="2025-06-20T19:30:51.132123579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 20 19:30:51.132167 containerd[1645]: time="2025-06-20T19:30:51.132161224Z" level=info msg="Connect containerd service" Jun 20 19:30:51.132879 containerd[1645]: time="2025-06-20T19:30:51.132279354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 19:30:51.132879 containerd[1645]: time="2025-06-20T19:30:51.132828293Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:30:51.248702 containerd[1645]: time="2025-06-20T19:30:51.248677985Z" level=info msg="Start subscribing containerd event" Jun 20 19:30:51.248822 containerd[1645]: time="2025-06-20T19:30:51.248804618Z" level=info msg="Start recovering state" Jun 20 19:30:51.248921 containerd[1645]: time="2025-06-20T19:30:51.248909145Z" level=info msg="Start event monitor" Jun 20 19:30:51.248969 containerd[1645]: time="2025-06-20T19:30:51.248962390Z" level=info msg="Start cni network conf syncer for default" Jun 20 19:30:51.249000 containerd[1645]: time="2025-06-20T19:30:51.248994979Z" level=info msg="Start streaming server" Jun 20 19:30:51.249033 containerd[1645]: time="2025-06-20T19:30:51.249027222Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 20 19:30:51.249060 containerd[1645]: 
time="2025-06-20T19:30:51.249054675Z" level=info msg="runtime interface starting up..." Jun 20 19:30:51.249091 containerd[1645]: time="2025-06-20T19:30:51.249084996Z" level=info msg="starting plugins..." Jun 20 19:30:51.249126 containerd[1645]: time="2025-06-20T19:30:51.249120279Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 20 19:30:51.249212 containerd[1645]: time="2025-06-20T19:30:51.249005259Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 19:30:51.249288 containerd[1645]: time="2025-06-20T19:30:51.249279594Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 19:30:51.249370 containerd[1645]: time="2025-06-20T19:30:51.249362866Z" level=info msg="containerd successfully booted in 0.142323s" Jun 20 19:30:51.249431 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 19:30:51.257097 tar[1632]: linux-amd64/README.md Jun 20 19:30:51.271271 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 19:30:51.842005 systemd-networkd[1531]: ens192: Gained IPv6LL Jun 20 19:30:51.843258 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Jun 20 19:30:51.844069 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 19:30:51.844824 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 19:30:51.847072 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jun 20 19:30:51.849565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:30:51.853822 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 19:30:51.874235 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 19:30:51.895061 systemd[1]: coreos-metadata.service: Deactivated successfully. Jun 20 19:30:51.895219 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jun 20 19:30:51.896070 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 19:30:53.527138 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Jun 20 19:30:54.105866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:30:54.106235 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 19:30:54.108389 systemd[1]: Startup finished in 2.759s (kernel) + 7.084s (initrd) + 7.458s (userspace) = 17.302s. Jun 20 19:30:54.113128 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:30:54.155029 login[1706]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:30:54.155447 login[1707]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 19:30:54.166906 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 19:30:54.168960 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 19:30:54.171254 systemd-logind[1622]: New session 1 of user core. Jun 20 19:30:54.176195 systemd-logind[1622]: New session 2 of user core. Jun 20 19:30:54.183851 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 19:30:54.186036 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jun 20 19:30:54.201605 (systemd)[1816]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 19:30:54.203064 systemd-logind[1622]: New session c1 of user core. Jun 20 19:30:54.301770 systemd[1816]: Queued start job for default target default.target. Jun 20 19:30:54.312679 systemd[1816]: Created slice app.slice - User Application Slice. Jun 20 19:30:54.312700 systemd[1816]: Reached target paths.target - Paths. Jun 20 19:30:54.312726 systemd[1816]: Reached target timers.target - Timers. Jun 20 19:30:54.313465 systemd[1816]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 19:30:54.320076 systemd[1816]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 19:30:54.320163 systemd[1816]: Reached target sockets.target - Sockets. Jun 20 19:30:54.320229 systemd[1816]: Reached target basic.target - Basic System. Jun 20 19:30:54.320295 systemd[1816]: Reached target default.target - Main User Target. Jun 20 19:30:54.320310 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 19:30:54.320408 systemd[1816]: Startup finished in 113ms. Jun 20 19:30:54.321827 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 19:30:54.322391 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 19:30:54.973408 kubelet[1809]: E0620 19:30:54.973369 1809 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:30:54.975078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:30:54.975260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:30:54.975534 systemd[1]: kubelet.service: Consumed 708ms CPU time, 264.5M memory peak. Jun 20 19:31:05.123953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 19:31:05.125107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:05.427027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:31:05.440029 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:31:05.480968 kubelet[1858]: E0620 19:31:05.480928 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:31:05.483357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:31:05.483445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:31:05.483787 systemd[1]: kubelet.service: Consumed 93ms CPU time, 108.6M memory peak. Jun 20 19:31:15.623974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 19:31:15.625251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:31:16.099047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:31:16.101329 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:31:16.137603 kubelet[1873]: E0620 19:31:16.137563 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:31:16.139059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:31:16.139147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:31:16.139544 systemd[1]: kubelet.service: Consumed 92ms CPU time, 110.4M memory peak. Jun 20 19:31:20.991970 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 19:31:20.994217 systemd[1]: Started sshd@0-139.178.70.102:22-147.75.109.163:43020.service - OpenSSH per-connection server daemon (147.75.109.163:43020). Jun 20 19:31:21.085031 sshd[1881]: Accepted publickey for core from 147.75.109.163 port 43020 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:31:21.085880 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:21.088827 systemd-logind[1622]: New session 3 of user core. Jun 20 19:31:21.096237 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 19:31:21.150038 systemd[1]: Started sshd@1-139.178.70.102:22-147.75.109.163:43028.service - OpenSSH per-connection server daemon (147.75.109.163:43028). Jun 20 19:31:21.191076 sshd[1886]: Accepted publickey for core from 147.75.109.163 port 43028 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:31:21.192152 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:21.195689 systemd-logind[1622]: New session 4 of user core. Jun 20 19:31:21.200955 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 19:31:21.250722 sshd[1888]: Connection closed by 147.75.109.163 port 43028 Jun 20 19:31:21.251745 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:21.256464 systemd[1]: sshd@1-139.178.70.102:22-147.75.109.163:43028.service: Deactivated successfully. Jun 20 19:31:21.258040 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 19:31:21.258659 systemd-logind[1622]: Session 4 logged out. Waiting for processes to exit. Jun 20 19:31:21.260346 systemd[1]: Started sshd@2-139.178.70.102:22-147.75.109.163:43040.service - OpenSSH per-connection server daemon (147.75.109.163:43040). Jun 20 19:31:21.262104 systemd-logind[1622]: Removed session 4. Jun 20 19:31:21.303663 sshd[1894]: Accepted publickey for core from 147.75.109.163 port 43040 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:31:21.304442 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:21.307541 systemd-logind[1622]: New session 5 of user core. Jun 20 19:31:21.316974 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 19:31:21.362985 sshd[1896]: Connection closed by 147.75.109.163 port 43040 Jun 20 19:31:21.363428 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:21.374602 systemd[1]: sshd@2-139.178.70.102:22-147.75.109.163:43040.service: Deactivated successfully. 
Jun 20 19:31:21.375681 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 19:31:21.376284 systemd-logind[1622]: Session 5 logged out. Waiting for processes to exit. Jun 20 19:31:21.377932 systemd[1]: Started sshd@3-139.178.70.102:22-147.75.109.163:43050.service - OpenSSH per-connection server daemon (147.75.109.163:43050). Jun 20 19:31:21.379046 systemd-logind[1622]: Removed session 5. Jun 20 19:31:21.435176 sshd[1902]: Accepted publickey for core from 147.75.109.163 port 43050 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:31:21.435910 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:21.438551 systemd-logind[1622]: New session 6 of user core. Jun 20 19:31:21.445965 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 19:31:21.494019 sshd[1904]: Connection closed by 147.75.109.163 port 43050 Jun 20 19:31:21.494325 sshd-session[1902]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:21.504117 systemd[1]: sshd@3-139.178.70.102:22-147.75.109.163:43050.service: Deactivated successfully. Jun 20 19:31:21.505235 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 19:31:21.505742 systemd-logind[1622]: Session 6 logged out. Waiting for processes to exit. Jun 20 19:31:21.507088 systemd[1]: Started sshd@4-139.178.70.102:22-147.75.109.163:41736.service - OpenSSH per-connection server daemon (147.75.109.163:41736). Jun 20 19:31:21.508055 systemd-logind[1622]: Removed session 6. Jun 20 19:31:21.549677 sshd[1910]: Accepted publickey for core from 147.75.109.163 port 41736 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:31:21.550320 sshd-session[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:21.552821 systemd-logind[1622]: New session 7 of user core. Jun 20 19:31:21.560973 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 19:31:21.620098 sudo[1913]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 19:31:21.620336 sudo[1913]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:21.634606 sudo[1913]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:21.636433 sshd[1912]: Connection closed by 147.75.109.163 port 41736 Jun 20 19:31:21.635914 sshd-session[1910]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:21.645436 systemd[1]: sshd@4-139.178.70.102:22-147.75.109.163:41736.service: Deactivated successfully. Jun 20 19:31:21.646466 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:31:21.647031 systemd-logind[1622]: Session 7 logged out. Waiting for processes to exit. Jun 20 19:31:21.648400 systemd[1]: Started sshd@5-139.178.70.102:22-147.75.109.163:41750.service - OpenSSH per-connection server daemon (147.75.109.163:41750). Jun 20 19:31:21.650095 systemd-logind[1622]: Removed session 7. Jun 20 19:31:21.691730 sshd[1919]: Accepted publickey for core from 147.75.109.163 port 41750 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:31:21.692628 sshd-session[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:21.695888 systemd-logind[1622]: New session 8 of user core. Jun 20 19:31:21.710026 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 20 19:31:21.759481 sudo[1923]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 19:31:21.760168 sudo[1923]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:21.776594 sudo[1923]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:21.779569 sudo[1922]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 19:31:21.779716 sudo[1922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:21.785299 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:31:21.821817 augenrules[1945]: No rules Jun 20 19:31:21.822184 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:31:21.822429 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:31:21.823127 sudo[1922]: pam_unix(sudo:session): session closed for user root Jun 20 19:31:21.824882 sshd[1921]: Connection closed by 147.75.109.163 port 41750 Jun 20 19:31:21.824257 sshd-session[1919]: pam_unix(sshd:session): session closed for user core Jun 20 19:31:21.831779 systemd[1]: sshd@5-139.178.70.102:22-147.75.109.163:41750.service: Deactivated successfully. Jun 20 19:31:21.832657 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:31:21.833352 systemd-logind[1622]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:31:21.834655 systemd-logind[1622]: Removed session 8. Jun 20 19:31:21.835639 systemd[1]: Started sshd@6-139.178.70.102:22-147.75.109.163:41752.service - OpenSSH per-connection server daemon (147.75.109.163:41752). Jun 20 19:31:21.875638 sshd[1954]: Accepted publickey for core from 147.75.109.163 port 41752 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:31:21.876391 sshd-session[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:31:21.878919 systemd-logind[1622]: New session 9 of user core. Jun 20 19:31:21.889145 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:31:21.938536 sudo[1957]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 19:31:21.938709 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 19:31:22.622005 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 19:31:22.637167 (dockerd)[1975]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 19:31:22.934792 dockerd[1975]: time="2025-06-20T19:31:22.934761870Z" level=info msg="Starting up" Jun 20 19:31:22.937307 dockerd[1975]: time="2025-06-20T19:31:22.937289705Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 20 19:31:22.981481 dockerd[1975]: time="2025-06-20T19:31:22.981342994Z" level=info msg="Loading containers: start." Jun 20 19:31:23.009896 kernel: Initializing XFRM netlink socket Jun 20 19:31:23.196770 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection. Jun 20 19:31:23.221581 systemd-networkd[1531]: docker0: Link UP Jun 20 19:31:23.222526 dockerd[1975]: time="2025-06-20T19:31:23.222502781Z" level=info msg="Loading containers: done." Jun 20 19:31:23.229849 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1389392623-merged.mount: Deactivated successfully. 
Jun 20 19:31:23.239067 dockerd[1975]: time="2025-06-20T19:31:23.239043685Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 19:31:23.239132 dockerd[1975]: time="2025-06-20T19:31:23.239096180Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 20 19:31:23.239155 dockerd[1975]: time="2025-06-20T19:31:23.239149378Z" level=info msg="Initializing buildkit" Jun 20 19:31:23.284668 dockerd[1975]: time="2025-06-20T19:31:23.284640488Z" level=info msg="Completed buildkit initialization" Jun 20 19:31:23.292086 dockerd[1975]: time="2025-06-20T19:31:23.292061378Z" level=info msg="Daemon has completed initialization" Jun 20 19:31:23.292815 dockerd[1975]: time="2025-06-20T19:31:23.292143775Z" level=info msg="API listen on /run/docker.sock" Jun 20 19:31:23.292730 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 19:32:47.618451 systemd-resolved[1493]: Clock change detected. Flushing caches. Jun 20 19:32:47.619055 systemd-timesyncd[1504]: Contacted time server 99.28.14.242:123 (2.flatcar.pool.ntp.org). Jun 20 19:32:47.619269 systemd-timesyncd[1504]: Initial clock synchronization to Fri 2025-06-20 19:32:47.618057 UTC. Jun 20 19:32:49.172539 containerd[1645]: time="2025-06-20T19:32:49.172494336Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 20 19:32:49.793760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078158055.mount: Deactivated successfully. Jun 20 19:32:50.692547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 20 19:32:50.694403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:32:50.859791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:32:50.867923 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:32:50.911002 kubelet[2238]: E0620 19:32:50.910775 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:32:50.913014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:32:50.913096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:32:50.913286 systemd[1]: kubelet.service: Consumed 106ms CPU time, 110.5M memory peak. 
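The kubelet restart loop recorded above (and again below) is the expected state before the node has been provisioned: /var/lib/kubelet/config.yaml does not exist yet, so each start attempt exits with status 1 and systemd schedules another restart. A short sketch of how this might be confirmed from a shell on the node; the kubeadm join parameters are placeholders, not values from this log:

    systemctl status kubelet.service      # shows the exit-code=1 failures seen above
    ls -l /var/lib/kubelet/config.yaml    # absent until the node is provisioned
    # Typically written by kubeadm during provisioning (placeholders, hypothetical):
    # kubeadm join <control-plane-endpoint> --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash>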
Jun 20 19:32:51.177025 containerd[1645]: time="2025-06-20T19:32:51.176953652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:51.179359 containerd[1645]: time="2025-06-20T19:32:51.179341054Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jun 20 19:32:51.181488 containerd[1645]: time="2025-06-20T19:32:51.181475584Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:51.184248 containerd[1645]: time="2025-06-20T19:32:51.184234818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:51.184717 containerd[1645]: time="2025-06-20T19:32:51.184548127Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.012028914s" Jun 20 19:32:51.184717 containerd[1645]: time="2025-06-20T19:32:51.184591720Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 20 19:32:51.184913 containerd[1645]: time="2025-06-20T19:32:51.184898783Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 20 19:32:52.451692 containerd[1645]: time="2025-06-20T19:32:52.451647820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:52.461568 containerd[1645]: time="2025-06-20T19:32:52.461542384Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jun 20 19:32:52.468630 containerd[1645]: time="2025-06-20T19:32:52.468605044Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:52.475998 containerd[1645]: time="2025-06-20T19:32:52.475967051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:52.476482 containerd[1645]: time="2025-06-20T19:32:52.476373820Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.291135412s" Jun 20 19:32:52.476482 containerd[1645]: time="2025-06-20T19:32:52.476394196Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:32:52.476762 
containerd[1645]: time="2025-06-20T19:32:52.476737307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:32:53.728495 containerd[1645]: time="2025-06-20T19:32:53.727896474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:53.730467 containerd[1645]: time="2025-06-20T19:32:53.730450440Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jun 20 19:32:53.734804 containerd[1645]: time="2025-06-20T19:32:53.734778637Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:53.740261 containerd[1645]: time="2025-06-20T19:32:53.740238175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:53.740952 containerd[1645]: time="2025-06-20T19:32:53.740903717Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.264146384s" Jun 20 19:32:53.741018 containerd[1645]: time="2025-06-20T19:32:53.741007184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:32:53.741433 containerd[1645]: time="2025-06-20T19:32:53.741398846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:32:54.756465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125787806.mount: Deactivated successfully. 
Jun 20 19:32:55.115410 containerd[1645]: time="2025-06-20T19:32:55.115330771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:55.123786 containerd[1645]: time="2025-06-20T19:32:55.123761086Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jun 20 19:32:55.128785 containerd[1645]: time="2025-06-20T19:32:55.128750923Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:55.133640 containerd[1645]: time="2025-06-20T19:32:55.133610497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:55.134140 containerd[1645]: time="2025-06-20T19:32:55.134045250Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.392508075s" Jun 20 19:32:55.134140 containerd[1645]: time="2025-06-20T19:32:55.134069315Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 20 19:32:55.134359 containerd[1645]: time="2025-06-20T19:32:55.134335853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:32:55.703467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665331259.mount: Deactivated successfully. 
Jun 20 19:32:56.469597 containerd[1645]: time="2025-06-20T19:32:56.469084282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:56.471953 containerd[1645]: time="2025-06-20T19:32:56.471932210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 20 19:32:56.476460 containerd[1645]: time="2025-06-20T19:32:56.476439952Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:56.481737 containerd[1645]: time="2025-06-20T19:32:56.481713742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:32:56.482401 containerd[1645]: time="2025-06-20T19:32:56.482305599Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.34795174s" Jun 20 19:32:56.482401 containerd[1645]: time="2025-06-20T19:32:56.482323635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:32:56.482878 containerd[1645]: time="2025-06-20T19:32:56.482855619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:32:57.137457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount894761664.mount: Deactivated successfully. 
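The pull records above log both the bytes read and the elapsed wall time, so the effective pull throughput can be estimated directly from them. A minimal sketch using the figures from the kube-scheduler, kube-proxy and coredns entries above (the helper function is illustrative, not part of containerd):

def mib_per_s(bytes_read: int, seconds: float) -> float:
    # Convert a (bytes, seconds) pair from the log into MiB/s.
    return bytes_read / seconds / (1024 * 1024)

# (bytes read, pull duration in seconds), copied from the entries above.
pulls = {
    "kube-scheduler:v1.32.6": (19_176_916, 1.264146384),
    "kube-proxy:v1.32.6": (30_895_363, 1.392508075),
    "coredns:v1.11.3": (18_565_241, 1.34795174),
}

for image, (nbytes, secs) in pulls.items():
    print(f"{image}: ~{mib_per_s(nbytes, secs):.1f} MiB/s")
# kube-proxy, for example, comes out to roughly 21 MiB/s.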
Jun 20 19:32:57.139721 containerd[1645]: time="2025-06-20T19:32:57.139699999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:32:57.140109 containerd[1645]: time="2025-06-20T19:32:57.140075021Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:32:57.140109 containerd[1645]: time="2025-06-20T19:32:57.140097173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:32:57.141122 containerd[1645]: time="2025-06-20T19:32:57.141095521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:32:57.141685 containerd[1645]: time="2025-06-20T19:32:57.141493340Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 658.610318ms" Jun 20 19:32:57.141685 containerd[1645]: time="2025-06-20T19:32:57.141508293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:32:57.141849 containerd[1645]: time="2025-06-20T19:32:57.141790759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:32:57.603260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682517421.mount: Deactivated successfully. Jun 20 19:33:00.137606 update_engine[1623]: I20250620 19:33:00.137393 1623 update_attempter.cc:509] Updating boot flags... Jun 20 19:33:00.942647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 19:33:00.943910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:33:01.312958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:33:01.315520 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:33:01.481983 kubelet[2393]: E0620 19:33:01.481958 2393 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:33:01.483330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:33:01.483469 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:33:01.483853 systemd[1]: kubelet.service: Consumed 106ms CPU time, 111.1M memory peak. 
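The kubelet restarts above (restart counter at 4) fail because /var/lib/kubelet/config.yaml does not exist yet, which is the expected state on a kubeadm-managed node before kubeadm writes that file. A minimal pre-flight check mirroring that failure, assuming only the path taken from the error message:

from pathlib import Path

# Path taken from the kubelet error above; everything else is illustrative.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_config_present() -> bool:
    # On a kubeadm-managed node this file appears after `kubeadm init`/`join`;
    # until then kubelet.service keeps exiting with status 1, as logged above.
    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} exists; kubelet can load its configuration")
        return True
    print(f"{KUBELET_CONFIG} missing; expect kubelet.service to keep restarting")
    return False

if __name__ == "__main__":
    kubelet_config_present()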
Jun 20 19:33:03.596742 containerd[1645]: time="2025-06-20T19:33:03.596693714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:03.597494 containerd[1645]: time="2025-06-20T19:33:03.597462617Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jun 20 19:33:03.597844 containerd[1645]: time="2025-06-20T19:33:03.597822342Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:03.599694 containerd[1645]: time="2025-06-20T19:33:03.599678424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:03.600473 containerd[1645]: time="2025-06-20T19:33:03.600456352Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.458568341s" Jun 20 19:33:03.600534 containerd[1645]: time="2025-06-20T19:33:03.600524088Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:33:05.677191 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:33:05.677556 systemd[1]: kubelet.service: Consumed 106ms CPU time, 111.1M memory peak. Jun 20 19:33:05.679172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:33:05.700083 systemd[1]: Reload requested from client PID 2433 ('systemctl') (unit session-9.scope)... Jun 20 19:33:05.700093 systemd[1]: Reloading... Jun 20 19:33:05.768622 zram_generator::config[2477]: No configuration found. Jun 20 19:33:05.835886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:33:05.845033 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:33:05.915342 systemd[1]: Reloading finished in 215 ms. Jun 20 19:33:05.937007 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:33:05.937070 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:33:05.937251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:33:05.937283 systemd[1]: kubelet.service: Consumed 56ms CPU time, 92.3M memory peak. Jun 20 19:33:05.938528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:33:06.226443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:33:06.234939 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:33:06.270112 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:33:06.270112 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:33:06.270112 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:33:06.282229 kubelet[2544]: I0620 19:33:06.281818 2544 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:33:06.716065 kubelet[2544]: I0620 19:33:06.716038 2544 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:33:06.716065 kubelet[2544]: I0620 19:33:06.716058 2544 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:33:06.716638 kubelet[2544]: I0620 19:33:06.716628 2544 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:33:06.745949 kubelet[2544]: E0620 19:33:06.745926 2544 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:06.747778 kubelet[2544]: I0620 19:33:06.747764 2544 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:33:06.755204 kubelet[2544]: I0620 19:33:06.755182 2544 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:33:06.759077 kubelet[2544]: I0620 19:33:06.759054 2544 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:33:06.761524 kubelet[2544]: I0620 19:33:06.761472 2544 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:33:06.761679 kubelet[2544]: I0620 19:33:06.761526 2544 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:33:06.763008 kubelet[2544]: I0620 19:33:06.762991 2544 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:33:06.763008 kubelet[2544]: I0620 19:33:06.763009 2544 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:33:06.763989 kubelet[2544]: I0620 19:33:06.763973 2544 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:33:06.768785 kubelet[2544]: I0620 19:33:06.768651 2544 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:33:06.768785 kubelet[2544]: I0620 19:33:06.768687 2544 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:33:06.769670 kubelet[2544]: I0620 19:33:06.769646 2544 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:33:06.769670 kubelet[2544]: I0620 19:33:06.769670 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:33:06.773939 kubelet[2544]: W0620 19:33:06.773814 2544 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:06.773939 kubelet[2544]: E0620 19:33:06.773856 2544 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:06.775105 kubelet[2544]: W0620 19:33:06.774943 2544 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:06.775105 kubelet[2544]: E0620 19:33:06.774974 2544 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:06.776287 kubelet[2544]: I0620 19:33:06.776273 2544 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:33:06.779476 kubelet[2544]: I0620 19:33:06.779231 2544 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:33:06.782933 kubelet[2544]: W0620 19:33:06.782203 2544 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:33:06.782933 kubelet[2544]: I0620 19:33:06.782668 2544 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:33:06.782933 kubelet[2544]: I0620 19:33:06.782727 2544 server.go:1287] "Started kubelet" Jun 20 19:33:06.783815 kubelet[2544]: I0620 19:33:06.783791 2544 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:33:06.785094 kubelet[2544]: I0620 19:33:06.784648 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:33:06.785094 kubelet[2544]: I0620 19:33:06.785011 2544 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:33:06.785208 kubelet[2544]: I0620 19:33:06.785194 2544 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:33:06.789677 kubelet[2544]: I0620 19:33:06.789533 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:33:06.795714 kubelet[2544]: I0620 19:33:06.795697 2544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:33:06.796025 kubelet[2544]: E0620 19:33:06.791699 2544 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad72b71faa057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:33:06.782675031 +0000 UTC m=+0.545428179,LastTimestamp:2025-06-20 19:33:06.782675031 +0000 UTC m=+0.545428179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:33:06.797691 kubelet[2544]: I0620 19:33:06.797587 2544 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:33:06.798800 kubelet[2544]: E0620 19:33:06.797991 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not 
found" Jun 20 19:33:06.798800 kubelet[2544]: I0620 19:33:06.798264 2544 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:33:06.798800 kubelet[2544]: I0620 19:33:06.798297 2544 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:33:06.798800 kubelet[2544]: E0620 19:33:06.798356 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="200ms" Jun 20 19:33:06.799234 kubelet[2544]: I0620 19:33:06.799224 2544 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:33:06.799357 kubelet[2544]: I0620 19:33:06.799337 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:33:06.802794 kubelet[2544]: W0620 19:33:06.802769 2544 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:06.802899 kubelet[2544]: E0620 19:33:06.802889 2544 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:06.803469 kubelet[2544]: I0620 19:33:06.803459 2544 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:33:06.807578 kubelet[2544]: I0620 19:33:06.806815 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:33:06.807578 kubelet[2544]: I0620 19:33:06.807442 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:33:06.807578 kubelet[2544]: I0620 19:33:06.807456 2544 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:33:06.807578 kubelet[2544]: I0620 19:33:06.807487 2544 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:33:06.807578 kubelet[2544]: I0620 19:33:06.807493 2544 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:33:06.807578 kubelet[2544]: E0620 19:33:06.807522 2544 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:33:06.809187 kubelet[2544]: E0620 19:33:06.809172 2544 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:33:06.811640 kubelet[2544]: W0620 19:33:06.811611 2544 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:06.811710 kubelet[2544]: E0620 19:33:06.811644 2544 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:06.835730 kubelet[2544]: I0620 19:33:06.835714 2544 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:33:06.835875 kubelet[2544]: I0620 19:33:06.835868 2544 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:33:06.835932 kubelet[2544]: I0620 19:33:06.835927 2544 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:33:06.837059 kubelet[2544]: I0620 19:33:06.837051 2544 policy_none.go:49] "None policy: Start" Jun 20 19:33:06.837113 kubelet[2544]: I0620 19:33:06.837106 2544 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:33:06.837157 kubelet[2544]: I0620 19:33:06.837152 2544 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:33:06.840605 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:33:06.856940 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:33:06.859808 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:33:06.884279 kubelet[2544]: I0620 19:33:06.884263 2544 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:33:06.884475 kubelet[2544]: I0620 19:33:06.884466 2544 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:33:06.884540 kubelet[2544]: I0620 19:33:06.884516 2544 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:33:06.884785 kubelet[2544]: I0620 19:33:06.884778 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:33:06.885732 kubelet[2544]: E0620 19:33:06.885718 2544 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:33:06.885785 kubelet[2544]: E0620 19:33:06.885775 2544 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 20 19:33:06.914723 systemd[1]: Created slice kubepods-burstable-pode9a9b233243d2f4b53feb14068b8e1b1.slice - libcontainer container kubepods-burstable-pode9a9b233243d2f4b53feb14068b8e1b1.slice. Jun 20 19:33:06.931683 kubelet[2544]: E0620 19:33:06.931652 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:06.934076 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
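The "Creating Container Manager object based on Node Config" entry a few lines above packs the default hard eviction thresholds into one dense blob. A short sketch that restates just those thresholds in readable form (values copied from that entry; the formatting helper is ours):

# Hard eviction thresholds copied from the logged nodeConfig; the helper that
# prints them is illustrative only.
thresholds = [
    ("memory.available",   "quantity",   "100Mi"),
    ("nodefs.available",   "percentage", 0.10),
    ("nodefs.inodesFree",  "percentage", 0.05),
    ("imagefs.available",  "percentage", 0.15),
    ("imagefs.inodesFree", "percentage", 0.05),
]

for signal, kind, value in thresholds:
    if kind == "percentage":
        print(f"evict when {signal} drops below {value:.0%} of capacity")
    else:
        print(f"evict when {signal} drops below {value}")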
Jun 20 19:33:06.944579 kubelet[2544]: E0620 19:33:06.944468 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:06.946494 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jun 20 19:33:06.947653 kubelet[2544]: E0620 19:33:06.947638 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:06.985909 kubelet[2544]: I0620 19:33:06.985819 2544 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:33:06.986168 kubelet[2544]: E0620 19:33:06.986137 2544 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jun 20 19:33:06.999546 kubelet[2544]: I0620 19:33:06.999410 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9a9b233243d2f4b53feb14068b8e1b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9a9b233243d2f4b53feb14068b8e1b1\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:06.999546 kubelet[2544]: I0620 19:33:06.999435 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:06.999546 kubelet[2544]: I0620 19:33:06.999447 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:06.999546 kubelet[2544]: I0620 19:33:06.999455 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:06.999546 kubelet[2544]: I0620 19:33:06.999466 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:06.999725 kubelet[2544]: I0620 19:33:06.999475 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:06.999725 kubelet[2544]: I0620 19:33:06.999483 2544 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9a9b233243d2f4b53feb14068b8e1b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9a9b233243d2f4b53feb14068b8e1b1\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:06.999725 kubelet[2544]: I0620 19:33:06.999492 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9a9b233243d2f4b53feb14068b8e1b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e9a9b233243d2f4b53feb14068b8e1b1\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:06.999725 kubelet[2544]: I0620 19:33:06.999511 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:06.999725 kubelet[2544]: E0620 19:33:06.999509 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="400ms" Jun 20 19:33:07.187505 kubelet[2544]: I0620 19:33:07.187472 2544 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:33:07.188179 kubelet[2544]: E0620 19:33:07.188137 2544 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jun 20 19:33:07.235286 containerd[1645]: time="2025-06-20T19:33:07.235076174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e9a9b233243d2f4b53feb14068b8e1b1,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:07.245197 containerd[1645]: time="2025-06-20T19:33:07.245139559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:07.271187 containerd[1645]: time="2025-06-20T19:33:07.270854471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:07.318428 containerd[1645]: time="2025-06-20T19:33:07.318400968Z" level=info msg="connecting to shim f3b47d53ad3862220c807c87b91a646bc22d2a9e1b9e9507a2a1b5ddbcd2b555" address="unix:///run/containerd/s/28a13e98a08c0af60b9b1e4bc4854d7577fe46b5fba5ba4ce84e4a19a9872cdc" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:07.320833 containerd[1645]: time="2025-06-20T19:33:07.320801052Z" level=info msg="connecting to shim af8215ca2e6818196142d10af099912f16608b3cf88eafef190099c8525ce45b" address="unix:///run/containerd/s/0e8e5b5a7477307203053f6f91a959a571d342cbe8e3afdfa93bd0dd6e2d7436" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:07.321052 containerd[1645]: time="2025-06-20T19:33:07.321033955Z" level=info msg="connecting to shim 3ed01b12b3d60c0b066310fa0663cca634ea8aae7c0eb8ef2f4f8f6a470f46b5" address="unix:///run/containerd/s/70f2a09a06a9d36cb25361a5971e6eea4f5777b626b23e0ae30b7557fe76a1f1" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:07.390785 
systemd[1]: Started cri-containerd-3ed01b12b3d60c0b066310fa0663cca634ea8aae7c0eb8ef2f4f8f6a470f46b5.scope - libcontainer container 3ed01b12b3d60c0b066310fa0663cca634ea8aae7c0eb8ef2f4f8f6a470f46b5. Jun 20 19:33:07.391838 systemd[1]: Started cri-containerd-af8215ca2e6818196142d10af099912f16608b3cf88eafef190099c8525ce45b.scope - libcontainer container af8215ca2e6818196142d10af099912f16608b3cf88eafef190099c8525ce45b. Jun 20 19:33:07.392835 systemd[1]: Started cri-containerd-f3b47d53ad3862220c807c87b91a646bc22d2a9e1b9e9507a2a1b5ddbcd2b555.scope - libcontainer container f3b47d53ad3862220c807c87b91a646bc22d2a9e1b9e9507a2a1b5ddbcd2b555. Jun 20 19:33:07.404674 kubelet[2544]: E0620 19:33:07.404633 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="800ms" Jun 20 19:33:07.494312 containerd[1645]: time="2025-06-20T19:33:07.494286724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e9a9b233243d2f4b53feb14068b8e1b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"af8215ca2e6818196142d10af099912f16608b3cf88eafef190099c8525ce45b\"" Jun 20 19:33:07.496707 containerd[1645]: time="2025-06-20T19:33:07.496643370Z" level=info msg="CreateContainer within sandbox \"af8215ca2e6818196142d10af099912f16608b3cf88eafef190099c8525ce45b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:33:07.521784 containerd[1645]: time="2025-06-20T19:33:07.521766321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3b47d53ad3862220c807c87b91a646bc22d2a9e1b9e9507a2a1b5ddbcd2b555\"" Jun 20 19:33:07.523871 containerd[1645]: time="2025-06-20T19:33:07.523845940Z" level=info msg="CreateContainer within sandbox \"f3b47d53ad3862220c807c87b91a646bc22d2a9e1b9e9507a2a1b5ddbcd2b555\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:33:07.528426 containerd[1645]: time="2025-06-20T19:33:07.528360901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ed01b12b3d60c0b066310fa0663cca634ea8aae7c0eb8ef2f4f8f6a470f46b5\"" Jun 20 19:33:07.529792 containerd[1645]: time="2025-06-20T19:33:07.529770172Z" level=info msg="CreateContainer within sandbox \"3ed01b12b3d60c0b066310fa0663cca634ea8aae7c0eb8ef2f4f8f6a470f46b5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:33:07.590179 kubelet[2544]: I0620 19:33:07.590154 2544 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:33:07.590443 kubelet[2544]: E0620 19:33:07.590426 2544 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jun 20 19:33:07.745693 kubelet[2544]: W0620 19:33:07.745615 2544 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:07.745693 kubelet[2544]: E0620 19:33:07.745670 2544 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:07.810904 containerd[1645]: time="2025-06-20T19:33:07.810276852Z" level=info msg="Container 3a8778d5ca159c10749f9d0992c201523906c92848027911bd9d2442d72f0b06: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:07.816703 containerd[1645]: time="2025-06-20T19:33:07.816684150Z" level=info msg="Container c95bebfd0f4da00610102d78ac72631d20cc39243ada933cb89bbbcb63d6d17e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:07.829619 containerd[1645]: time="2025-06-20T19:33:07.829588746Z" level=info msg="Container f07858456e2f187ff28d6a2a21ebd443c8c354e6a5d44046438990fb78bf086e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:07.847522 containerd[1645]: time="2025-06-20T19:33:07.847471185Z" level=info msg="CreateContainer within sandbox \"af8215ca2e6818196142d10af099912f16608b3cf88eafef190099c8525ce45b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a8778d5ca159c10749f9d0992c201523906c92848027911bd9d2442d72f0b06\"" Jun 20 19:33:07.848457 containerd[1645]: time="2025-06-20T19:33:07.847951923Z" level=info msg="StartContainer for \"3a8778d5ca159c10749f9d0992c201523906c92848027911bd9d2442d72f0b06\"" Jun 20 19:33:07.855860 containerd[1645]: time="2025-06-20T19:33:07.855848082Z" level=info msg="connecting to shim 3a8778d5ca159c10749f9d0992c201523906c92848027911bd9d2442d72f0b06" address="unix:///run/containerd/s/0e8e5b5a7477307203053f6f91a959a571d342cbe8e3afdfa93bd0dd6e2d7436" protocol=ttrpc version=3 Jun 20 19:33:07.869683 systemd[1]: Started cri-containerd-3a8778d5ca159c10749f9d0992c201523906c92848027911bd9d2442d72f0b06.scope - libcontainer container 3a8778d5ca159c10749f9d0992c201523906c92848027911bd9d2442d72f0b06. 
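The "Failed to ensure lease exists, will retry" errors recur with interval="200ms", then "400ms", then "800ms" (and "1.6s" further down): while the API server at 139.178.70.102:6443 refuses connections, the retry interval doubles each time. A sketch of that observed progression; anything past 1.6s is extrapolation, not taken from this log:

def lease_retry_intervals(start_ms: float = 200.0, steps: int = 5) -> list[float]:
    # Doubling progression observed in the log: 200ms, 400ms, 800ms, 1.6s.
    # The step count and any value beyond 1.6s are extrapolation.
    out, interval = [], start_ms
    for _ in range(steps):
        out.append(interval)
        interval *= 2
    return out

print([f"{ms / 1000:g}s" if ms >= 1000 else f"{ms:g}ms"
       for ms in lease_retry_intervals()])
# ['200ms', '400ms', '800ms', '1.6s', '3.2s']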
Jun 20 19:33:07.890142 containerd[1645]: time="2025-06-20T19:33:07.890116707Z" level=info msg="CreateContainer within sandbox \"f3b47d53ad3862220c807c87b91a646bc22d2a9e1b9e9507a2a1b5ddbcd2b555\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c95bebfd0f4da00610102d78ac72631d20cc39243ada933cb89bbbcb63d6d17e\"" Jun 20 19:33:07.892497 containerd[1645]: time="2025-06-20T19:33:07.891738826Z" level=info msg="StartContainer for \"c95bebfd0f4da00610102d78ac72631d20cc39243ada933cb89bbbcb63d6d17e\"" Jun 20 19:33:07.892497 containerd[1645]: time="2025-06-20T19:33:07.892262505Z" level=info msg="connecting to shim c95bebfd0f4da00610102d78ac72631d20cc39243ada933cb89bbbcb63d6d17e" address="unix:///run/containerd/s/28a13e98a08c0af60b9b1e4bc4854d7577fe46b5fba5ba4ce84e4a19a9872cdc" protocol=ttrpc version=3 Jun 20 19:33:07.896912 containerd[1645]: time="2025-06-20T19:33:07.896894599Z" level=info msg="CreateContainer within sandbox \"3ed01b12b3d60c0b066310fa0663cca634ea8aae7c0eb8ef2f4f8f6a470f46b5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f07858456e2f187ff28d6a2a21ebd443c8c354e6a5d44046438990fb78bf086e\"" Jun 20 19:33:07.897260 kubelet[2544]: W0620 19:33:07.897230 2544 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:07.897297 kubelet[2544]: E0620 19:33:07.897266 2544 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:07.897382 containerd[1645]: time="2025-06-20T19:33:07.897356245Z" level=info msg="StartContainer for \"f07858456e2f187ff28d6a2a21ebd443c8c354e6a5d44046438990fb78bf086e\"" Jun 20 19:33:07.898035 containerd[1645]: time="2025-06-20T19:33:07.898020036Z" level=info msg="connecting to shim f07858456e2f187ff28d6a2a21ebd443c8c354e6a5d44046438990fb78bf086e" address="unix:///run/containerd/s/70f2a09a06a9d36cb25361a5971e6eea4f5777b626b23e0ae30b7557fe76a1f1" protocol=ttrpc version=3 Jun 20 19:33:07.906718 systemd[1]: Started cri-containerd-c95bebfd0f4da00610102d78ac72631d20cc39243ada933cb89bbbcb63d6d17e.scope - libcontainer container c95bebfd0f4da00610102d78ac72631d20cc39243ada933cb89bbbcb63d6d17e. Jun 20 19:33:07.917468 containerd[1645]: time="2025-06-20T19:33:07.917447852Z" level=info msg="StartContainer for \"3a8778d5ca159c10749f9d0992c201523906c92848027911bd9d2442d72f0b06\" returns successfully" Jun 20 19:33:07.921647 systemd[1]: Started cri-containerd-f07858456e2f187ff28d6a2a21ebd443c8c354e6a5d44046438990fb78bf086e.scope - libcontainer container f07858456e2f187ff28d6a2a21ebd443c8c354e6a5d44046438990fb78bf086e. 
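The sandboxes and containers started here come from the static pod manifests under /etc/kubernetes/manifests (the path logged earlier as the static pod path); once the API server becomes reachable, the kubelet registers mirror pods named <static pod name>-<node name>, which is why later entries refer to kube-apiserver-localhost, kube-controller-manager-localhost and kube-scheduler-localhost. A minimal sketch of that naming, where the manifest file names are assumptions and only the directory and node name come from the log:

from pathlib import Path

# Directory and node name come from the log; the manifest file names are
# assumed for illustration.
MANIFEST_DIR = Path("/etc/kubernetes/manifests")
NODE_NAME = "localhost"

def mirror_pod_name(manifest: Path, node: str = NODE_NAME) -> str:
    # Mirror pods are named "<static pod name>-<node name>", matching the
    # kube-apiserver-localhost / kube-scheduler-localhost names seen later.
    return f"{manifest.stem}-{node}"

for filename in ("kube-apiserver.yaml",
                 "kube-controller-manager.yaml",
                 "kube-scheduler.yaml"):
    print(mirror_pod_name(MANIFEST_DIR / filename))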
Jun 20 19:33:07.954312 containerd[1645]: time="2025-06-20T19:33:07.954291911Z" level=info msg="StartContainer for \"c95bebfd0f4da00610102d78ac72631d20cc39243ada933cb89bbbcb63d6d17e\" returns successfully" Jun 20 19:33:07.965390 containerd[1645]: time="2025-06-20T19:33:07.965365067Z" level=info msg="StartContainer for \"f07858456e2f187ff28d6a2a21ebd443c8c354e6a5d44046438990fb78bf086e\" returns successfully" Jun 20 19:33:08.137871 kubelet[2544]: W0620 19:33:08.137768 2544 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:08.137871 kubelet[2544]: E0620 19:33:08.137815 2544 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:08.205421 kubelet[2544]: E0620 19:33:08.205393 2544 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="1.6s" Jun 20 19:33:08.317439 kubelet[2544]: W0620 19:33:08.317399 2544 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jun 20 19:33:08.317521 kubelet[2544]: E0620 19:33:08.317444 2544 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:33:08.391804 kubelet[2544]: I0620 19:33:08.391737 2544 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:33:08.838618 kubelet[2544]: E0620 19:33:08.838594 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:08.840597 kubelet[2544]: E0620 19:33:08.840537 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:08.842248 kubelet[2544]: E0620 19:33:08.842234 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:09.342602 kubelet[2544]: I0620 19:33:09.341064 2544 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 20 19:33:09.342602 kubelet[2544]: E0620 19:33:09.341088 2544 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 20 19:33:09.358847 kubelet[2544]: E0620 19:33:09.358819 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:09.459365 kubelet[2544]: E0620 
19:33:09.459322 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:09.559812 kubelet[2544]: E0620 19:33:09.559767 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:09.660109 kubelet[2544]: E0620 19:33:09.659877 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:09.760641 kubelet[2544]: E0620 19:33:09.760609 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:09.843591 kubelet[2544]: E0620 19:33:09.843424 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:09.843591 kubelet[2544]: E0620 19:33:09.843485 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:09.844126 kubelet[2544]: E0620 19:33:09.844071 2544 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jun 20 19:33:09.861218 kubelet[2544]: E0620 19:33:09.861191 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:09.961876 kubelet[2544]: E0620 19:33:09.961848 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.062356 kubelet[2544]: E0620 19:33:10.062325 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.162945 kubelet[2544]: E0620 19:33:10.162917 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.263823 kubelet[2544]: E0620 19:33:10.263753 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.364348 kubelet[2544]: E0620 19:33:10.364307 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.465179 kubelet[2544]: E0620 19:33:10.464907 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.565428 kubelet[2544]: E0620 19:33:10.565356 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.666084 kubelet[2544]: E0620 19:33:10.665999 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.766653 kubelet[2544]: E0620 19:33:10.766616 2544 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:10.843993 kubelet[2544]: I0620 19:33:10.843421 2544 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:10.843993 kubelet[2544]: I0620 19:33:10.843490 2544 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:10.898878 kubelet[2544]: I0620 19:33:10.898838 2544 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:10.918801 
kubelet[2544]: I0620 19:33:10.918768 2544 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:10.931089 kubelet[2544]: E0620 19:33:10.930887 2544 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:10.931089 kubelet[2544]: I0620 19:33:10.930913 2544 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:10.942310 kubelet[2544]: E0620 19:33:10.942277 2544 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:11.033655 kubelet[2544]: I0620 19:33:11.033629 2544 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:11.063599 kubelet[2544]: E0620 19:33:11.063553 2544 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:11.425375 systemd[1]: Reload requested from client PID 2813 ('systemctl') (unit session-9.scope)... Jun 20 19:33:11.425571 systemd[1]: Reloading... Jun 20 19:33:11.478637 zram_generator::config[2856]: No configuration found. Jun 20 19:33:11.568833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:33:11.577720 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jun 20 19:33:11.659237 systemd[1]: Reloading finished in 233 ms. Jun 20 19:33:11.682044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:33:11.696375 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:33:11.696599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:33:11.696645 systemd[1]: kubelet.service: Consumed 698ms CPU time, 128.7M memory peak. Jun 20 19:33:11.698319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:33:12.142692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:33:12.153964 (kubelet)[2923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:33:12.286368 kubelet[2923]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:33:12.286368 kubelet[2923]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:33:12.286368 kubelet[2923]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
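The restarted kubelet (now PID 2923, below) prints the same deprecation warnings as the first instance: --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file, while --pod-infra-container-image is slated for removal once the sandbox image comes from CRI. A hedged sketch of that mapping; the KubeletConfiguration field names are our reading of the upstream v1beta1 schema, not anything printed in this log:

# Flag-to-field mapping per the deprecation notices; field names are assumed
# from the KubeletConfiguration v1beta1 schema, not taken from this log.
FLAG_TO_CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    # --pod-infra-container-image: no field suggested here; per the log it is
    # slated for removal once the sandbox image is taken from CRI.
}

def suggest(flags: list[str]) -> None:
    for flag in flags:
        name = flag.split("=", 1)[0]
        field = FLAG_TO_CONFIG_FIELD.get(name)
        if field:
            print(f"{name}: set '{field}' in the kubelet config file instead")
        else:
            print(f"{name}: no config-file replacement noted in this log")

# Example values: the volume plugin dir and pause image appear in the log;
# the runtime endpoint value here is assumed.
suggest(["--container-runtime-endpoint=unix:///run/containerd/containerd.sock",
         "--volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
         "--pod-infra-container-image=registry.k8s.io/pause:3.10"])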
Jun 20 19:33:12.286368 kubelet[2923]: I0620 19:33:12.286320 2923 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:33:12.292605 kubelet[2923]: I0620 19:33:12.291700 2923 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:33:12.292605 kubelet[2923]: I0620 19:33:12.291721 2923 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:33:12.292605 kubelet[2923]: I0620 19:33:12.291882 2923 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:33:12.292778 kubelet[2923]: I0620 19:33:12.292769 2923 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:33:12.335825 kubelet[2923]: I0620 19:33:12.335804 2923 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:33:12.339618 kubelet[2923]: I0620 19:33:12.339605 2923 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:33:12.341999 kubelet[2923]: I0620 19:33:12.341964 2923 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:33:12.342106 kubelet[2923]: I0620 19:33:12.342090 2923 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:33:12.342264 kubelet[2923]: I0620 19:33:12.342106 2923 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:33:12.342323 kubelet[2923]: I0620 19:33:12.342270 2923 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:33:12.342323 kubelet[2923]: I0620 19:33:12.342279 2923 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:33:12.342323 kubelet[2923]: I0620 19:33:12.342307 2923 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:33:12.342427 kubelet[2923]: I0620 
19:33:12.342418 2923 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:33:12.342451 kubelet[2923]: I0620 19:33:12.342432 2923 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:33:12.342451 kubelet[2923]: I0620 19:33:12.342445 2923 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:33:12.342482 kubelet[2923]: I0620 19:33:12.342452 2923 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:33:12.352177 kubelet[2923]: I0620 19:33:12.352160 2923 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:33:12.352404 kubelet[2923]: I0620 19:33:12.352393 2923 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:33:12.352662 kubelet[2923]: I0620 19:33:12.352652 2923 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:33:12.352692 kubelet[2923]: I0620 19:33:12.352670 2923 server.go:1287] "Started kubelet" Jun 20 19:33:12.354956 kubelet[2923]: I0620 19:33:12.354937 2923 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:33:12.355979 kubelet[2923]: E0620 19:33:12.355969 2923 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:33:12.358903 kubelet[2923]: I0620 19:33:12.358887 2923 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:33:12.359538 kubelet[2923]: I0620 19:33:12.359529 2923 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:33:12.360098 kubelet[2923]: I0620 19:33:12.360070 2923 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:33:12.360236 kubelet[2923]: I0620 19:33:12.360229 2923 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:33:12.360390 kubelet[2923]: I0620 19:33:12.360381 2923 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:33:12.368880 kubelet[2923]: I0620 19:33:12.368866 2923 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:33:12.369765 kubelet[2923]: I0620 19:33:12.368924 2923 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:33:12.369765 kubelet[2923]: E0620 19:33:12.369033 2923 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:33:12.369765 kubelet[2923]: I0620 19:33:12.369632 2923 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:33:12.371359 kubelet[2923]: I0620 19:33:12.371156 2923 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:33:12.371359 kubelet[2923]: I0620 19:33:12.371233 2923 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:33:12.374208 kubelet[2923]: I0620 19:33:12.374192 2923 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:33:12.376702 kubelet[2923]: I0620 19:33:12.376620 2923 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:33:12.378902 kubelet[2923]: I0620 19:33:12.378737 2923 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:33:12.379053 kubelet[2923]: I0620 19:33:12.378984 2923 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:33:12.379053 kubelet[2923]: I0620 19:33:12.378998 2923 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:33:12.379053 kubelet[2923]: I0620 19:33:12.379005 2923 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:33:12.379053 kubelet[2923]: E0620 19:33:12.379030 2923 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:33:12.402442 kubelet[2923]: I0620 19:33:12.402236 2923 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:33:12.402442 kubelet[2923]: I0620 19:33:12.402247 2923 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:33:12.402442 kubelet[2923]: I0620 19:33:12.402257 2923 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:33:12.402442 kubelet[2923]: I0620 19:33:12.402354 2923 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:33:12.402442 kubelet[2923]: I0620 19:33:12.402360 2923 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:33:12.402442 kubelet[2923]: I0620 19:33:12.402372 2923 policy_none.go:49] "None policy: Start" Jun 20 19:33:12.403387 kubelet[2923]: I0620 19:33:12.403340 2923 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:33:12.403387 kubelet[2923]: I0620 19:33:12.403351 2923 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:33:12.403515 kubelet[2923]: I0620 19:33:12.403498 2923 state_mem.go:75] "Updated machine memory state" Jun 20 19:33:12.406573 kubelet[2923]: I0620 19:33:12.406365 2923 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:33:12.406573 kubelet[2923]: I0620 19:33:12.406453 2923 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:33:12.406573 kubelet[2923]: I0620 19:33:12.406459 2923 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:33:12.406727 kubelet[2923]: I0620 19:33:12.406720 2923 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:33:12.407429 kubelet[2923]: E0620 19:33:12.407421 2923 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 19:33:12.480468 kubelet[2923]: I0620 19:33:12.480450 2923 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:12.501253 kubelet[2923]: I0620 19:33:12.501223 2923 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:12.501438 kubelet[2923]: I0620 19:33:12.501387 2923 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:12.502872 kubelet[2923]: E0620 19:33:12.502836 2923 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:12.507693 kubelet[2923]: I0620 19:33:12.507683 2923 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jun 20 19:33:12.519269 kubelet[2923]: E0620 19:33:12.519245 2923 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:12.519494 kubelet[2923]: E0620 19:33:12.519482 2923 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:12.521997 kubelet[2923]: I0620 19:33:12.521956 2923 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jun 20 19:33:12.522196 kubelet[2923]: I0620 19:33:12.522172 2923 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jun 20 19:33:12.535550 sudo[2956]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:33:12.535748 sudo[2956]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:33:12.570602 kubelet[2923]: I0620 19:33:12.570469 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:12.570602 kubelet[2923]: I0620 19:33:12.570493 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:12.570602 kubelet[2923]: I0620 19:33:12.570504 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9a9b233243d2f4b53feb14068b8e1b1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e9a9b233243d2f4b53feb14068b8e1b1\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:12.570602 kubelet[2923]: I0620 19:33:12.570513 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9a9b233243d2f4b53feb14068b8e1b1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9a9b233243d2f4b53feb14068b8e1b1\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:12.570602 kubelet[2923]: I0620 19:33:12.570521 2923 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:12.570756 kubelet[2923]: I0620 19:33:12.570530 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:12.570756 kubelet[2923]: I0620 19:33:12.570540 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:33:12.570756 kubelet[2923]: I0620 19:33:12.570549 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:12.570756 kubelet[2923]: I0620 19:33:12.570558 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9a9b233243d2f4b53feb14068b8e1b1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e9a9b233243d2f4b53feb14068b8e1b1\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:13.176992 sudo[2956]: pam_unix(sudo:session): session closed for user root Jun 20 19:33:13.344074 kubelet[2923]: I0620 19:33:13.344053 2923 apiserver.go:52] "Watching apiserver" Jun 20 19:33:13.370323 kubelet[2923]: I0620 19:33:13.369979 2923 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:33:13.396241 kubelet[2923]: I0620 19:33:13.396222 2923 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:13.397711 kubelet[2923]: I0620 19:33:13.397699 2923 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:13.419995 kubelet[2923]: E0620 19:33:13.419968 2923 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:33:13.435621 kubelet[2923]: E0620 19:33:13.435550 2923 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 20 19:33:13.468924 kubelet[2923]: I0620 19:33:13.468887 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.468876063 podStartE2EDuration="3.468876063s" podCreationTimestamp="2025-06-20 19:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:33:13.468769235 +0000 UTC m=+1.220562149" watchObservedRunningTime="2025-06-20 19:33:13.468876063 +0000 UTC 
m=+1.220668974" Jun 20 19:33:13.469034 kubelet[2923]: I0620 19:33:13.468949 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.468945085 podStartE2EDuration="3.468945085s" podCreationTimestamp="2025-06-20 19:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:33:13.434135969 +0000 UTC m=+1.185928888" watchObservedRunningTime="2025-06-20 19:33:13.468945085 +0000 UTC m=+1.220737999" Jun 20 19:33:13.520700 kubelet[2923]: I0620 19:33:13.520449 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.520439046 podStartE2EDuration="3.520439046s" podCreationTimestamp="2025-06-20 19:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:33:13.520412634 +0000 UTC m=+1.272205544" watchObservedRunningTime="2025-06-20 19:33:13.520439046 +0000 UTC m=+1.272231961" Jun 20 19:33:15.374754 sudo[1957]: pam_unix(sudo:session): session closed for user root Jun 20 19:33:15.375674 sshd[1956]: Connection closed by 147.75.109.163 port 41752 Jun 20 19:33:15.376393 sshd-session[1954]: pam_unix(sshd:session): session closed for user core Jun 20 19:33:15.378387 systemd[1]: sshd@6-139.178.70.102:22-147.75.109.163:41752.service: Deactivated successfully. Jun 20 19:33:15.379934 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:33:15.380135 systemd[1]: session-9.scope: Consumed 3.238s CPU time, 208.1M memory peak. Jun 20 19:33:15.381559 systemd-logind[1622]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:33:15.382352 systemd-logind[1622]: Removed session 9. Jun 20 19:33:16.212805 kubelet[2923]: I0620 19:33:16.212782 2923 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:33:16.213080 containerd[1645]: time="2025-06-20T19:33:16.213041732Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:33:16.213452 kubelet[2923]: I0620 19:33:16.213434 2923 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:33:17.121688 systemd[1]: Created slice kubepods-besteffort-pod95794982_ca04_4751_ab66_bbc31b64a7e6.slice - libcontainer container kubepods-besteffort-pod95794982_ca04_4751_ab66_bbc31b64a7e6.slice. Jun 20 19:33:17.133375 systemd[1]: Created slice kubepods-burstable-pod7114b668_d678_4a9c_aee4_006fa66a3550.slice - libcontainer container kubepods-burstable-pod7114b668_d678_4a9c_aee4_006fa66a3550.slice. 
Jun 20 19:33:17.204365 kubelet[2923]: I0620 19:33:17.204088 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-hostproc\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204365 kubelet[2923]: I0620 19:33:17.204114 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cni-path\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204365 kubelet[2923]: I0620 19:33:17.204124 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-net\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204365 kubelet[2923]: I0620 19:33:17.204137 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-run\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204365 kubelet[2923]: I0620 19:33:17.204146 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95794982-ca04-4751-ab66-bbc31b64a7e6-kube-proxy\") pod \"kube-proxy-jv54w\" (UID: \"95794982-ca04-4751-ab66-bbc31b64a7e6\") " pod="kube-system/kube-proxy-jv54w" Jun 20 19:33:17.204365 kubelet[2923]: I0620 19:33:17.204158 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7114b668-d678-4a9c-aee4-006fa66a3550-clustermesh-secrets\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204534 kubelet[2923]: I0620 19:33:17.204167 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-xtables-lock\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204534 kubelet[2923]: I0620 19:33:17.204175 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-kernel\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204534 kubelet[2923]: I0620 19:33:17.204191 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95794982-ca04-4751-ab66-bbc31b64a7e6-xtables-lock\") pod \"kube-proxy-jv54w\" (UID: \"95794982-ca04-4751-ab66-bbc31b64a7e6\") " pod="kube-system/kube-proxy-jv54w" Jun 20 19:33:17.204534 kubelet[2923]: I0620 19:33:17.204199 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/95794982-ca04-4751-ab66-bbc31b64a7e6-lib-modules\") pod \"kube-proxy-jv54w\" (UID: \"95794982-ca04-4751-ab66-bbc31b64a7e6\") " pod="kube-system/kube-proxy-jv54w" Jun 20 19:33:17.204534 kubelet[2923]: I0620 19:33:17.204215 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjf7\" (UniqueName: \"kubernetes.io/projected/95794982-ca04-4751-ab66-bbc31b64a7e6-kube-api-access-vwjf7\") pod \"kube-proxy-jv54w\" (UID: \"95794982-ca04-4751-ab66-bbc31b64a7e6\") " pod="kube-system/kube-proxy-jv54w" Jun 20 19:33:17.204644 kubelet[2923]: I0620 19:33:17.204224 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-bpf-maps\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204644 kubelet[2923]: I0620 19:33:17.204234 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-etc-cni-netd\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204644 kubelet[2923]: I0620 19:33:17.204242 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-lib-modules\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204644 kubelet[2923]: I0620 19:33:17.204250 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-config-path\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204644 kubelet[2923]: I0620 19:33:17.204258 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssk59\" (UniqueName: \"kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-kube-api-access-ssk59\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204644 kubelet[2923]: I0620 19:33:17.204266 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-cgroup\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.204738 kubelet[2923]: I0620 19:33:17.204277 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-hubble-tls\") pod \"cilium-46wd8\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " pod="kube-system/cilium-46wd8" Jun 20 19:33:17.241598 systemd[1]: Created slice kubepods-besteffort-podf4fa4f99_f1b9_495f_9eb7_888368e9c869.slice - libcontainer container kubepods-besteffort-podf4fa4f99_f1b9_495f_9eb7_888368e9c869.slice. 
Jun 20 19:33:17.306139 kubelet[2923]: I0620 19:33:17.305040 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4fa4f99-f1b9-495f-9eb7-888368e9c869-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rs6f4\" (UID: \"f4fa4f99-f1b9-495f-9eb7-888368e9c869\") " pod="kube-system/cilium-operator-6c4d7847fc-rs6f4" Jun 20 19:33:17.306139 kubelet[2923]: I0620 19:33:17.305714 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl5p5\" (UniqueName: \"kubernetes.io/projected/f4fa4f99-f1b9-495f-9eb7-888368e9c869-kube-api-access-tl5p5\") pod \"cilium-operator-6c4d7847fc-rs6f4\" (UID: \"f4fa4f99-f1b9-495f-9eb7-888368e9c869\") " pod="kube-system/cilium-operator-6c4d7847fc-rs6f4" Jun 20 19:33:17.430298 containerd[1645]: time="2025-06-20T19:33:17.430204123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv54w,Uid:95794982-ca04-4751-ab66-bbc31b64a7e6,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:17.436154 containerd[1645]: time="2025-06-20T19:33:17.436115227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46wd8,Uid:7114b668-d678-4a9c-aee4-006fa66a3550,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:17.442931 containerd[1645]: time="2025-06-20T19:33:17.442887256Z" level=info msg="connecting to shim 3bd71da537cbe48b4d4d2d291ced3197e2f8509e5b60ebab11eee1b036b41d8a" address="unix:///run/containerd/s/abf3c01386ec811c73be5f76f4c38db9c0bbd46516a70d9f08c03789a635c7a5" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:17.450126 containerd[1645]: time="2025-06-20T19:33:17.449968051Z" level=info msg="connecting to shim 5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8" address="unix:///run/containerd/s/09e876d11446e356948f24d831ebe36723da73e98c4a594364ee02f521f490f0" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:17.463698 systemd[1]: Started cri-containerd-3bd71da537cbe48b4d4d2d291ced3197e2f8509e5b60ebab11eee1b036b41d8a.scope - libcontainer container 3bd71da537cbe48b4d4d2d291ced3197e2f8509e5b60ebab11eee1b036b41d8a. Jun 20 19:33:17.469524 systemd[1]: Started cri-containerd-5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8.scope - libcontainer container 5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8. 
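Both pod sandboxes above are reached over unix sockets under /run/containerd/s/, and the earlier cadvisor factory registration failed only because /var/run/crio/crio.sock does not exist on this host. A quick probe for that kind of socket, using only the standard library and two paths that appear in this log:

```go
// sockcheck.go - probe unix sockets of the kind that show up in these logs.
package main

import (
	"fmt"
	"net"
	"time"
)

func probe(path string) {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		fmt.Printf("%-45s unreachable: %v\n", path, err)
		return
	}
	conn.Close()
	fmt.Printf("%-45s ok\n", path)
}

func main() {
	probe("/var/run/crio/crio.sock")                     // expected to fail on this host (see factory.go:219 above)
	probe("/var/lib/kubelet/pod-resources/kubelet.sock") // podresources endpoint from server.go:243 above
}
```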
Jun 20 19:33:17.497277 containerd[1645]: time="2025-06-20T19:33:17.497212732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jv54w,Uid:95794982-ca04-4751-ab66-bbc31b64a7e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bd71da537cbe48b4d4d2d291ced3197e2f8509e5b60ebab11eee1b036b41d8a\"" Jun 20 19:33:17.500843 containerd[1645]: time="2025-06-20T19:33:17.500144495Z" level=info msg="CreateContainer within sandbox \"3bd71da537cbe48b4d4d2d291ced3197e2f8509e5b60ebab11eee1b036b41d8a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:33:17.504367 containerd[1645]: time="2025-06-20T19:33:17.504347179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46wd8,Uid:7114b668-d678-4a9c-aee4-006fa66a3550,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\"" Jun 20 19:33:17.510875 containerd[1645]: time="2025-06-20T19:33:17.510842847Z" level=info msg="Container 82e2b75d1ca0e4ebc34b2601c7a26643a55f48ad57fd82e1fe5a9feb79553acf: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:17.511252 containerd[1645]: time="2025-06-20T19:33:17.511234395Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:33:17.517328 containerd[1645]: time="2025-06-20T19:33:17.517303459Z" level=info msg="CreateContainer within sandbox \"3bd71da537cbe48b4d4d2d291ced3197e2f8509e5b60ebab11eee1b036b41d8a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82e2b75d1ca0e4ebc34b2601c7a26643a55f48ad57fd82e1fe5a9feb79553acf\"" Jun 20 19:33:17.517827 containerd[1645]: time="2025-06-20T19:33:17.517811198Z" level=info msg="StartContainer for \"82e2b75d1ca0e4ebc34b2601c7a26643a55f48ad57fd82e1fe5a9feb79553acf\"" Jun 20 19:33:17.518972 containerd[1645]: time="2025-06-20T19:33:17.518951456Z" level=info msg="connecting to shim 82e2b75d1ca0e4ebc34b2601c7a26643a55f48ad57fd82e1fe5a9feb79553acf" address="unix:///run/containerd/s/abf3c01386ec811c73be5f76f4c38db9c0bbd46516a70d9f08c03789a635c7a5" protocol=ttrpc version=3 Jun 20 19:33:17.539713 systemd[1]: Started cri-containerd-82e2b75d1ca0e4ebc34b2601c7a26643a55f48ad57fd82e1fe5a9feb79553acf.scope - libcontainer container 82e2b75d1ca0e4ebc34b2601c7a26643a55f48ad57fd82e1fe5a9feb79553acf. Jun 20 19:33:17.545586 containerd[1645]: time="2025-06-20T19:33:17.544363002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rs6f4,Uid:f4fa4f99-f1b9-495f-9eb7-888368e9c869,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:17.598901 containerd[1645]: time="2025-06-20T19:33:17.598877415Z" level=info msg="StartContainer for \"82e2b75d1ca0e4ebc34b2601c7a26643a55f48ad57fd82e1fe5a9feb79553acf\" returns successfully" Jun 20 19:33:17.620239 containerd[1645]: time="2025-06-20T19:33:17.620202345Z" level=info msg="connecting to shim 47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f" address="unix:///run/containerd/s/55934372f44cbe30e26821bf0e2b2f68915eae258c9d7eb472250fea8ead8ee8" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:17.638699 systemd[1]: Started cri-containerd-47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f.scope - libcontainer container 47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f. 
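The kube-proxy container is created from a plain sandbox, while the Cilium image is requested by tag plus digest. A plain string-handling sketch (not the containerd/distribution reference parser) for splitting such a digest-pinned reference:

```go
// imageref.go - split a digest-pinned image reference like the one the
// kubelet asks containerd to pull above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	repoTag, digest, hasDigest := strings.Cut(ref, "@")
	if !hasDigest {
		digest = "(none)"
	}

	// The tag separator is the last ":" after the final "/", so a registry
	// port such as "myregistry:5000/..." is not mistaken for a tag.
	repo, tag := repoTag, "(none)"
	if i := strings.LastIndex(repoTag, ":"); i > strings.LastIndex(repoTag, "/") {
		repo, tag = repoTag[:i], repoTag[i+1:]
	}

	fmt.Println("repository:", repo)   // quay.io/cilium/cilium
	fmt.Println("tag:       ", tag)    // v1.12.5 (advisory once a digest is present)
	fmt.Println("digest:    ", digest) // sha256:06ce2b0a...
}
```

The later "Pulled image" entry records an empty repo tag and keeps only the repo digest, which is what actually pins the image.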
Jun 20 19:33:17.678312 containerd[1645]: time="2025-06-20T19:33:17.678286318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rs6f4,Uid:f4fa4f99-f1b9-495f-9eb7-888368e9c869,Namespace:kube-system,Attempt:0,} returns sandbox id \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\"" Jun 20 19:33:18.547382 kubelet[2923]: I0620 19:33:18.547338 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jv54w" podStartSLOduration=1.547325459 podStartE2EDuration="1.547325459s" podCreationTimestamp="2025-06-20 19:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:33:18.435347216 +0000 UTC m=+6.187140136" watchObservedRunningTime="2025-06-20 19:33:18.547325459 +0000 UTC m=+6.299118373" Jun 20 19:33:21.217419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056142448.mount: Deactivated successfully. Jun 20 19:33:23.635289 containerd[1645]: time="2025-06-20T19:33:23.635168528Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:23.647078 containerd[1645]: time="2025-06-20T19:33:23.647049346Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:33:23.654035 containerd[1645]: time="2025-06-20T19:33:23.653996294Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:23.655036 containerd[1645]: time="2025-06-20T19:33:23.654861927Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.143527891s" Jun 20 19:33:23.655036 containerd[1645]: time="2025-06-20T19:33:23.654886021Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:33:23.656193 containerd[1645]: time="2025-06-20T19:33:23.656181164Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:33:23.656621 containerd[1645]: time="2025-06-20T19:33:23.656603088Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:33:23.727544 containerd[1645]: time="2025-06-20T19:33:23.726583261Z" level=info msg="Container f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:23.727714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117660110.mount: Deactivated successfully. 
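The pull above reports 166730503 bytes read over 6.143527891s, which works out to roughly 27 MB/s. The numbers below are copied from the log; the result is only a rough figure, since "bytes read" counts the compressed layers that were actually fetched:

```go
// pullrate.go - back-of-the-envelope throughput for the cilium image pull.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 166730503.0 // "bytes read" from the stop-pulling entry

	d, err := time.ParseDuration("6.143527891s") // duration from the "Pulled image" entry
	if err != nil {
		panic(err)
	}

	fmt.Printf("~%.1f MB/s (%.1f MiB/s)\n",
		bytesRead/d.Seconds()/1e6,
		bytesRead/d.Seconds()/(1<<20))
}
```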
Jun 20 19:33:23.762901 containerd[1645]: time="2025-06-20T19:33:23.762869260Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\"" Jun 20 19:33:23.766689 containerd[1645]: time="2025-06-20T19:33:23.766638087Z" level=info msg="StartContainer for \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\"" Jun 20 19:33:23.768434 containerd[1645]: time="2025-06-20T19:33:23.768220144Z" level=info msg="connecting to shim f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8" address="unix:///run/containerd/s/09e876d11446e356948f24d831ebe36723da73e98c4a594364ee02f521f490f0" protocol=ttrpc version=3 Jun 20 19:33:23.812681 systemd[1]: Started cri-containerd-f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8.scope - libcontainer container f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8. Jun 20 19:33:23.832604 containerd[1645]: time="2025-06-20T19:33:23.832534855Z" level=info msg="StartContainer for \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" returns successfully" Jun 20 19:33:23.862601 systemd[1]: cri-containerd-f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8.scope: Deactivated successfully. Jun 20 19:33:23.862845 systemd[1]: cri-containerd-f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8.scope: Consumed 14ms CPU time, 5M memory peak, 8K read from disk, 3.2M written to disk. Jun 20 19:33:23.900344 containerd[1645]: time="2025-06-20T19:33:23.900173470Z" level=info msg="received exit event container_id:\"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" id:\"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" pid:3338 exited_at:{seconds:1750448003 nanos:863708722}" Jun 20 19:33:23.900593 containerd[1645]: time="2025-06-20T19:33:23.900581711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" id:\"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" pid:3338 exited_at:{seconds:1750448003 nanos:863708722}" Jun 20 19:33:24.722397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8-rootfs.mount: Deactivated successfully. Jun 20 19:33:25.243781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254113572.mount: Deactivated successfully. Jun 20 19:33:25.434272 containerd[1645]: time="2025-06-20T19:33:25.434240017Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:33:25.589994 containerd[1645]: time="2025-06-20T19:33:25.589794891Z" level=info msg="Container 25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:25.592532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274977468.mount: Deactivated successfully. 
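The exit event for the mount-cgroup container carries its timestamp as an exited_at {seconds, nanos} pair. Converting it back to wall-clock time with the standard library, with the values copied from the entry above:

```go
// exitstamp.go - turn the exited_at pair from the "received exit event"
// entry into a wall-clock time.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		seconds = 1750448003
		nanos   = 863708722
	)
	t := time.Unix(seconds, nanos).UTC()
	fmt.Println(t) // lines up with the Jun 20 19:33:23.86 journal timestamps around the exit
}
```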
Jun 20 19:33:25.597353 containerd[1645]: time="2025-06-20T19:33:25.597330571Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\"" Jun 20 19:33:25.597989 containerd[1645]: time="2025-06-20T19:33:25.597801151Z" level=info msg="StartContainer for \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\"" Jun 20 19:33:25.599713 containerd[1645]: time="2025-06-20T19:33:25.599695584Z" level=info msg="connecting to shim 25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9" address="unix:///run/containerd/s/09e876d11446e356948f24d831ebe36723da73e98c4a594364ee02f521f490f0" protocol=ttrpc version=3 Jun 20 19:33:25.618706 systemd[1]: Started cri-containerd-25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9.scope - libcontainer container 25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9. Jun 20 19:33:25.646030 containerd[1645]: time="2025-06-20T19:33:25.646000548Z" level=info msg="StartContainer for \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" returns successfully" Jun 20 19:33:25.653794 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:33:25.653990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:33:25.654546 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:33:25.656877 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:33:25.658625 systemd[1]: cri-containerd-25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9.scope: Deactivated successfully. Jun 20 19:33:25.660041 containerd[1645]: time="2025-06-20T19:33:25.660012808Z" level=info msg="received exit event container_id:\"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" id:\"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" pid:3389 exited_at:{seconds:1750448005 nanos:659452053}" Jun 20 19:33:25.660694 containerd[1645]: time="2025-06-20T19:33:25.660389779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" id:\"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" pid:3389 exited_at:{seconds:1750448005 nanos:659452053}" Jun 20 19:33:25.691513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
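The apply-sysctl-overwrites step and the systemd-sysctl restart above both end up writing under /proc/sys; the journal does not record which keys were touched, so the two keys below are only examples of how to read any of them back:

```go
// sysctlpeek.go - read arbitrary sysctl keys via /proc/sys.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func readSysctl(key string) (string, error) {
	// net.ipv4.conf.all.rp_filter -> /proc/sys/net/ipv4/conf/all/rp_filter
	p := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	b, err := os.ReadFile(p)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Example keys only; the log does not say which sysctls were overwritten.
	for _, key := range []string{"net.ipv4.ip_forward", "net.ipv4.conf.all.rp_filter"} {
		v, err := readSysctl(key)
		if err != nil {
			fmt.Println(key, "error:", err)
			continue
		}
		fmt.Println(key, "=", v)
	}
}
```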
Jun 20 19:33:26.005791 containerd[1645]: time="2025-06-20T19:33:26.005758071Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:26.012927 containerd[1645]: time="2025-06-20T19:33:26.012908370Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:33:26.023918 containerd[1645]: time="2025-06-20T19:33:26.023871481Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:33:26.024686 containerd[1645]: time="2025-06-20T19:33:26.024619642Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.368358172s" Jun 20 19:33:26.024686 containerd[1645]: time="2025-06-20T19:33:26.024638108Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:33:26.026259 containerd[1645]: time="2025-06-20T19:33:26.026150920Z" level=info msg="CreateContainer within sandbox \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:33:26.317086 containerd[1645]: time="2025-06-20T19:33:26.317026338Z" level=info msg="Container 028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:26.318026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473555676.mount: Deactivated successfully. Jun 20 19:33:26.370573 containerd[1645]: time="2025-06-20T19:33:26.370538561Z" level=info msg="CreateContainer within sandbox \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\"" Jun 20 19:33:26.371261 containerd[1645]: time="2025-06-20T19:33:26.371060610Z" level=info msg="StartContainer for \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\"" Jun 20 19:33:26.371823 containerd[1645]: time="2025-06-20T19:33:26.371805376Z" level=info msg="connecting to shim 028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d" address="unix:///run/containerd/s/55934372f44cbe30e26821bf0e2b2f68915eae258c9d7eb472250fea8ead8ee8" protocol=ttrpc version=3 Jun 20 19:33:26.391705 systemd[1]: Started cri-containerd-028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d.scope - libcontainer container 028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d. 
Jun 20 19:33:26.422449 containerd[1645]: time="2025-06-20T19:33:26.422427359Z" level=info msg="StartContainer for \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" returns successfully" Jun 20 19:33:26.443481 containerd[1645]: time="2025-06-20T19:33:26.443414163Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:33:26.452116 kubelet[2923]: I0620 19:33:26.451987 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rs6f4" podStartSLOduration=1.10571799 podStartE2EDuration="9.45197279s" podCreationTimestamp="2025-06-20 19:33:17 +0000 UTC" firstStartedPulling="2025-06-20 19:33:17.679012513 +0000 UTC m=+5.430805423" lastFinishedPulling="2025-06-20 19:33:26.025267314 +0000 UTC m=+13.777060223" observedRunningTime="2025-06-20 19:33:26.451854245 +0000 UTC m=+14.203647164" watchObservedRunningTime="2025-06-20 19:33:26.45197279 +0000 UTC m=+14.203765705" Jun 20 19:33:26.508394 containerd[1645]: time="2025-06-20T19:33:26.507803837Z" level=info msg="Container d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:26.561335 containerd[1645]: time="2025-06-20T19:33:26.561307214Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\"" Jun 20 19:33:26.561829 containerd[1645]: time="2025-06-20T19:33:26.561739435Z" level=info msg="StartContainer for \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\"" Jun 20 19:33:26.564192 containerd[1645]: time="2025-06-20T19:33:26.563814467Z" level=info msg="connecting to shim d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3" address="unix:///run/containerd/s/09e876d11446e356948f24d831ebe36723da73e98c4a594364ee02f521f490f0" protocol=ttrpc version=3 Jun 20 19:33:26.581826 systemd[1]: Started cri-containerd-d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3.scope - libcontainer container d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3. Jun 20 19:33:26.638032 containerd[1645]: time="2025-06-20T19:33:26.638011507Z" level=info msg="StartContainer for \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" returns successfully" Jun 20 19:33:26.694854 systemd[1]: cri-containerd-d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3.scope: Deactivated successfully. Jun 20 19:33:26.696245 containerd[1645]: time="2025-06-20T19:33:26.694890108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" id:\"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" pid:3475 exited_at:{seconds:1750448006 nanos:694689723}" Jun 20 19:33:26.696245 containerd[1645]: time="2025-06-20T19:33:26.694945611Z" level=info msg="received exit event container_id:\"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" id:\"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" pid:3475 exited_at:{seconds:1750448006 nanos:694689723}" Jun 20 19:33:26.695172 systemd[1]: cri-containerd-d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3.scope: Consumed 15ms CPU time, 4.2M memory peak, 1.2M read from disk. 
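The pod_startup_latency_tracker entry for cilium-operator above is consistent with podStartSLOduration being the end-to-end start duration minus the image-pull window. A small check of that reading, with the timestamps and durations copied from the entry:

```go
// startuplatency.go - reproduce the cilium-operator startup-latency numbers.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // time.Parse accepts the extra fractional seconds

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstStartedPulling := mustParse("2025-06-20 19:33:17.679012513 +0000 UTC")
	lastFinishedPulling := mustParse("2025-06-20 19:33:26.025267314 +0000 UTC")

	e2e := time.Duration(9.45197279 * float64(time.Second)) // podStartE2EDuration from the log
	pull := lastFinishedPulling.Sub(firstStartedPulling)

	slo := e2e - pull
	fmt.Println("pull window:        ", pull)          // ~8.346s
	fmt.Println("podStartSLOduration:", slo.Seconds()) // ~1.1057, matching the logged value
}
```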
Jun 20 19:33:27.448805 containerd[1645]: time="2025-06-20T19:33:27.448747056Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:33:27.464793 containerd[1645]: time="2025-06-20T19:33:27.464754580Z" level=info msg="Container 773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:27.466225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184566703.mount: Deactivated successfully. Jun 20 19:33:27.468091 containerd[1645]: time="2025-06-20T19:33:27.468045396Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\"" Jun 20 19:33:27.468510 containerd[1645]: time="2025-06-20T19:33:27.468483428Z" level=info msg="StartContainer for \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\"" Jun 20 19:33:27.469209 containerd[1645]: time="2025-06-20T19:33:27.469166148Z" level=info msg="connecting to shim 773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa" address="unix:///run/containerd/s/09e876d11446e356948f24d831ebe36723da73e98c4a594364ee02f521f490f0" protocol=ttrpc version=3 Jun 20 19:33:27.489694 systemd[1]: Started cri-containerd-773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa.scope - libcontainer container 773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa. Jun 20 19:33:27.506869 systemd[1]: cri-containerd-773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa.scope: Deactivated successfully. Jun 20 19:33:27.508492 containerd[1645]: time="2025-06-20T19:33:27.508466097Z" level=info msg="received exit event container_id:\"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" id:\"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" pid:3514 exited_at:{seconds:1750448007 nanos:507750791}" Jun 20 19:33:27.508664 containerd[1645]: time="2025-06-20T19:33:27.508651445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" id:\"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" pid:3514 exited_at:{seconds:1750448007 nanos:507750791}" Jun 20 19:33:27.509095 containerd[1645]: time="2025-06-20T19:33:27.509082132Z" level=info msg="StartContainer for \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" returns successfully" Jun 20 19:33:27.723084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa-rootfs.mount: Deactivated successfully. 
Jun 20 19:33:28.450111 containerd[1645]: time="2025-06-20T19:33:28.450083242Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:33:28.460037 containerd[1645]: time="2025-06-20T19:33:28.459611484Z" level=info msg="Container a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:28.465348 containerd[1645]: time="2025-06-20T19:33:28.465307022Z" level=info msg="CreateContainer within sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\"" Jun 20 19:33:28.466582 containerd[1645]: time="2025-06-20T19:33:28.466531956Z" level=info msg="StartContainer for \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\"" Jun 20 19:33:28.467473 containerd[1645]: time="2025-06-20T19:33:28.467456221Z" level=info msg="connecting to shim a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1" address="unix:///run/containerd/s/09e876d11446e356948f24d831ebe36723da73e98c4a594364ee02f521f490f0" protocol=ttrpc version=3 Jun 20 19:33:28.498677 systemd[1]: Started cri-containerd-a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1.scope - libcontainer container a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1. Jun 20 19:33:28.519556 containerd[1645]: time="2025-06-20T19:33:28.519524018Z" level=info msg="StartContainer for \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" returns successfully" Jun 20 19:33:28.645346 containerd[1645]: time="2025-06-20T19:33:28.645324144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" id:\"e1b3afadf69b4ac2c9651eb947baa01ee8bdcef74424cba1bbc82a6126d72cf2\" pid:3585 exited_at:{seconds:1750448008 nanos:644942609}" Jun 20 19:33:28.744249 kubelet[2923]: I0620 19:33:28.744196 2923 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:33:28.774117 systemd[1]: Created slice kubepods-burstable-pode817518a_bed8_4cb2_97e9_63e228d01aab.slice - libcontainer container kubepods-burstable-pode817518a_bed8_4cb2_97e9_63e228d01aab.slice. Jun 20 19:33:28.781097 kubelet[2923]: I0620 19:33:28.780826 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcw6d\" (UniqueName: \"kubernetes.io/projected/e817518a-bed8-4cb2-97e9-63e228d01aab-kube-api-access-xcw6d\") pod \"coredns-668d6bf9bc-wkfvl\" (UID: \"e817518a-bed8-4cb2-97e9-63e228d01aab\") " pod="kube-system/coredns-668d6bf9bc-wkfvl" Jun 20 19:33:28.781097 kubelet[2923]: I0620 19:33:28.780861 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e817518a-bed8-4cb2-97e9-63e228d01aab-config-volume\") pod \"coredns-668d6bf9bc-wkfvl\" (UID: \"e817518a-bed8-4cb2-97e9-63e228d01aab\") " pod="kube-system/coredns-668d6bf9bc-wkfvl" Jun 20 19:33:28.782501 systemd[1]: Created slice kubepods-burstable-podb32ce9cd_8192_4b24_a39f_ae9f6076e8d1.slice - libcontainer container kubepods-burstable-podb32ce9cd_8192_4b24_a39f_ae9f6076e8d1.slice. 
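Once the kubelet reports the node Ready above, the coredns pods get their volumes attached and sandboxes created. A rough client-go sketch that waits for the same Ready condition; the kubeconfig path (/etc/kubernetes/admin.conf) is an assumption for illustration, and the node name "localhost" is taken from this journal:

```go
// nodeready.go - poll the API server until the node reports Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node Ready since", c.LastTransitionTime)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
```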
Jun 20 19:33:28.881582 kubelet[2923]: I0620 19:33:28.881095 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b32ce9cd-8192-4b24-a39f-ae9f6076e8d1-config-volume\") pod \"coredns-668d6bf9bc-682fj\" (UID: \"b32ce9cd-8192-4b24-a39f-ae9f6076e8d1\") " pod="kube-system/coredns-668d6bf9bc-682fj" Jun 20 19:33:28.881582 kubelet[2923]: I0620 19:33:28.881125 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl5tr\" (UniqueName: \"kubernetes.io/projected/b32ce9cd-8192-4b24-a39f-ae9f6076e8d1-kube-api-access-hl5tr\") pod \"coredns-668d6bf9bc-682fj\" (UID: \"b32ce9cd-8192-4b24-a39f-ae9f6076e8d1\") " pod="kube-system/coredns-668d6bf9bc-682fj" Jun 20 19:33:29.080020 containerd[1645]: time="2025-06-20T19:33:29.079682135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wkfvl,Uid:e817518a-bed8-4cb2-97e9-63e228d01aab,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:29.089481 containerd[1645]: time="2025-06-20T19:33:29.089459449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-682fj,Uid:b32ce9cd-8192-4b24-a39f-ae9f6076e8d1,Namespace:kube-system,Attempt:0,}" Jun 20 19:33:29.465111 kubelet[2923]: I0620 19:33:29.465072 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-46wd8" podStartSLOduration=6.315246064 podStartE2EDuration="12.465060553s" podCreationTimestamp="2025-06-20 19:33:17 +0000 UTC" firstStartedPulling="2025-06-20 19:33:17.505543904 +0000 UTC m=+5.257336818" lastFinishedPulling="2025-06-20 19:33:23.655358398 +0000 UTC m=+11.407151307" observedRunningTime="2025-06-20 19:33:29.464612394 +0000 UTC m=+17.216405313" watchObservedRunningTime="2025-06-20 19:33:29.465060553 +0000 UTC m=+17.216853472" Jun 20 19:33:30.752051 systemd-networkd[1531]: cilium_host: Link UP Jun 20 19:33:30.752438 systemd-networkd[1531]: cilium_net: Link UP Jun 20 19:33:30.752975 systemd-networkd[1531]: cilium_net: Gained carrier Jun 20 19:33:30.753249 systemd-networkd[1531]: cilium_host: Gained carrier Jun 20 19:33:30.852395 systemd-networkd[1531]: cilium_vxlan: Link UP Jun 20 19:33:30.852399 systemd-networkd[1531]: cilium_vxlan: Gained carrier Jun 20 19:33:31.289585 kernel: NET: Registered PF_ALG protocol family Jun 20 19:33:31.297734 systemd-networkd[1531]: cilium_host: Gained IPv6LL Jun 20 19:33:31.616690 systemd-networkd[1531]: cilium_net: Gained IPv6LL Jun 20 19:33:31.872637 systemd-networkd[1531]: cilium_vxlan: Gained IPv6LL Jun 20 19:33:31.903526 systemd-networkd[1531]: lxc_health: Link UP Jun 20 19:33:31.917021 systemd-networkd[1531]: lxc_health: Gained carrier Jun 20 19:33:32.118170 systemd-networkd[1531]: lxcba6060e0cc9b: Link UP Jun 20 19:33:32.174026 kernel: eth0: renamed from tmp30c3e Jun 20 19:33:32.175426 systemd-networkd[1531]: lxcb533d5374a13: Link UP Jun 20 19:33:32.178737 kernel: eth0: renamed from tmp167ce Jun 20 19:33:32.181594 systemd-networkd[1531]: lxcb533d5374a13: Gained carrier Jun 20 19:33:32.181817 systemd-networkd[1531]: lxcba6060e0cc9b: Gained carrier Jun 20 19:33:33.216669 systemd-networkd[1531]: lxcba6060e0cc9b: Gained IPv6LL Jun 20 19:33:33.664695 systemd-networkd[1531]: lxcb533d5374a13: Gained IPv6LL Jun 20 19:33:33.728672 systemd-networkd[1531]: lxc_health: Gained IPv6LL Jun 20 19:33:34.723298 containerd[1645]: time="2025-06-20T19:33:34.723099213Z" level=info msg="connecting to shim 
167cea10841f80b5f48c4e0b8eb7ec63a6a33f6b734db7e5ccca7653990e34c4" address="unix:///run/containerd/s/ad65a9c7a393764a753e3188bfddff5d832ba8b8c56cde7cb79d10554f59631d" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:34.727457 containerd[1645]: time="2025-06-20T19:33:34.727431574Z" level=info msg="connecting to shim 30c3ef107dadc0e863568e138c2f0e396b204eeac2267b1566a63098b9441368" address="unix:///run/containerd/s/5ebc2956622cd5d8a6e8a79ee410784fc3da6d8f0b436609f77036ca065d0910" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:33:34.748656 systemd[1]: Started cri-containerd-167cea10841f80b5f48c4e0b8eb7ec63a6a33f6b734db7e5ccca7653990e34c4.scope - libcontainer container 167cea10841f80b5f48c4e0b8eb7ec63a6a33f6b734db7e5ccca7653990e34c4. Jun 20 19:33:34.763652 systemd[1]: Started cri-containerd-30c3ef107dadc0e863568e138c2f0e396b204eeac2267b1566a63098b9441368.scope - libcontainer container 30c3ef107dadc0e863568e138c2f0e396b204eeac2267b1566a63098b9441368. Jun 20 19:33:34.766695 systemd-resolved[1493]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:33:34.778384 systemd-resolved[1493]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:33:34.807165 containerd[1645]: time="2025-06-20T19:33:34.807141801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wkfvl,Uid:e817518a-bed8-4cb2-97e9-63e228d01aab,Namespace:kube-system,Attempt:0,} returns sandbox id \"167cea10841f80b5f48c4e0b8eb7ec63a6a33f6b734db7e5ccca7653990e34c4\"" Jun 20 19:33:34.813577 containerd[1645]: time="2025-06-20T19:33:34.811150299Z" level=info msg="CreateContainer within sandbox \"167cea10841f80b5f48c4e0b8eb7ec63a6a33f6b734db7e5ccca7653990e34c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:33:34.827742 containerd[1645]: time="2025-06-20T19:33:34.827714821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-682fj,Uid:b32ce9cd-8192-4b24-a39f-ae9f6076e8d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"30c3ef107dadc0e863568e138c2f0e396b204eeac2267b1566a63098b9441368\"" Jun 20 19:33:34.832108 containerd[1645]: time="2025-06-20T19:33:34.831823888Z" level=info msg="CreateContainer within sandbox \"30c3ef107dadc0e863568e138c2f0e396b204eeac2267b1566a63098b9441368\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:33:34.832680 containerd[1645]: time="2025-06-20T19:33:34.832665758Z" level=info msg="Container df066d17826832fd8e6523a9319c41da245522453174644dafb65dc7b47e638f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:34.836755 containerd[1645]: time="2025-06-20T19:33:34.836731629Z" level=info msg="Container e82059ee43f6d48ea02a2d1a87752117a6c2669cf76f7ca01d6e09464e2544ac: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:33:34.850280 containerd[1645]: time="2025-06-20T19:33:34.850254398Z" level=info msg="CreateContainer within sandbox \"30c3ef107dadc0e863568e138c2f0e396b204eeac2267b1566a63098b9441368\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e82059ee43f6d48ea02a2d1a87752117a6c2669cf76f7ca01d6e09464e2544ac\"" Jun 20 19:33:34.850723 containerd[1645]: time="2025-06-20T19:33:34.850612041Z" level=info msg="CreateContainer within sandbox \"167cea10841f80b5f48c4e0b8eb7ec63a6a33f6b734db7e5ccca7653990e34c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df066d17826832fd8e6523a9319c41da245522453174644dafb65dc7b47e638f\"" Jun 20 19:33:34.850904 
containerd[1645]: time="2025-06-20T19:33:34.850888831Z" level=info msg="StartContainer for \"e82059ee43f6d48ea02a2d1a87752117a6c2669cf76f7ca01d6e09464e2544ac\"" Jun 20 19:33:34.851282 containerd[1645]: time="2025-06-20T19:33:34.851128233Z" level=info msg="StartContainer for \"df066d17826832fd8e6523a9319c41da245522453174644dafb65dc7b47e638f\"" Jun 20 19:33:34.851713 containerd[1645]: time="2025-06-20T19:33:34.851291464Z" level=info msg="connecting to shim e82059ee43f6d48ea02a2d1a87752117a6c2669cf76f7ca01d6e09464e2544ac" address="unix:///run/containerd/s/5ebc2956622cd5d8a6e8a79ee410784fc3da6d8f0b436609f77036ca065d0910" protocol=ttrpc version=3 Jun 20 19:33:34.853387 containerd[1645]: time="2025-06-20T19:33:34.853274244Z" level=info msg="connecting to shim df066d17826832fd8e6523a9319c41da245522453174644dafb65dc7b47e638f" address="unix:///run/containerd/s/ad65a9c7a393764a753e3188bfddff5d832ba8b8c56cde7cb79d10554f59631d" protocol=ttrpc version=3 Jun 20 19:33:34.871670 systemd[1]: Started cri-containerd-e82059ee43f6d48ea02a2d1a87752117a6c2669cf76f7ca01d6e09464e2544ac.scope - libcontainer container e82059ee43f6d48ea02a2d1a87752117a6c2669cf76f7ca01d6e09464e2544ac. Jun 20 19:33:34.874449 systemd[1]: Started cri-containerd-df066d17826832fd8e6523a9319c41da245522453174644dafb65dc7b47e638f.scope - libcontainer container df066d17826832fd8e6523a9319c41da245522453174644dafb65dc7b47e638f. Jun 20 19:33:34.902561 containerd[1645]: time="2025-06-20T19:33:34.902482838Z" level=info msg="StartContainer for \"df066d17826832fd8e6523a9319c41da245522453174644dafb65dc7b47e638f\" returns successfully" Jun 20 19:33:34.902702 containerd[1645]: time="2025-06-20T19:33:34.902540005Z" level=info msg="StartContainer for \"e82059ee43f6d48ea02a2d1a87752117a6c2669cf76f7ca01d6e09464e2544ac\" returns successfully" Jun 20 19:33:35.499236 kubelet[2923]: I0620 19:33:35.499002 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-682fj" podStartSLOduration=18.498892644 podStartE2EDuration="18.498892644s" podCreationTimestamp="2025-06-20 19:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:33:35.498361671 +0000 UTC m=+23.250154591" watchObservedRunningTime="2025-06-20 19:33:35.498892644 +0000 UTC m=+23.250685557" Jun 20 19:33:35.715633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288637001.mount: Deactivated successfully. Jun 20 19:33:35.761920 kubelet[2923]: I0620 19:33:35.761846 2923 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:33:35.772044 kubelet[2923]: I0620 19:33:35.772010 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wkfvl" podStartSLOduration=18.771999921 podStartE2EDuration="18.771999921s" podCreationTimestamp="2025-06-20 19:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:33:35.508466766 +0000 UTC m=+23.260259684" watchObservedRunningTime="2025-06-20 19:33:35.771999921 +0000 UTC m=+23.523792834" Jun 20 19:34:15.291166 systemd[1]: Started sshd@7-139.178.70.102:22-147.75.109.163:51880.service - OpenSSH per-connection server daemon (147.75.109.163:51880). 
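A few entries back, systemd-networkd brings up the Cilium datapath links (cilium_host, cilium_net, cilium_vxlan, lxc_health) before the coredns sandboxes are wired up. A quick sysfs check of those link states; the per-pod lxc* names will differ from the two shown in the log:

```go
// linkstate.go - read the operational state of the Cilium interfaces named
// in the journal from /sys/class/net.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, iface := range []string{"cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"} {
		b, err := os.ReadFile(filepath.Join("/sys/class/net", iface, "operstate"))
		if err != nil {
			fmt.Printf("%-12s missing (%v)\n", iface, err)
			continue
		}
		fmt.Printf("%-12s %s\n", iface, strings.TrimSpace(string(b)))
	}
}
```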
Jun 20 19:34:15.396597 sshd[4251]: Accepted publickey for core from 147.75.109.163 port 51880 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:15.397686 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:15.406359 systemd-logind[1622]: New session 10 of user core. Jun 20 19:34:15.417750 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:34:15.841966 sshd[4253]: Connection closed by 147.75.109.163 port 51880 Jun 20 19:34:15.842419 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:15.847188 systemd-logind[1622]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:34:15.847774 systemd[1]: sshd@7-139.178.70.102:22-147.75.109.163:51880.service: Deactivated successfully. Jun 20 19:34:15.849267 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:34:15.850283 systemd-logind[1622]: Removed session 10. Jun 20 19:34:20.856502 systemd[1]: Started sshd@8-139.178.70.102:22-147.75.109.163:47194.service - OpenSSH per-connection server daemon (147.75.109.163:47194). Jun 20 19:34:20.902419 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 47194 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:20.903207 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:20.906337 systemd-logind[1622]: New session 11 of user core. Jun 20 19:34:20.914644 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:34:21.000271 sshd[4270]: Connection closed by 147.75.109.163 port 47194 Jun 20 19:34:21.000623 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:21.002652 systemd[1]: sshd@8-139.178.70.102:22-147.75.109.163:47194.service: Deactivated successfully. Jun 20 19:34:21.003748 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:34:21.004229 systemd-logind[1622]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:34:21.005036 systemd-logind[1622]: Removed session 11. Jun 20 19:34:26.009382 systemd[1]: Started sshd@9-139.178.70.102:22-147.75.109.163:55700.service - OpenSSH per-connection server daemon (147.75.109.163:55700). Jun 20 19:34:26.048690 sshd[4283]: Accepted publickey for core from 147.75.109.163 port 55700 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:26.049509 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:26.053402 systemd-logind[1622]: New session 12 of user core. Jun 20 19:34:26.062669 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:34:26.151198 sshd[4285]: Connection closed by 147.75.109.163 port 55700 Jun 20 19:34:26.151653 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:26.158266 systemd[1]: sshd@9-139.178.70.102:22-147.75.109.163:55700.service: Deactivated successfully. Jun 20 19:34:26.159410 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 19:34:26.159997 systemd-logind[1622]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:34:26.161172 systemd-logind[1622]: Removed session 12. Jun 20 19:34:26.162399 systemd[1]: Started sshd@10-139.178.70.102:22-147.75.109.163:55714.service - OpenSSH per-connection server daemon (147.75.109.163:55714). 
Jun 20 19:34:26.209569 sshd[4298]: Accepted publickey for core from 147.75.109.163 port 55714 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:26.210186 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:26.212651 systemd-logind[1622]: New session 13 of user core. Jun 20 19:34:26.216781 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:34:26.343775 sshd[4300]: Connection closed by 147.75.109.163 port 55714 Jun 20 19:34:26.344544 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:26.351039 systemd[1]: sshd@10-139.178.70.102:22-147.75.109.163:55714.service: Deactivated successfully. Jun 20 19:34:26.353432 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:34:26.355171 systemd-logind[1622]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:34:26.358242 systemd[1]: Started sshd@11-139.178.70.102:22-147.75.109.163:55728.service - OpenSSH per-connection server daemon (147.75.109.163:55728). Jun 20 19:34:26.360707 systemd-logind[1622]: Removed session 13. Jun 20 19:34:26.419988 sshd[4310]: Accepted publickey for core from 147.75.109.163 port 55728 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:26.420266 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:26.423238 systemd-logind[1622]: New session 14 of user core. Jun 20 19:34:26.430747 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:34:26.524212 sshd[4312]: Connection closed by 147.75.109.163 port 55728 Jun 20 19:34:26.524614 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:26.526263 systemd-logind[1622]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:34:26.526455 systemd[1]: sshd@11-139.178.70.102:22-147.75.109.163:55728.service: Deactivated successfully. Jun 20 19:34:26.527525 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:34:26.528791 systemd-logind[1622]: Removed session 14. Jun 20 19:34:31.535342 systemd[1]: Started sshd@12-139.178.70.102:22-147.75.109.163:55742.service - OpenSSH per-connection server daemon (147.75.109.163:55742). Jun 20 19:34:31.582274 sshd[4324]: Accepted publickey for core from 147.75.109.163 port 55742 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:31.583521 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:31.588412 systemd-logind[1622]: New session 15 of user core. Jun 20 19:34:31.595660 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:34:31.698096 sshd[4326]: Connection closed by 147.75.109.163 port 55742 Jun 20 19:34:31.698273 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:31.701174 systemd[1]: sshd@12-139.178.70.102:22-147.75.109.163:55742.service: Deactivated successfully. Jun 20 19:34:31.702324 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:34:31.702872 systemd-logind[1622]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:34:31.703716 systemd-logind[1622]: Removed session 15. Jun 20 19:34:36.709320 systemd[1]: Started sshd@13-139.178.70.102:22-147.75.109.163:39102.service - OpenSSH per-connection server daemon (147.75.109.163:39102). 
Jun 20 19:34:36.790208 sshd[4338]: Accepted publickey for core from 147.75.109.163 port 39102 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:36.791084 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:36.795292 systemd-logind[1622]: New session 16 of user core. Jun 20 19:34:36.803667 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:34:36.898214 sshd[4340]: Connection closed by 147.75.109.163 port 39102 Jun 20 19:34:36.898682 sshd-session[4338]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:36.908805 systemd[1]: sshd@13-139.178.70.102:22-147.75.109.163:39102.service: Deactivated successfully. Jun 20 19:34:36.909853 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:34:36.910368 systemd-logind[1622]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:34:36.911918 systemd[1]: Started sshd@14-139.178.70.102:22-147.75.109.163:39114.service - OpenSSH per-connection server daemon (147.75.109.163:39114). Jun 20 19:34:36.913069 systemd-logind[1622]: Removed session 16. Jun 20 19:34:36.950658 sshd[4352]: Accepted publickey for core from 147.75.109.163 port 39114 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:36.951513 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:36.955716 systemd-logind[1622]: New session 17 of user core. Jun 20 19:34:36.959675 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:34:37.331747 sshd[4354]: Connection closed by 147.75.109.163 port 39114 Jun 20 19:34:37.332936 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:37.340069 systemd[1]: sshd@14-139.178.70.102:22-147.75.109.163:39114.service: Deactivated successfully. Jun 20 19:34:37.341148 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:34:37.342039 systemd-logind[1622]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:34:37.343307 systemd[1]: Started sshd@15-139.178.70.102:22-147.75.109.163:39116.service - OpenSSH per-connection server daemon (147.75.109.163:39116). Jun 20 19:34:37.344915 systemd-logind[1622]: Removed session 17. Jun 20 19:34:37.410344 sshd[4364]: Accepted publickey for core from 147.75.109.163 port 39116 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:37.411267 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:37.414885 systemd-logind[1622]: New session 18 of user core. Jun 20 19:34:37.423724 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:34:38.244901 sshd[4366]: Connection closed by 147.75.109.163 port 39116 Jun 20 19:34:38.245447 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:38.252103 systemd[1]: sshd@15-139.178.70.102:22-147.75.109.163:39116.service: Deactivated successfully. Jun 20 19:34:38.254033 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:34:38.255123 systemd-logind[1622]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:34:38.257603 systemd[1]: Started sshd@16-139.178.70.102:22-147.75.109.163:39128.service - OpenSSH per-connection server daemon (147.75.109.163:39128). Jun 20 19:34:38.259936 systemd-logind[1622]: Removed session 18. 
Jun 20 19:34:38.301235 sshd[4383]: Accepted publickey for core from 147.75.109.163 port 39128 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:38.302083 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:38.305488 systemd-logind[1622]: New session 19 of user core. Jun 20 19:34:38.314746 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:34:38.496918 sshd[4385]: Connection closed by 147.75.109.163 port 39128 Jun 20 19:34:38.497262 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:38.505905 systemd[1]: sshd@16-139.178.70.102:22-147.75.109.163:39128.service: Deactivated successfully. Jun 20 19:34:38.507147 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:34:38.507701 systemd-logind[1622]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:34:38.509705 systemd[1]: Started sshd@17-139.178.70.102:22-147.75.109.163:39138.service - OpenSSH per-connection server daemon (147.75.109.163:39138). Jun 20 19:34:38.510054 systemd-logind[1622]: Removed session 19. Jun 20 19:34:38.551598 sshd[4395]: Accepted publickey for core from 147.75.109.163 port 39138 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:38.552464 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:38.556430 systemd-logind[1622]: New session 20 of user core. Jun 20 19:34:38.566742 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:34:38.661902 sshd[4397]: Connection closed by 147.75.109.163 port 39138 Jun 20 19:34:38.662250 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:38.663961 systemd[1]: sshd@17-139.178.70.102:22-147.75.109.163:39138.service: Deactivated successfully. Jun 20 19:34:38.665255 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:34:38.666225 systemd-logind[1622]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:34:38.667154 systemd-logind[1622]: Removed session 20. Jun 20 19:34:43.672275 systemd[1]: Started sshd@18-139.178.70.102:22-147.75.109.163:39146.service - OpenSSH per-connection server daemon (147.75.109.163:39146). Jun 20 19:34:43.712626 sshd[4411]: Accepted publickey for core from 147.75.109.163 port 39146 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:43.713666 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:43.716980 systemd-logind[1622]: New session 21 of user core. Jun 20 19:34:43.720685 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 19:34:43.815547 sshd[4413]: Connection closed by 147.75.109.163 port 39146 Jun 20 19:34:43.815884 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:43.817906 systemd[1]: sshd@18-139.178.70.102:22-147.75.109.163:39146.service: Deactivated successfully. Jun 20 19:34:43.819006 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:34:43.819523 systemd-logind[1622]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:34:43.820450 systemd-logind[1622]: Removed session 21. Jun 20 19:34:48.826369 systemd[1]: Started sshd@19-139.178.70.102:22-147.75.109.163:56690.service - OpenSSH per-connection server daemon (147.75.109.163:56690). 
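The sshd and systemd-logind entries above trace each SSH session's lifecycle: a per-connection sshd@… service starts, the public key is accepted, a session-N.scope unit is started, and the scope is later deactivated. As a hedged, illustrative sketch (not part of the journal), the scope start/stop entries can be paired by their leading journald timestamps to measure how long each session lasted; the helper names are hypothetical, and the year is an assumption because the short timestamp format shown here omits it.

```python
from datetime import datetime
import re

# Illustrative only: pair "Started session-N.scope" / "session-N.scope: Deactivated"
# journal entries (format as shown above) and report how long each session lasted.
TS_RE = re.compile(r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) ")
START_RE = re.compile(r"Started session-(?P<n>\d+)\.scope")
STOP_RE = re.compile(r"session-(?P<n>\d+)\.scope: Deactivated successfully")

def _ts(line: str, year: int = 2025) -> datetime:
    # journalctl's short format omits the year, so one is assumed here.
    stamp = TS_RE.match(line).group("ts")
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    started = {}
    for line in lines:
        if (m := START_RE.search(line)):
            started[m.group("n")] = _ts(line)
        elif (m := STOP_RE.search(line)) and m.group("n") in started:
            yield m.group("n"), _ts(line) - started.pop(m.group("n"))

if __name__ == "__main__":
    demo = [
        "Jun 20 19:34:15.417750 systemd[1]: Started session-10.scope - Session 10 of User core.",
        "Jun 20 19:34:15.849267 systemd[1]: session-10.scope: Deactivated successfully.",
    ]
    for n, dt in session_durations(demo):
        print(f"session {n}: {dt.total_seconds():.3f}s")
```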
Jun 20 19:34:48.877886 sshd[4426]: Accepted publickey for core from 147.75.109.163 port 56690 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:48.878831 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:48.882119 systemd-logind[1622]: New session 22 of user core. Jun 20 19:34:48.889742 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:34:48.998163 sshd[4428]: Connection closed by 147.75.109.163 port 56690 Jun 20 19:34:48.998591 sshd-session[4426]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:49.000689 systemd[1]: sshd@19-139.178.70.102:22-147.75.109.163:56690.service: Deactivated successfully. Jun 20 19:34:49.001718 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:34:49.002131 systemd-logind[1622]: Session 22 logged out. Waiting for processes to exit. Jun 20 19:34:49.002891 systemd-logind[1622]: Removed session 22. Jun 20 19:34:54.012179 systemd[1]: Started sshd@20-139.178.70.102:22-147.75.109.163:56700.service - OpenSSH per-connection server daemon (147.75.109.163:56700). Jun 20 19:34:54.077334 sshd[4439]: Accepted publickey for core from 147.75.109.163 port 56700 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:54.078236 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:54.081416 systemd-logind[1622]: New session 23 of user core. Jun 20 19:34:54.089791 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:34:54.176949 sshd[4441]: Connection closed by 147.75.109.163 port 56700 Jun 20 19:34:54.177380 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:54.183771 systemd[1]: sshd@20-139.178.70.102:22-147.75.109.163:56700.service: Deactivated successfully. Jun 20 19:34:54.184729 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:34:54.185193 systemd-logind[1622]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:34:54.186880 systemd[1]: Started sshd@21-139.178.70.102:22-147.75.109.163:56708.service - OpenSSH per-connection server daemon (147.75.109.163:56708). Jun 20 19:34:54.187434 systemd-logind[1622]: Removed session 23. Jun 20 19:34:54.223890 sshd[4452]: Accepted publickey for core from 147.75.109.163 port 56708 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:54.224753 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:54.229044 systemd-logind[1622]: New session 24 of user core. Jun 20 19:34:54.234670 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:34:55.558214 containerd[1645]: time="2025-06-20T19:34:55.558179720Z" level=info msg="StopContainer for \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" with timeout 30 (s)" Jun 20 19:34:55.559094 containerd[1645]: time="2025-06-20T19:34:55.559074359Z" level=info msg="Stop container \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" with signal terminated" Jun 20 19:34:55.572387 systemd[1]: cri-containerd-028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d.scope: Deactivated successfully. 
Jun 20 19:34:55.574465 containerd[1645]: time="2025-06-20T19:34:55.574444729Z" level=info msg="received exit event container_id:\"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" id:\"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" pid:3444 exited_at:{seconds:1750448095 nanos:574100720}" Jun 20 19:34:55.577398 containerd[1645]: time="2025-06-20T19:34:55.577378833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" id:\"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" pid:3444 exited_at:{seconds:1750448095 nanos:574100720}" Jun 20 19:34:55.590020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d-rootfs.mount: Deactivated successfully. Jun 20 19:34:55.593771 containerd[1645]: time="2025-06-20T19:34:55.593745545Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:34:55.597508 containerd[1645]: time="2025-06-20T19:34:55.597487500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" id:\"a9c226f0b2870a5cea892e06201d4cad0400852171c9f7e2ce1458bd4bf32312\" pid:4479 exited_at:{seconds:1750448095 nanos:597278212}" Jun 20 19:34:55.601556 containerd[1645]: time="2025-06-20T19:34:55.601477189Z" level=info msg="StopContainer for \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" with timeout 2 (s)" Jun 20 19:34:55.601889 containerd[1645]: time="2025-06-20T19:34:55.601810379Z" level=info msg="Stop container \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" with signal terminated" Jun 20 19:34:55.606385 containerd[1645]: time="2025-06-20T19:34:55.606341169Z" level=info msg="StopContainer for \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" returns successfully" Jun 20 19:34:55.607075 containerd[1645]: time="2025-06-20T19:34:55.607059223Z" level=info msg="StopPodSandbox for \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\"" Jun 20 19:34:55.607131 containerd[1645]: time="2025-06-20T19:34:55.607117412Z" level=info msg="Container to stop \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:34:55.610044 systemd-networkd[1531]: lxc_health: Link DOWN Jun 20 19:34:55.610047 systemd-networkd[1531]: lxc_health: Lost carrier Jun 20 19:34:55.616279 systemd[1]: cri-containerd-47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f.scope: Deactivated successfully. Jun 20 19:34:55.618890 containerd[1645]: time="2025-06-20T19:34:55.618822936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" id:\"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" pid:3148 exit_status:137 exited_at:{seconds:1750448095 nanos:618551594}" Jun 20 19:34:55.631327 systemd[1]: cri-containerd-a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1.scope: Deactivated successfully. 
Jun 20 19:34:55.632210 systemd[1]: cri-containerd-a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1.scope: Consumed 4.397s CPU time, 221.9M memory peak, 100.4M read from disk, 13.3M written to disk. Jun 20 19:34:55.633039 containerd[1645]: time="2025-06-20T19:34:55.633021696Z" level=info msg="received exit event container_id:\"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" id:\"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" pid:3552 exited_at:{seconds:1750448095 nanos:631902911}" Jun 20 19:34:55.643946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f-rootfs.mount: Deactivated successfully. Jun 20 19:34:55.646887 containerd[1645]: time="2025-06-20T19:34:55.646839523Z" level=info msg="shim disconnected" id=47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f namespace=k8s.io Jun 20 19:34:55.646887 containerd[1645]: time="2025-06-20T19:34:55.646858595Z" level=warning msg="cleaning up after shim disconnected" id=47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f namespace=k8s.io Jun 20 19:34:55.647058 containerd[1645]: time="2025-06-20T19:34:55.646863227Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:34:55.647180 containerd[1645]: time="2025-06-20T19:34:55.647169097Z" level=info msg="received exit event sandbox_id:\"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" exit_status:137 exited_at:{seconds:1750448095 nanos:618551594}" Jun 20 19:34:55.648674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f-shm.mount: Deactivated successfully. Jun 20 19:34:55.652429 containerd[1645]: time="2025-06-20T19:34:55.652404677Z" level=info msg="TearDown network for sandbox \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" successfully" Jun 20 19:34:55.652550 containerd[1645]: time="2025-06-20T19:34:55.652460143Z" level=info msg="StopPodSandbox for \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" returns successfully" Jun 20 19:34:55.667634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1-rootfs.mount: Deactivated successfully. 
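systemd's cgroup resource accounting appears above as "Consumed 4.397s CPU time, 221.9M memory peak, 100.4M read from disk, 13.3M written to disk." on the stopped cilium-agent scope. A small, illustrative parser for that summary line follows; it is not part of the journal, the field list is taken only from the examples visible here, and treating the later fields as optional is an assumption in case a line omits them.

```python
import re

# Illustrative only: pull the resource-accounting figures out of systemd's
# "Consumed ... CPU time, ... memory peak, ..." lines shown above.
CONSUMED_RE = re.compile(
    r"Consumed (?P<cpu_s>[\d.]+)s CPU time"
    r"(?:, (?P<mem_peak>\S+) memory peak)?"
    r"(?:, (?P<disk_read>\S+) read from disk)?"
    r"(?:, (?P<disk_written>\S+) written to disk)?"
)

def parse_consumed(line: str):
    """Return a dict of the accounting fields, or None if the line has none."""
    m = CONSUMED_RE.search(line)
    return m.groupdict() if m else None

if __name__ == "__main__":
    line = ("Consumed 4.397s CPU time, 221.9M memory peak, "
            "100.4M read from disk, 13.3M written to disk.")
    print(parse_consumed(line))
    # {'cpu_s': '4.397', 'mem_peak': '221.9M', 'disk_read': '100.4M', 'disk_written': '13.3M'}
```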
Jun 20 19:34:55.675508 containerd[1645]: time="2025-06-20T19:34:55.675481870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" id:\"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" pid:3552 exited_at:{seconds:1750448095 nanos:631902911}" Jun 20 19:34:55.675810 containerd[1645]: time="2025-06-20T19:34:55.675799433Z" level=info msg="StopContainer for \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" returns successfully" Jun 20 19:34:55.676190 containerd[1645]: time="2025-06-20T19:34:55.676146607Z" level=info msg="StopPodSandbox for \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\"" Jun 20 19:34:55.676269 containerd[1645]: time="2025-06-20T19:34:55.676259501Z" level=info msg="Container to stop \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:34:55.676395 containerd[1645]: time="2025-06-20T19:34:55.676336345Z" level=info msg="Container to stop \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:34:55.676395 containerd[1645]: time="2025-06-20T19:34:55.676361239Z" level=info msg="Container to stop \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:34:55.676395 containerd[1645]: time="2025-06-20T19:34:55.676366924Z" level=info msg="Container to stop \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:34:55.676395 containerd[1645]: time="2025-06-20T19:34:55.676371357Z" level=info msg="Container to stop \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:34:55.682109 systemd[1]: cri-containerd-5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8.scope: Deactivated successfully. 
Jun 20 19:34:55.683297 containerd[1645]: time="2025-06-20T19:34:55.683249939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" id:\"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" pid:3071 exit_status:137 exited_at:{seconds:1750448095 nanos:683075326}" Jun 20 19:34:55.688580 kubelet[2923]: I0620 19:34:55.688544 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl5p5\" (UniqueName: \"kubernetes.io/projected/f4fa4f99-f1b9-495f-9eb7-888368e9c869-kube-api-access-tl5p5\") pod \"f4fa4f99-f1b9-495f-9eb7-888368e9c869\" (UID: \"f4fa4f99-f1b9-495f-9eb7-888368e9c869\") " Jun 20 19:34:55.688814 kubelet[2923]: I0620 19:34:55.688590 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4fa4f99-f1b9-495f-9eb7-888368e9c869-cilium-config-path\") pod \"f4fa4f99-f1b9-495f-9eb7-888368e9c869\" (UID: \"f4fa4f99-f1b9-495f-9eb7-888368e9c869\") " Jun 20 19:34:55.690088 kubelet[2923]: I0620 19:34:55.689723 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fa4f99-f1b9-495f-9eb7-888368e9c869-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4fa4f99-f1b9-495f-9eb7-888368e9c869" (UID: "f4fa4f99-f1b9-495f-9eb7-888368e9c869"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:34:55.698259 systemd[1]: var-lib-kubelet-pods-f4fa4f99\x2df1b9\x2d495f\x2d9eb7\x2d888368e9c869-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtl5p5.mount: Deactivated successfully. Jun 20 19:34:55.699066 kubelet[2923]: I0620 19:34:55.699040 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fa4f99-f1b9-495f-9eb7-888368e9c869-kube-api-access-tl5p5" (OuterVolumeSpecName: "kube-api-access-tl5p5") pod "f4fa4f99-f1b9-495f-9eb7-888368e9c869" (UID: "f4fa4f99-f1b9-495f-9eb7-888368e9c869"). InnerVolumeSpecName "kube-api-access-tl5p5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:34:55.703217 containerd[1645]: time="2025-06-20T19:34:55.703180748Z" level=info msg="shim disconnected" id=5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8 namespace=k8s.io Jun 20 19:34:55.703217 containerd[1645]: time="2025-06-20T19:34:55.703199860Z" level=warning msg="cleaning up after shim disconnected" id=5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8 namespace=k8s.io Jun 20 19:34:55.703666 containerd[1645]: time="2025-06-20T19:34:55.703205023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:34:55.710234 containerd[1645]: time="2025-06-20T19:34:55.710212460Z" level=info msg="received exit event sandbox_id:\"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" exit_status:137 exited_at:{seconds:1750448095 nanos:683075326}" Jun 20 19:34:55.710530 containerd[1645]: time="2025-06-20T19:34:55.710392917Z" level=info msg="TearDown network for sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" successfully" Jun 20 19:34:55.710530 containerd[1645]: time="2025-06-20T19:34:55.710473785Z" level=info msg="StopPodSandbox for \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" returns successfully" Jun 20 19:34:55.789382 kubelet[2923]: I0620 19:34:55.789344 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-net\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.789532 kubelet[2923]: I0620 19:34:55.789423 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-bpf-maps\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.789532 kubelet[2923]: I0620 19:34:55.789447 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cni-path\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.789532 kubelet[2923]: I0620 19:34:55.789458 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-xtables-lock\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.789532 kubelet[2923]: I0620 19:34:55.789477 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-hubble-tls\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.789532 kubelet[2923]: I0620 19:34:55.789497 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-etc-cni-netd\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.789532 kubelet[2923]: I0620 19:34:55.789511 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-lib-modules\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790062 kubelet[2923]: I0620 19:34:55.789527 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-config-path\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790062 kubelet[2923]: I0620 19:34:55.789538 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-cgroup\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790062 kubelet[2923]: I0620 19:34:55.789549 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-run\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790062 kubelet[2923]: I0620 19:34:55.789398 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.790062 kubelet[2923]: I0620 19:34:55.789643 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.790184 kubelet[2923]: I0620 19:34:55.789657 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.790184 kubelet[2923]: I0620 19:34:55.789680 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cni-path" (OuterVolumeSpecName: "cni-path") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.790184 kubelet[2923]: I0620 19:34:55.789693 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.790184 kubelet[2923]: I0620 19:34:55.789844 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7114b668-d678-4a9c-aee4-006fa66a3550-clustermesh-secrets\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790184 kubelet[2923]: I0620 19:34:55.789986 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-kernel\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790293 kubelet[2923]: I0620 19:34:55.790006 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-hostproc\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790293 kubelet[2923]: I0620 19:34:55.790020 2923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssk59\" (UniqueName: \"kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-kube-api-access-ssk59\") pod \"7114b668-d678-4a9c-aee4-006fa66a3550\" (UID: \"7114b668-d678-4a9c-aee4-006fa66a3550\") " Jun 20 19:34:55.790293 kubelet[2923]: I0620 19:34:55.790149 2923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.790293 kubelet[2923]: I0620 19:34:55.790160 2923 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.790293 kubelet[2923]: I0620 19:34:55.790167 2923 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cni-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.790293 kubelet[2923]: I0620 19:34:55.790188 2923 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.791219 kubelet[2923]: I0620 19:34:55.791018 2923 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.791219 kubelet[2923]: I0620 19:34:55.791033 2923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tl5p5\" (UniqueName: \"kubernetes.io/projected/f4fa4f99-f1b9-495f-9eb7-888368e9c869-kube-api-access-tl5p5\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.791219 kubelet[2923]: I0620 19:34:55.791041 2923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4fa4f99-f1b9-495f-9eb7-888368e9c869-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.791219 kubelet[2923]: I0620 19:34:55.791062 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.791219 kubelet[2923]: I0620 19:34:55.791080 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.791391 kubelet[2923]: I0620 19:34:55.791320 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.791391 kubelet[2923]: I0620 19:34:55.791342 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.791513 kubelet[2923]: I0620 19:34:55.791501 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-hostproc" (OuterVolumeSpecName: "hostproc") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:34:55.792972 kubelet[2923]: I0620 19:34:55.792957 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:34:55.794348 kubelet[2923]: I0620 19:34:55.794326 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:34:55.794447 kubelet[2923]: I0620 19:34:55.794434 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-kube-api-access-ssk59" (OuterVolumeSpecName: "kube-api-access-ssk59") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "kube-api-access-ssk59". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:34:55.795894 kubelet[2923]: I0620 19:34:55.795877 2923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7114b668-d678-4a9c-aee4-006fa66a3550-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7114b668-d678-4a9c-aee4-006fa66a3550" (UID: "7114b668-d678-4a9c-aee4-006fa66a3550"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:34:55.891915 kubelet[2923]: I0620 19:34:55.891835 2923 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.891958 2923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssk59\" (UniqueName: \"kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-kube-api-access-ssk59\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.891971 2923 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7114b668-d678-4a9c-aee4-006fa66a3550-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.891979 2923 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.891986 2923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.891992 2923 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.891999 2923 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.892007 2923 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7114b668-d678-4a9c-aee4-006fa66a3550-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:55.892168 kubelet[2923]: I0620 19:34:55.892013 2923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7114b668-d678-4a9c-aee4-006fa66a3550-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jun 20 19:34:56.384944 systemd[1]: Removed slice kubepods-burstable-pod7114b668_d678_4a9c_aee4_006fa66a3550.slice - libcontainer container kubepods-burstable-pod7114b668_d678_4a9c_aee4_006fa66a3550.slice. Jun 20 19:34:56.385151 systemd[1]: kubepods-burstable-pod7114b668_d678_4a9c_aee4_006fa66a3550.slice: Consumed 4.457s CPU time, 223M memory peak, 101.7M read from disk, 16.6M written to disk. Jun 20 19:34:56.386603 systemd[1]: Removed slice kubepods-besteffort-podf4fa4f99_f1b9_495f_9eb7_888368e9c869.slice - libcontainer container kubepods-besteffort-podf4fa4f99_f1b9_495f_9eb7_888368e9c869.slice. 
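The mount units cleaned up in this journal (the var-lib-kubelet-pods-…\x2d…-volumes-….mount names, such as the kube-api-access-tl5p5 one above) are systemd path-escaped forms of kubelet volume directories: "/" becomes "-" and other bytes such as "-" and "~" are escaped as \xNN, as seen in the names logged here. Below is a minimal sketch, not part of the journal, of decoding such a unit name back into the path it refers to; the function name is hypothetical.

```python
import re

# Illustrative only: turn a systemd path-escaped mount unit name (as logged
# above) back into the kubelet volume path it refers to.
def unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    # systemd path escaping maps "/" to "-" and escapes other bytes as \xNN,
    # so undo the dashes first, then decode the \xNN sequences.
    path = "/" + name.replace("-", "/")
    return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

if __name__ == "__main__":
    unit = (r"var-lib-kubelet-pods-f4fa4f99\x2df1b9\x2d495f\x2d9eb7\x2d888368e9c869"
            r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtl5p5.mount")
    print(unit_to_path(unit))
    # /var/lib/kubelet/pods/f4fa4f99-f1b9-495f-9eb7-888368e9c869/volumes/kubernetes.io~projected/kube-api-access-tl5p5
```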
Jun 20 19:34:56.589695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8-rootfs.mount: Deactivated successfully. Jun 20 19:34:56.589774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8-shm.mount: Deactivated successfully. Jun 20 19:34:56.589839 systemd[1]: var-lib-kubelet-pods-7114b668\x2dd678\x2d4a9c\x2daee4\x2d006fa66a3550-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dssk59.mount: Deactivated successfully. Jun 20 19:34:56.589891 systemd[1]: var-lib-kubelet-pods-7114b668\x2dd678\x2d4a9c\x2daee4\x2d006fa66a3550-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:34:56.589940 systemd[1]: var-lib-kubelet-pods-7114b668\x2dd678\x2d4a9c\x2daee4\x2d006fa66a3550-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:34:56.609307 kubelet[2923]: I0620 19:34:56.609236 2923 scope.go:117] "RemoveContainer" containerID="028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d" Jun 20 19:34:56.611775 containerd[1645]: time="2025-06-20T19:34:56.611148436Z" level=info msg="RemoveContainer for \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\"" Jun 20 19:34:56.614298 containerd[1645]: time="2025-06-20T19:34:56.614258124Z" level=info msg="RemoveContainer for \"028718044102f0d66137cc93043586f2f2b31701175543627c743ea41f6fc27d\" returns successfully" Jun 20 19:34:56.616496 kubelet[2923]: I0620 19:34:56.616470 2923 scope.go:117] "RemoveContainer" containerID="a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1" Jun 20 19:34:56.621249 containerd[1645]: time="2025-06-20T19:34:56.621226293Z" level=info msg="RemoveContainer for \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\"" Jun 20 19:34:56.623220 containerd[1645]: time="2025-06-20T19:34:56.623197078Z" level=info msg="RemoveContainer for \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" returns successfully" Jun 20 19:34:56.623366 kubelet[2923]: I0620 19:34:56.623291 2923 scope.go:117] "RemoveContainer" containerID="773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa" Jun 20 19:34:56.625341 containerd[1645]: time="2025-06-20T19:34:56.625219092Z" level=info msg="RemoveContainer for \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\"" Jun 20 19:34:56.629447 containerd[1645]: time="2025-06-20T19:34:56.629422746Z" level=info msg="RemoveContainer for \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" returns successfully" Jun 20 19:34:56.629788 kubelet[2923]: I0620 19:34:56.629742 2923 scope.go:117] "RemoveContainer" containerID="d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3" Jun 20 19:34:56.631592 containerd[1645]: time="2025-06-20T19:34:56.631576796Z" level=info msg="RemoveContainer for \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\"" Jun 20 19:34:56.633338 containerd[1645]: time="2025-06-20T19:34:56.633323039Z" level=info msg="RemoveContainer for \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" returns successfully" Jun 20 19:34:56.633455 kubelet[2923]: I0620 19:34:56.633398 2923 scope.go:117] "RemoveContainer" containerID="25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9" Jun 20 19:34:56.634125 containerd[1645]: time="2025-06-20T19:34:56.634110751Z" level=info msg="RemoveContainer for 
\"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\"" Jun 20 19:34:56.635406 containerd[1645]: time="2025-06-20T19:34:56.635364382Z" level=info msg="RemoveContainer for \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" returns successfully" Jun 20 19:34:56.635487 kubelet[2923]: I0620 19:34:56.635477 2923 scope.go:117] "RemoveContainer" containerID="f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8" Jun 20 19:34:56.636519 containerd[1645]: time="2025-06-20T19:34:56.636508243Z" level=info msg="RemoveContainer for \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\"" Jun 20 19:34:56.637812 containerd[1645]: time="2025-06-20T19:34:56.637714572Z" level=info msg="RemoveContainer for \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" returns successfully" Jun 20 19:34:56.637850 kubelet[2923]: I0620 19:34:56.637781 2923 scope.go:117] "RemoveContainer" containerID="a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1" Jun 20 19:34:56.639465 containerd[1645]: time="2025-06-20T19:34:56.637927860Z" level=error msg="ContainerStatus for \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\": not found" Jun 20 19:34:56.639580 kubelet[2923]: E0620 19:34:56.639530 2923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\": not found" containerID="a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1" Jun 20 19:34:56.639653 kubelet[2923]: I0620 19:34:56.639548 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1"} err="failed to get container status \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a48595aa9959a77fe6d1e5fb0f47266d65d554040c0bc6d11081dcb77c2c0de1\": not found" Jun 20 19:34:56.639691 kubelet[2923]: I0620 19:34:56.639685 2923 scope.go:117] "RemoveContainer" containerID="773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa" Jun 20 19:34:56.639810 containerd[1645]: time="2025-06-20T19:34:56.639791422Z" level=error msg="ContainerStatus for \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\": not found" Jun 20 19:34:56.639948 kubelet[2923]: E0620 19:34:56.639936 2923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\": not found" containerID="773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa" Jun 20 19:34:56.640000 kubelet[2923]: I0620 19:34:56.639990 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa"} err="failed to get container status \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"773b4376e736c5b0ec60f878c89a22c54fa4806666e58cc1d44fc741377582aa\": not found" Jun 20 19:34:56.640045 kubelet[2923]: I0620 19:34:56.640027 2923 scope.go:117] "RemoveContainer" containerID="d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3" Jun 20 19:34:56.640208 containerd[1645]: time="2025-06-20T19:34:56.640179000Z" level=error msg="ContainerStatus for \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\": not found" Jun 20 19:34:56.640278 kubelet[2923]: E0620 19:34:56.640269 2923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\": not found" containerID="d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3" Jun 20 19:34:56.640420 kubelet[2923]: I0620 19:34:56.640321 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3"} err="failed to get container status \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d45b00a63201a7b6d303c29182b0ae7fa201e2ab49e710de485716fb2a7c42c3\": not found" Jun 20 19:34:56.640420 kubelet[2923]: I0620 19:34:56.640330 2923 scope.go:117] "RemoveContainer" containerID="25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9" Jun 20 19:34:56.640461 containerd[1645]: time="2025-06-20T19:34:56.640392170Z" level=error msg="ContainerStatus for \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\": not found" Jun 20 19:34:56.640570 kubelet[2923]: E0620 19:34:56.640530 2923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\": not found" containerID="25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9" Jun 20 19:34:56.640570 kubelet[2923]: I0620 19:34:56.640540 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9"} err="failed to get container status \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"25f05bdf332de98fe96462173359fac5563d1729d3cf3aa43eab988f2922afd9\": not found" Jun 20 19:34:56.640570 kubelet[2923]: I0620 19:34:56.640548 2923 scope.go:117] "RemoveContainer" containerID="f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8" Jun 20 19:34:56.640725 containerd[1645]: time="2025-06-20T19:34:56.640696729Z" level=error msg="ContainerStatus for \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\": not found" Jun 20 19:34:56.640803 kubelet[2923]: E0620 19:34:56.640795 2923 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\": not found" containerID="f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8" Jun 20 19:34:56.640855 kubelet[2923]: I0620 19:34:56.640834 2923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8"} err="failed to get container status \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4b5037d58a4c8a583efa3eedd7fa7144d5ef46be6ffe9db5e3afe0da5d67cc8\": not found" Jun 20 19:34:57.433099 kubelet[2923]: E0620 19:34:57.432997 2923 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:34:57.529644 sshd[4454]: Connection closed by 147.75.109.163 port 56708 Jun 20 19:34:57.529233 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:57.536410 systemd[1]: sshd@21-139.178.70.102:22-147.75.109.163:56708.service: Deactivated successfully. Jun 20 19:34:57.537879 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:34:57.538664 systemd-logind[1622]: Session 24 logged out. Waiting for processes to exit. Jun 20 19:34:57.540810 systemd[1]: Started sshd@22-139.178.70.102:22-147.75.109.163:55162.service - OpenSSH per-connection server daemon (147.75.109.163:55162). Jun 20 19:34:57.541281 systemd-logind[1622]: Removed session 24. Jun 20 19:34:57.580817 sshd[4614]: Accepted publickey for core from 147.75.109.163 port 55162 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:57.581718 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:57.585752 systemd-logind[1622]: New session 25 of user core. Jun 20 19:34:57.589684 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 19:34:57.896835 sshd[4616]: Connection closed by 147.75.109.163 port 55162 Jun 20 19:34:57.898448 sshd-session[4614]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:57.906946 systemd[1]: sshd@22-139.178.70.102:22-147.75.109.163:55162.service: Deactivated successfully. Jun 20 19:34:57.908238 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 19:34:57.910442 systemd-logind[1622]: Session 25 logged out. Waiting for processes to exit. Jun 20 19:34:57.913973 kubelet[2923]: I0620 19:34:57.913950 2923 memory_manager.go:355] "RemoveStaleState removing state" podUID="7114b668-d678-4a9c-aee4-006fa66a3550" containerName="cilium-agent" Jun 20 19:34:57.913973 kubelet[2923]: I0620 19:34:57.913964 2923 memory_manager.go:355] "RemoveStaleState removing state" podUID="f4fa4f99-f1b9-495f-9eb7-888368e9c869" containerName="cilium-operator" Jun 20 19:34:57.917799 systemd[1]: Started sshd@23-139.178.70.102:22-147.75.109.163:55168.service - OpenSSH per-connection server daemon (147.75.109.163:55168). Jun 20 19:34:57.918971 systemd-logind[1622]: Removed session 25. Jun 20 19:34:57.929852 systemd[1]: Created slice kubepods-burstable-pod3b17ce32_1299_4299_bebb_510f46fbad05.slice - libcontainer container kubepods-burstable-pod3b17ce32_1299_4299_bebb_510f46fbad05.slice. 
Jun 20 19:34:57.970189 sshd[4626]: Accepted publickey for core from 147.75.109.163 port 55168 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:57.971285 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:57.974600 systemd-logind[1622]: New session 26 of user core. Jun 20 19:34:57.979671 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 19:34:58.004482 kubelet[2923]: I0620 19:34:58.004457 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-xtables-lock\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004606 kubelet[2923]: I0620 19:34:58.004484 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b17ce32-1299-4299-bebb-510f46fbad05-clustermesh-secrets\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004606 kubelet[2923]: I0620 19:34:58.004499 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d59l\" (UniqueName: \"kubernetes.io/projected/3b17ce32-1299-4299-bebb-510f46fbad05-kube-api-access-6d59l\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004606 kubelet[2923]: I0620 19:34:58.004513 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-lib-modules\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004606 kubelet[2923]: I0620 19:34:58.004525 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-host-proc-sys-kernel\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004606 kubelet[2923]: I0620 19:34:58.004537 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-host-proc-sys-net\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004727 kubelet[2923]: I0620 19:34:58.004549 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b17ce32-1299-4299-bebb-510f46fbad05-hubble-tls\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004727 kubelet[2923]: I0620 19:34:58.004574 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-cilium-cgroup\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004727 kubelet[2923]: I0620 19:34:58.004587 2923 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3b17ce32-1299-4299-bebb-510f46fbad05-cilium-ipsec-secrets\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004727 kubelet[2923]: I0620 19:34:58.004598 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-hostproc\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004727 kubelet[2923]: I0620 19:34:58.004609 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b17ce32-1299-4299-bebb-510f46fbad05-cilium-config-path\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004727 kubelet[2923]: I0620 19:34:58.004623 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-etc-cni-netd\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004856 kubelet[2923]: I0620 19:34:58.004636 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-cilium-run\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004856 kubelet[2923]: I0620 19:34:58.004648 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-bpf-maps\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.004856 kubelet[2923]: I0620 19:34:58.004660 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b17ce32-1299-4299-bebb-510f46fbad05-cni-path\") pod \"cilium-d4989\" (UID: \"3b17ce32-1299-4299-bebb-510f46fbad05\") " pod="kube-system/cilium-d4989" Jun 20 19:34:58.027976 sshd[4628]: Connection closed by 147.75.109.163 port 55168 Jun 20 19:34:58.027653 sshd-session[4626]: pam_unix(sshd:session): session closed for user core Jun 20 19:34:58.037770 systemd[1]: sshd@23-139.178.70.102:22-147.75.109.163:55168.service: Deactivated successfully. Jun 20 19:34:58.039688 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 19:34:58.040897 systemd-logind[1622]: Session 26 logged out. Waiting for processes to exit. Jun 20 19:34:58.042351 systemd-logind[1622]: Removed session 26. Jun 20 19:34:58.043594 systemd[1]: Started sshd@24-139.178.70.102:22-147.75.109.163:55170.service - OpenSSH per-connection server daemon (147.75.109.163:55170). Jun 20 19:34:58.087231 sshd[4635]: Accepted publickey for core from 147.75.109.163 port 55170 ssh2: RSA SHA256:6mwSOnQ8XJGfIVY5Vbg0bVgZPwjakTRUB8GgWsnoHsQ Jun 20 19:34:58.088089 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:34:58.091448 systemd-logind[1622]: New session 27 of user core. 
Jun 20 19:34:58.104699 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 20 19:34:58.233642 containerd[1645]: time="2025-06-20T19:34:58.233610799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d4989,Uid:3b17ce32-1299-4299-bebb-510f46fbad05,Namespace:kube-system,Attempt:0,}" Jun 20 19:34:58.246069 containerd[1645]: time="2025-06-20T19:34:58.246034763Z" level=info msg="connecting to shim e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62" address="unix:///run/containerd/s/127f1144af2144116c6ceced0bc7bd18e52a5411899ca009ad594f883aa3ce34" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:34:58.266710 systemd[1]: Started cri-containerd-e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62.scope - libcontainer container e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62. Jun 20 19:34:58.285320 containerd[1645]: time="2025-06-20T19:34:58.285300603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d4989,Uid:3b17ce32-1299-4299-bebb-510f46fbad05,Namespace:kube-system,Attempt:0,} returns sandbox id \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\"" Jun 20 19:34:58.287005 containerd[1645]: time="2025-06-20T19:34:58.286986759Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:34:58.290113 containerd[1645]: time="2025-06-20T19:34:58.290088283Z" level=info msg="Container 327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:34:58.294122 containerd[1645]: time="2025-06-20T19:34:58.294089562Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251\"" Jun 20 19:34:58.294678 containerd[1645]: time="2025-06-20T19:34:58.294661374Z" level=info msg="StartContainer for \"327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251\"" Jun 20 19:34:58.295270 containerd[1645]: time="2025-06-20T19:34:58.295255548Z" level=info msg="connecting to shim 327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251" address="unix:///run/containerd/s/127f1144af2144116c6ceced0bc7bd18e52a5411899ca009ad594f883aa3ce34" protocol=ttrpc version=3 Jun 20 19:34:58.309730 systemd[1]: Started cri-containerd-327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251.scope - libcontainer container 327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251. Jun 20 19:34:58.327434 containerd[1645]: time="2025-06-20T19:34:58.326918120Z" level=info msg="StartContainer for \"327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251\" returns successfully" Jun 20 19:34:58.342029 systemd[1]: cri-containerd-327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251.scope: Deactivated successfully. Jun 20 19:34:58.342226 systemd[1]: cri-containerd-327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251.scope: Consumed 13ms CPU time, 9.7M memory peak, 3.3M read from disk. 
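The containerd entries above ("connecting to shim ... namespace=k8s.io protocol=ttrpc version=3") show the cilium-d4989 sandbox and its mount-cgroup init container being created through containerd's CRI plugin under the k8s.io namespace. A minimal sketch of inspecting that namespace with containerd's Go client (v1 import path, default socket path assumed); this is an illustration, not what the kubelet itself runs.

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the containerd socket (default path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace, as seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println("container:", c.ID())
	}
}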
Jun 20 19:34:58.343994 containerd[1645]: time="2025-06-20T19:34:58.343919914Z" level=info msg="received exit event container_id:\"327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251\" id:\"327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251\" pid:4706 exited_at:{seconds:1750448098 nanos:343721258}" Jun 20 19:34:58.344204 containerd[1645]: time="2025-06-20T19:34:58.344192976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251\" id:\"327b06400e90a0dfd7c619a71b17cf441bc22fecb9ac0dbeb0cbe832b56d8251\" pid:4706 exited_at:{seconds:1750448098 nanos:343721258}" Jun 20 19:34:58.381330 kubelet[2923]: I0620 19:34:58.381301 2923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7114b668-d678-4a9c-aee4-006fa66a3550" path="/var/lib/kubelet/pods/7114b668-d678-4a9c-aee4-006fa66a3550/volumes" Jun 20 19:34:58.381981 kubelet[2923]: I0620 19:34:58.381843 2923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fa4f99-f1b9-495f-9eb7-888368e9c869" path="/var/lib/kubelet/pods/f4fa4f99-f1b9-495f-9eb7-888368e9c869/volumes" Jun 20 19:34:58.624792 containerd[1645]: time="2025-06-20T19:34:58.624711528Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:34:58.629327 containerd[1645]: time="2025-06-20T19:34:58.629294218Z" level=info msg="Container e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:34:58.635195 containerd[1645]: time="2025-06-20T19:34:58.635125536Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623\"" Jun 20 19:34:58.635460 containerd[1645]: time="2025-06-20T19:34:58.635449415Z" level=info msg="StartContainer for \"e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623\"" Jun 20 19:34:58.636738 containerd[1645]: time="2025-06-20T19:34:58.636538036Z" level=info msg="connecting to shim e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623" address="unix:///run/containerd/s/127f1144af2144116c6ceced0bc7bd18e52a5411899ca009ad594f883aa3ce34" protocol=ttrpc version=3 Jun 20 19:34:58.653649 systemd[1]: Started cri-containerd-e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623.scope - libcontainer container e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623. Jun 20 19:34:58.672048 containerd[1645]: time="2025-06-20T19:34:58.672023641Z" level=info msg="StartContainer for \"e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623\" returns successfully" Jun 20 19:34:58.684949 systemd[1]: cri-containerd-e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623.scope: Deactivated successfully. 
Jun 20 19:34:58.685243 containerd[1645]: time="2025-06-20T19:34:58.685219273Z" level=info msg="received exit event container_id:\"e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623\" id:\"e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623\" pid:4749 exited_at:{seconds:1750448098 nanos:685085995}" Jun 20 19:34:58.685424 containerd[1645]: time="2025-06-20T19:34:58.685390286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623\" id:\"e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623\" pid:4749 exited_at:{seconds:1750448098 nanos:685085995}" Jun 20 19:34:58.685452 systemd[1]: cri-containerd-e5879f93cf44c36ab7533ab5814b218168457ed45a4920b693fb382e4ea47623.scope: Consumed 11ms CPU time, 7.4M memory peak, 2.2M read from disk. Jun 20 19:34:59.626380 containerd[1645]: time="2025-06-20T19:34:59.626304121Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:34:59.636747 containerd[1645]: time="2025-06-20T19:34:59.635877214Z" level=info msg="Container 70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:34:59.643971 containerd[1645]: time="2025-06-20T19:34:59.643947823Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f\"" Jun 20 19:34:59.644549 containerd[1645]: time="2025-06-20T19:34:59.644281980Z" level=info msg="StartContainer for \"70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f\"" Jun 20 19:34:59.645852 containerd[1645]: time="2025-06-20T19:34:59.645834505Z" level=info msg="connecting to shim 70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f" address="unix:///run/containerd/s/127f1144af2144116c6ceced0bc7bd18e52a5411899ca009ad594f883aa3ce34" protocol=ttrpc version=3 Jun 20 19:34:59.661664 systemd[1]: Started cri-containerd-70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f.scope - libcontainer container 70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f. Jun 20 19:34:59.684130 containerd[1645]: time="2025-06-20T19:34:59.684096990Z" level=info msg="StartContainer for \"70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f\" returns successfully" Jun 20 19:34:59.689678 systemd[1]: cri-containerd-70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f.scope: Deactivated successfully. 
Jun 20 19:34:59.690916 containerd[1645]: time="2025-06-20T19:34:59.690895533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f\" id:\"70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f\" pid:4794 exited_at:{seconds:1750448099 nanos:690730743}" Jun 20 19:34:59.691042 containerd[1645]: time="2025-06-20T19:34:59.691007468Z" level=info msg="received exit event container_id:\"70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f\" id:\"70ebb01f75505cecf1f75e21b05786e2b78959f1c297bea89ea6801e29646a5f\" pid:4794 exited_at:{seconds:1750448099 nanos:690730743}" Jun 20 19:35:00.632139 containerd[1645]: time="2025-06-20T19:35:00.632082246Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:35:00.643405 containerd[1645]: time="2025-06-20T19:35:00.642843777Z" level=info msg="Container 625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:35:00.645896 containerd[1645]: time="2025-06-20T19:35:00.645866442Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724\"" Jun 20 19:35:00.646726 containerd[1645]: time="2025-06-20T19:35:00.646646032Z" level=info msg="StartContainer for \"625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724\"" Jun 20 19:35:00.647175 containerd[1645]: time="2025-06-20T19:35:00.647161385Z" level=info msg="connecting to shim 625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724" address="unix:///run/containerd/s/127f1144af2144116c6ceced0bc7bd18e52a5411899ca009ad594f883aa3ce34" protocol=ttrpc version=3 Jun 20 19:35:00.664681 systemd[1]: Started cri-containerd-625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724.scope - libcontainer container 625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724. Jun 20 19:35:00.682358 systemd[1]: cri-containerd-625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724.scope: Deactivated successfully. Jun 20 19:35:00.683351 containerd[1645]: time="2025-06-20T19:35:00.683325517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724\" id:\"625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724\" pid:4832 exited_at:{seconds:1750448100 nanos:682554703}" Jun 20 19:35:00.683495 containerd[1645]: time="2025-06-20T19:35:00.683400625Z" level=info msg="received exit event container_id:\"625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724\" id:\"625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724\" pid:4832 exited_at:{seconds:1750448100 nanos:682554703}" Jun 20 19:35:00.688086 containerd[1645]: time="2025-06-20T19:35:00.688074433Z" level=info msg="StartContainer for \"625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724\" returns successfully" Jun 20 19:35:00.697003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-625597c66d1440cd95cbd0786608c88d695fc67ffa15da8d7bbb037eba684724-rootfs.mount: Deactivated successfully. 
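Each init container above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) exits almost immediately, and containerd reports the exit as exited_at:{seconds:... nanos:...}. Those fields are plain Unix timestamps; a small Go sketch converts one of the values from the log back into wall-clock time.

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at value reported for the clean-cilium-state task above.
	exitedAt := time.Unix(1750448100, 682554703).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-06-20T19:35:00.682554703Z
}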
Jun 20 19:35:01.635724 containerd[1645]: time="2025-06-20T19:35:01.635665925Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:35:01.645971 containerd[1645]: time="2025-06-20T19:35:01.645948252Z" level=info msg="Container 83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:35:01.650114 containerd[1645]: time="2025-06-20T19:35:01.650091063Z" level=info msg="CreateContainer within sandbox \"e196ed803e7fd12ec210f634923dd9ec5326602cb1469db1383cb1b76b3fbf62\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\"" Jun 20 19:35:01.650494 containerd[1645]: time="2025-06-20T19:35:01.650482761Z" level=info msg="StartContainer for \"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\"" Jun 20 19:35:01.651104 containerd[1645]: time="2025-06-20T19:35:01.651076735Z" level=info msg="connecting to shim 83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008" address="unix:///run/containerd/s/127f1144af2144116c6ceced0bc7bd18e52a5411899ca009ad594f883aa3ce34" protocol=ttrpc version=3 Jun 20 19:35:01.665744 systemd[1]: Started cri-containerd-83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008.scope - libcontainer container 83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008. Jun 20 19:35:01.685788 containerd[1645]: time="2025-06-20T19:35:01.685764583Z" level=info msg="StartContainer for \"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\" returns successfully" Jun 20 19:35:01.770101 containerd[1645]: time="2025-06-20T19:35:01.770076937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\" id:\"3f529f8cc31071c2d5200c74e9f63fb55cddb3d7822d0ad1309c2c6926b80b34\" pid:4895 exited_at:{seconds:1750448101 nanos:769521794}" Jun 20 19:35:02.242589 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jun 20 19:35:02.652791 kubelet[2923]: I0620 19:35:02.652627 2923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d4989" podStartSLOduration=5.652604203 podStartE2EDuration="5.652604203s" podCreationTimestamp="2025-06-20 19:34:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:35:02.651964486 +0000 UTC m=+110.403757413" watchObservedRunningTime="2025-06-20 19:35:02.652604203 +0000 UTC m=+110.404397130" Jun 20 19:35:04.416943 containerd[1645]: time="2025-06-20T19:35:04.416916833Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\" id:\"ac4739562aeaa31dbbd150dbf83c88ca15947ddeb6af8a428cde595cda716edf\" pid:5290 exit_status:1 exited_at:{seconds:1750448104 nanos:416400007}" Jun 20 19:35:04.710163 systemd-networkd[1531]: lxc_health: Link UP Jun 20 19:35:04.711750 systemd-networkd[1531]: lxc_health: Gained carrier Jun 20 19:35:05.952731 systemd-networkd[1531]: lxc_health: Gained IPv6LL Jun 20 19:35:06.519928 containerd[1645]: time="2025-06-20T19:35:06.519900752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\" 
id:\"689a59f797ba4edd4ea5a7066371b32155f98ade92c87c68c2f170911f2c33d9\" pid:5432 exited_at:{seconds:1750448106 nanos:519579381}" Jun 20 19:35:08.582933 containerd[1645]: time="2025-06-20T19:35:08.582850339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\" id:\"7f086083eeba5e1966ff2eca3106fde37480401aa4ffd4f0acb1727e97bb30ad\" pid:5463 exited_at:{seconds:1750448108 nanos:582405280}" Jun 20 19:35:10.650329 containerd[1645]: time="2025-06-20T19:35:10.650299455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83923bd404e44f3f8cdf0911b57873f61389a16fd02530238644cf2995258008\" id:\"8f6219b03dafc40a05e06da3939964840a3d9add30edf83865528647e5a14de0\" pid:5485 exited_at:{seconds:1750448110 nanos:649773729}" Jun 20 19:35:10.651991 kubelet[2923]: E0620 19:35:10.651911 2923 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37318->127.0.0.1:46085: write tcp 127.0.0.1:37318->127.0.0.1:46085: write: broken pipe Jun 20 19:35:10.658848 sshd[4639]: Connection closed by 147.75.109.163 port 55170 Jun 20 19:35:10.659329 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Jun 20 19:35:10.661689 systemd[1]: sshd@24-139.178.70.102:22-147.75.109.163:55170.service: Deactivated successfully. Jun 20 19:35:10.662870 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 19:35:10.663729 systemd-logind[1622]: Session 27 logged out. Waiting for processes to exit. Jun 20 19:35:10.664326 systemd-logind[1622]: Removed session 27. Jun 20 19:35:12.360861 containerd[1645]: time="2025-06-20T19:35:12.360788016Z" level=info msg="StopPodSandbox for \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\"" Jun 20 19:35:12.361206 containerd[1645]: time="2025-06-20T19:35:12.360978165Z" level=info msg="TearDown network for sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" successfully" Jun 20 19:35:12.361206 containerd[1645]: time="2025-06-20T19:35:12.360990172Z" level=info msg="StopPodSandbox for \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" returns successfully" Jun 20 19:35:12.361556 containerd[1645]: time="2025-06-20T19:35:12.361535151Z" level=info msg="RemovePodSandbox for \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\"" Jun 20 19:35:12.366587 containerd[1645]: time="2025-06-20T19:35:12.366430865Z" level=info msg="Forcibly stopping sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\"" Jun 20 19:35:12.366587 containerd[1645]: time="2025-06-20T19:35:12.366505111Z" level=info msg="TearDown network for sandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" successfully" Jun 20 19:35:12.370357 containerd[1645]: time="2025-06-20T19:35:12.370018807Z" level=info msg="Ensure that sandbox 5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8 in task-service has been cleanup successfully" Jun 20 19:35:12.371605 containerd[1645]: time="2025-06-20T19:35:12.371588111Z" level=info msg="RemovePodSandbox \"5a627f281bcc4e5319714f3bff6c2a9e0c6fa0dcb374efd053c2ffc2683776f8\" returns successfully" Jun 20 19:35:12.371787 containerd[1645]: time="2025-06-20T19:35:12.371775895Z" level=info msg="StopPodSandbox for \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\"" Jun 20 19:35:12.372010 containerd[1645]: time="2025-06-20T19:35:12.371880903Z" level=info msg="TearDown network for sandbox 
\"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" successfully" Jun 20 19:35:12.372010 containerd[1645]: time="2025-06-20T19:35:12.371888822Z" level=info msg="StopPodSandbox for \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" returns successfully" Jun 20 19:35:12.372050 containerd[1645]: time="2025-06-20T19:35:12.372040427Z" level=info msg="RemovePodSandbox for \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\"" Jun 20 19:35:12.372067 containerd[1645]: time="2025-06-20T19:35:12.372051359Z" level=info msg="Forcibly stopping sandbox \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\"" Jun 20 19:35:12.372106 containerd[1645]: time="2025-06-20T19:35:12.372092391Z" level=info msg="TearDown network for sandbox \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" successfully" Jun 20 19:35:12.372610 containerd[1645]: time="2025-06-20T19:35:12.372594256Z" level=info msg="Ensure that sandbox 47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f in task-service has been cleanup successfully" Jun 20 19:35:12.373345 containerd[1645]: time="2025-06-20T19:35:12.373331781Z" level=info msg="RemovePodSandbox \"47f0f087fa3098e0e97d4b6324df13ff5bab3b61d33b61b040fed0ab0b68687f\" returns successfully"