May 13 12:51:44.721856 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 11:28:50 -00 2025 May 13 12:51:44.721873 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1 May 13 12:51:44.721879 kernel: Disabled fast string operations May 13 12:51:44.721883 kernel: BIOS-provided physical RAM map: May 13 12:51:44.721887 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 13 12:51:44.721891 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 13 12:51:44.721897 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 13 12:51:44.721901 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 13 12:51:44.721906 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 13 12:51:44.721910 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 13 12:51:44.721914 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 13 12:51:44.721918 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 13 12:51:44.721922 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 13 12:51:44.721927 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 13 12:51:44.721942 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 13 12:51:44.721948 kernel: NX (Execute Disable) protection: active May 13 12:51:44.721953 kernel: APIC: Static calls initialized May 13 12:51:44.721960 kernel: SMBIOS 2.7 present. May 13 12:51:44.721965 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 13 12:51:44.721970 kernel: DMI: Memory slots populated: 1/128 May 13 12:51:44.721976 kernel: vmware: hypercall mode: 0x00 May 13 12:51:44.721981 kernel: Hypervisor detected: VMware May 13 12:51:44.721986 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 13 12:51:44.721991 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 13 12:51:44.721995 kernel: vmware: using clock offset of 4589663768 ns May 13 12:51:44.722000 kernel: tsc: Detected 3408.000 MHz processor May 13 12:51:44.722005 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 12:51:44.722010 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 12:51:44.722015 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 13 12:51:44.722020 kernel: total RAM covered: 3072M May 13 12:51:44.722026 kernel: Found optimal setting for mtrr clean up May 13 12:51:44.722031 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 13 12:51:44.722036 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 13 12:51:44.722041 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 12:51:44.722046 kernel: Using GB pages for direct mapping May 13 12:51:44.722051 kernel: ACPI: Early table checksum verification disabled May 13 12:51:44.722056 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 13 12:51:44.722061 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 13 12:51:44.722065 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 13 12:51:44.722072 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 13 12:51:44.722078 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 12:51:44.722083 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 12:51:44.722088 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 13 12:51:44.722094 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 13 12:51:44.722099 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 13 12:51:44.722105 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 13 12:51:44.722110 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 13 12:51:44.722115 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 13 12:51:44.722120 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 13 12:51:44.722125 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 13 12:51:44.722130 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 12:51:44.722136 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 12:51:44.722141 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 13 12:51:44.722146 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 13 12:51:44.722152 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 13 12:51:44.722157 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 13 12:51:44.722162 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 13 12:51:44.722166 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 13 12:51:44.722172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 13 12:51:44.722177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 13 12:51:44.722182 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 13 12:51:44.722187 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] May 13 12:51:44.722192 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] May 13 12:51:44.722198 kernel: Zone ranges: May 13 12:51:44.722203 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 12:51:44.722208 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 13 12:51:44.722213 kernel: Normal empty May 13 12:51:44.722218 kernel: Device empty May 13 12:51:44.722223 kernel: Movable zone start for each node May 13 12:51:44.722228 kernel: Early memory node ranges May 13 12:51:44.722233 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 13 12:51:44.722238 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 13 12:51:44.722243 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 13 12:51:44.722249 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 13 12:51:44.722254 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 12:51:44.722260 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 13 12:51:44.722265 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 13 12:51:44.722270 kernel: ACPI: PM-Timer IO Port: 0x1008 May 13 12:51:44.722275 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 13 12:51:44.722280 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 13 12:51:44.722285 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 13 12:51:44.722290 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 13 12:51:44.722296 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 13 12:51:44.722301 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 13 12:51:44.722306 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) May 13 12:51:44.722310 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 13 12:51:44.722316 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 13 12:51:44.722320 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 13 12:51:44.722325 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 13 12:51:44.722330 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 13 12:51:44.722335 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 13 12:51:44.722341 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 13 12:51:44.722346 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 13 12:51:44.722351 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 13 12:51:44.722356 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 13 12:51:44.722362 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 13 12:51:44.722366 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 13 12:51:44.722371 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 13 12:51:44.722376 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 13 12:51:44.722381 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 13 12:51:44.722386 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 13 12:51:44.722391 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 13 12:51:44.722398 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 13 12:51:44.722403 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 13 12:51:44.722408 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 13 12:51:44.722413 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 13 12:51:44.722418 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 13 12:51:44.722423 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 13 12:51:44.722428 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 13 12:51:44.722433 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 13 12:51:44.722438 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 13 12:51:44.722443 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 13 12:51:44.722449 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 13 12:51:44.722454 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 13 12:51:44.722459 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 13 12:51:44.722464 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 13 12:51:44.722469 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 13 12:51:44.722475 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 13 12:51:44.722483 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 13 12:51:44.722489 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 13 12:51:44.722494 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 13 12:51:44.722501 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 13 12:51:44.722506 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 13 12:51:44.722511 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 13 12:51:44.722517 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 13 12:51:44.722522 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 13 12:51:44.722527 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 13 12:51:44.722533 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) May 13 12:51:44.722538 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 13 12:51:44.722543 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 13 12:51:44.722550 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 13 12:51:44.722555 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 13 12:51:44.722560 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 13 12:51:44.722565 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 13 12:51:44.722571 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 13 12:51:44.722576 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 13 12:51:44.722581 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 13 12:51:44.722587 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 13 12:51:44.722592 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 13 12:51:44.722598 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 13 12:51:44.722604 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 13 12:51:44.722609 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 13 12:51:44.722614 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 13 12:51:44.722620 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 13 12:51:44.722625 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 13 12:51:44.722630 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 13 12:51:44.722635 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 13 12:51:44.722641 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 13 12:51:44.722646 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 13 12:51:44.722652 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 13 12:51:44.722658 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 13 12:51:44.722663 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 13 12:51:44.722668 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 13 12:51:44.722673 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 13 12:51:44.722679 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 13 12:51:44.722684 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 13 12:51:44.722689 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 13 12:51:44.722695 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 13 12:51:44.722700 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 13 12:51:44.722706 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 13 12:51:44.722711 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 13 12:51:44.722717 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 13 12:51:44.722722 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 13 12:51:44.722727 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 13 12:51:44.722733 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 13 12:51:44.722738 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 13 12:51:44.722743 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 13 12:51:44.722749 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 13 12:51:44.722754 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 13 12:51:44.722760 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 13 12:51:44.722765 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 13 12:51:44.722771 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 13 12:51:44.722776 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 13 12:51:44.722782 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 13 12:51:44.722787 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 13 12:51:44.722792 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 13 12:51:44.722797 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 13 12:51:44.722803 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 13 12:51:44.722808 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 13 12:51:44.722814 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 13 12:51:44.722819 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 13 12:51:44.722825 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 13 12:51:44.722830 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 13 12:51:44.722836 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 13 12:51:44.722841 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 13 12:51:44.722846 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 13 12:51:44.722851 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 13 12:51:44.722857 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 13 12:51:44.722863 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 13 12:51:44.722869 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 13 12:51:44.722874 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 13 12:51:44.722879 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 13 12:51:44.722885 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 13 12:51:44.722890 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 13 12:51:44.722895 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 13 12:51:44.722900 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 13 12:51:44.722906 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 13 12:51:44.722911 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 13 12:51:44.722917 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 13 12:51:44.722923 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 13 12:51:44.722928 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 13 12:51:44.722947 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 13 12:51:44.722954 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 13 12:51:44.722959 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 13 12:51:44.722964 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 13 12:51:44.722970 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 13 12:51:44.722975 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 13 12:51:44.722981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 13 12:51:44.722988 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 12:51:44.722994 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 13 12:51:44.722999 kernel: TSC deadline timer available May 13 12:51:44.723004 kernel: CPU topo: Max. logical packages: 128 May 13 12:51:44.723010 kernel: CPU topo: Max. logical dies: 128 May 13 12:51:44.723015 kernel: CPU topo: Max. 
dies per package: 1 May 13 12:51:44.723021 kernel: CPU topo: Max. threads per core: 1 May 13 12:51:44.723026 kernel: CPU topo: Num. cores per package: 1 May 13 12:51:44.723031 kernel: CPU topo: Num. threads per package: 1 May 13 12:51:44.723038 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs May 13 12:51:44.723043 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 13 12:51:44.723049 kernel: Booting paravirtualized kernel on VMware hypervisor May 13 12:51:44.723054 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 12:51:44.723060 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 13 12:51:44.723066 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 May 13 12:51:44.723071 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 May 13 12:51:44.723076 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 13 12:51:44.723082 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 13 12:51:44.723088 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 13 12:51:44.723093 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 13 12:51:44.723099 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 13 12:51:44.723104 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 13 12:51:44.723109 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 13 12:51:44.723114 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 13 12:51:44.723120 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 13 12:51:44.723125 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 13 12:51:44.723130 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 13 12:51:44.723136 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 13 12:51:44.723142 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 13 12:51:44.723147 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 13 12:51:44.723152 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 13 12:51:44.723158 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 13 12:51:44.723164 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1 May 13 12:51:44.723169 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 12:51:44.723175 kernel: random: crng init done May 13 12:51:44.723181 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 13 12:51:44.723187 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 13 12:51:44.723192 kernel: printk: log_buf_len min size: 262144 bytes May 13 12:51:44.723198 kernel: printk: log_buf_len: 1048576 bytes May 13 12:51:44.723203 kernel: printk: early log buf free: 245576(93%) May 13 12:51:44.723209 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 12:51:44.723214 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 12:51:44.723219 kernel: Fallback order for Node 0: 0 May 13 12:51:44.723225 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 May 13 12:51:44.723231 kernel: Policy zone: DMA32 May 13 12:51:44.723237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 12:51:44.723242 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 13 12:51:44.723248 kernel: ftrace: allocating 40071 entries in 157 pages May 13 12:51:44.723253 kernel: ftrace: allocated 157 pages with 5 groups May 13 12:51:44.723258 kernel: Dynamic Preempt: voluntary May 13 12:51:44.723264 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 12:51:44.723269 kernel: rcu: RCU event tracing is enabled. May 13 12:51:44.723275 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 13 12:51:44.723281 kernel: Trampoline variant of Tasks RCU enabled. May 13 12:51:44.723287 kernel: Rude variant of Tasks RCU enabled. May 13 12:51:44.723292 kernel: Tracing variant of Tasks RCU enabled. May 13 12:51:44.723297 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 12:51:44.723303 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 13 12:51:44.723308 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 12:51:44.723314 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 12:51:44.723319 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 12:51:44.723325 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 13 12:51:44.723331 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 13 12:51:44.723337 kernel: Console: colour VGA+ 80x25 May 13 12:51:44.723342 kernel: printk: legacy console [tty0] enabled May 13 12:51:44.723347 kernel: printk: legacy console [ttyS0] enabled May 13 12:51:44.723353 kernel: ACPI: Core revision 20240827 May 13 12:51:44.723358 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 13 12:51:44.723364 kernel: APIC: Switch to symmetric I/O mode setup May 13 12:51:44.723369 kernel: x2apic enabled May 13 12:51:44.723375 kernel: APIC: Switched APIC routing to: physical x2apic May 13 12:51:44.723380 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 12:51:44.723387 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 12:51:44.723392 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 13 12:51:44.723398 kernel: Disabled fast string operations May 13 12:51:44.723403 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 12:51:44.723409 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 12:51:44.723414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 12:51:44.723420 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit May 13 12:51:44.723425 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 13 12:51:44.723431 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 13 12:51:44.723437 kernel: RETBleed: Mitigation: Enhanced IBRS May 13 12:51:44.723443 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 12:51:44.723448 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 12:51:44.723453 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 12:51:44.723459 kernel: SRBDS: Unknown: Dependent on hypervisor status May 13 12:51:44.723464 kernel: GDS: Unknown: Dependent on hypervisor status May 13 12:51:44.723470 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 12:51:44.723476 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 12:51:44.723482 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 12:51:44.723488 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 12:51:44.723493 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 13 12:51:44.723499 kernel: Freeing SMP alternatives memory: 32K May 13 12:51:44.723504 kernel: pid_max: default: 131072 minimum: 1024 May 13 12:51:44.723510 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 13 12:51:44.723515 kernel: landlock: Up and running. May 13 12:51:44.723520 kernel: SELinux: Initializing. May 13 12:51:44.723526 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 12:51:44.723532 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 12:51:44.723538 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 13 12:51:44.723543 kernel: Performance Events: Skylake events, core PMU driver. May 13 12:51:44.723549 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 13 12:51:44.723555 kernel: core: CPUID marked event: 'instructions' unavailable May 13 12:51:44.723560 kernel: core: CPUID marked event: 'bus cycles' unavailable May 13 12:51:44.723565 kernel: core: CPUID marked event: 'cache references' unavailable May 13 12:51:44.723570 kernel: core: CPUID marked event: 'cache misses' unavailable May 13 12:51:44.723575 kernel: core: CPUID marked event: 'branch instructions' unavailable May 13 12:51:44.723582 kernel: core: CPUID marked event: 'branch misses' unavailable May 13 12:51:44.723587 kernel: ... version: 1 May 13 12:51:44.723592 kernel: ... bit width: 48 May 13 12:51:44.723598 kernel: ... generic registers: 4 May 13 12:51:44.723603 kernel: ... value mask: 0000ffffffffffff May 13 12:51:44.723609 kernel: ... max period: 000000007fffffff May 13 12:51:44.723614 kernel: ... fixed-purpose events: 0 May 13 12:51:44.723620 kernel: ... 
event mask: 000000000000000f May 13 12:51:44.723625 kernel: signal: max sigframe size: 1776 May 13 12:51:44.723631 kernel: rcu: Hierarchical SRCU implementation. May 13 12:51:44.723637 kernel: rcu: Max phase no-delay instances is 400. May 13 12:51:44.723642 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level May 13 12:51:44.723648 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 12:51:44.723653 kernel: smp: Bringing up secondary CPUs ... May 13 12:51:44.723658 kernel: smpboot: x86: Booting SMP configuration: May 13 12:51:44.723664 kernel: .... node #0, CPUs: #1 May 13 12:51:44.723669 kernel: Disabled fast string operations May 13 12:51:44.723675 kernel: smp: Brought up 1 node, 2 CPUs May 13 12:51:44.723681 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 13 12:51:44.723687 kernel: Memory: 1924224K/2096628K available (14336K kernel code, 2430K rwdata, 9948K rodata, 54420K init, 2548K bss, 161020K reserved, 0K cma-reserved) May 13 12:51:44.723692 kernel: devtmpfs: initialized May 13 12:51:44.723698 kernel: x86/mm: Memory block size: 128MB May 13 12:51:44.723703 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 13 12:51:44.723709 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 12:51:44.723714 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 13 12:51:44.723719 kernel: pinctrl core: initialized pinctrl subsystem May 13 12:51:44.723725 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 12:51:44.723731 kernel: audit: initializing netlink subsys (disabled) May 13 12:51:44.723737 kernel: audit: type=2000 audit(1747140701.067:1): state=initialized audit_enabled=0 res=1 May 13 12:51:44.723743 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 12:51:44.723748 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 12:51:44.723754 kernel: cpuidle: using governor menu May 13 12:51:44.723759 kernel: Simple Boot Flag at 0x36 set to 0x80 May 13 12:51:44.723764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 12:51:44.723770 kernel: dca service started, version 1.12.1 May 13 12:51:44.723776 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] May 13 12:51:44.723782 kernel: PCI: Using configuration type 1 for base access May 13 12:51:44.723794 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 12:51:44.723801 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 12:51:44.723807 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 12:51:44.723812 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 12:51:44.723818 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 12:51:44.723824 kernel: ACPI: Added _OSI(Module Device) May 13 12:51:44.723829 kernel: ACPI: Added _OSI(Processor Device) May 13 12:51:44.723835 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 12:51:44.723842 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 12:51:44.723848 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 12:51:44.723853 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 13 12:51:44.723859 kernel: ACPI: Interpreter enabled May 13 12:51:44.723865 kernel: ACPI: PM: (supports S0 S1 S5) May 13 12:51:44.723871 kernel: ACPI: Using IOAPIC for interrupt routing May 13 12:51:44.723876 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 12:51:44.723882 kernel: PCI: Using E820 reservations for host bridge windows May 13 12:51:44.723888 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 13 12:51:44.723895 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 13 12:51:44.723996 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 12:51:44.724050 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 13 12:51:44.724098 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 13 12:51:44.724107 kernel: PCI host bridge to bus 0000:00 May 13 12:51:44.724159 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 12:51:44.724204 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 13 12:51:44.724251 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 12:51:44.724294 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 12:51:44.724337 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 13 12:51:44.724380 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 13 12:51:44.724439 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint May 13 12:51:44.724495 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge May 13 12:51:44.724548 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 12:51:44.724603 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint May 13 12:51:44.724658 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint May 13 12:51:44.724710 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] May 13 12:51:44.724763 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 13 12:51:44.724812 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 13 12:51:44.724861 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 13 12:51:44.724910 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk May 13 12:51:44.724997 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 13 12:51:44.725049 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 13 12:51:44.725101 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 13 
12:51:44.725154 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint May 13 12:51:44.725204 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] May 13 12:51:44.725253 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] May 13 12:51:44.725306 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint May 13 12:51:44.725355 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] May 13 12:51:44.725403 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] May 13 12:51:44.725454 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] May 13 12:51:44.725502 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] May 13 12:51:44.725550 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 12:51:44.725605 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge May 13 12:51:44.725655 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 12:51:44.725702 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 12:51:44.725751 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 12:51:44.725802 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 12:51:44.725857 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.725908 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 12:51:44.725967 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 12:51:44.726016 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 12:51:44.726066 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 13 12:51:44.726120 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.726172 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 12:51:44.726221 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 12:51:44.726270 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 12:51:44.726318 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 12:51:44.726366 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 13 12:51:44.726435 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.726489 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 12:51:44.726541 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 12:51:44.726590 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 12:51:44.726639 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 12:51:44.726687 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 13 12:51:44.726742 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.726792 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 12:51:44.726844 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 12:51:44.726892 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 12:51:44.726949 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 13 12:51:44.727003 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.727054 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 12:51:44.727103 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 12:51:44.727152 kernel: pci 0000:00:15.4: bridge 
window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 12:51:44.727209 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 13 12:51:44.727265 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.727315 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 12:51:44.727363 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 12:51:44.727412 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 12:51:44.727471 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 13 12:51:44.727524 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.727576 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 12:51:44.727626 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 12:51:44.727676 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 12:51:44.727724 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 13 12:51:44.727779 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.727834 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 12:51:44.727886 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 12:51:44.728082 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 12:51:44.728138 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 13 12:51:44.728194 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.728244 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 12:51:44.728293 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 12:51:44.728341 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 12:51:44.728390 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 13 12:51:44.728442 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.728495 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 12:51:44.728544 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 12:51:44.728593 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 12:51:44.728642 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 12:51:44.728690 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 13 12:51:44.728742 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.728793 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 12:51:44.728963 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 12:51:44.729018 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 12:51:44.729068 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 12:51:44.729118 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 13 12:51:44.729175 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.729225 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 12:51:44.729277 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 12:51:44.729326 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 12:51:44.729375 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 13 12:51:44.729430 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.729480 kernel: pci 
0000:00:16.4: PCI bridge to [bus 0f] May 13 12:51:44.729529 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 12:51:44.729577 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 12:51:44.729626 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 13 12:51:44.729681 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.729730 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 12:51:44.729779 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 12:51:44.729828 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 12:51:44.729877 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 13 12:51:44.729930 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.730111 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 12:51:44.730165 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 12:51:44.730215 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 12:51:44.730266 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 13 12:51:44.730320 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.730370 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 12:51:44.730420 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 12:51:44.730469 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 12:51:44.730521 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 13 12:51:44.730576 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.730627 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 12:51:44.730676 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 12:51:44.730726 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 12:51:44.730775 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 12:51:44.730825 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 13 12:51:44.730882 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.730939 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 12:51:44.731000 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 12:51:44.731050 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 12:51:44.731103 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 12:51:44.731152 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 13 12:51:44.731207 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.731259 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 12:51:44.731309 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 12:51:44.731358 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 12:51:44.731408 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 12:51:44.731460 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 13 12:51:44.731514 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.731565 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 12:51:44.731614 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 12:51:44.731663 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 12:51:44.731713 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 13 12:51:44.731767 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.731820 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 12:51:44.731869 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 12:51:44.731919 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 12:51:44.731995 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 13 12:51:44.732052 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.733005 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 12:51:44.733064 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 12:51:44.733120 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 12:51:44.733171 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 13 12:51:44.733227 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.733278 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 12:51:44.733328 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 12:51:44.733379 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 12:51:44.733428 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 13 12:51:44.733483 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.733536 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 12:51:44.733586 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 12:51:44.733635 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 12:51:44.733687 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 13 12:51:44.733742 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.733792 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 12:51:44.733844 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 12:51:44.733894 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 12:51:44.736617 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 12:51:44.736681 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 13 12:51:44.736740 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.736794 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 12:51:44.736845 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 12:51:44.736902 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 12:51:44.736972 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 12:51:44.737024 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 13 12:51:44.737080 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.737131 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 12:51:44.737182 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 12:51:44.737231 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 12:51:44.737281 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 13 12:51:44.737339 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root 
Port May 13 12:51:44.737389 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 12:51:44.737439 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 12:51:44.737488 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 12:51:44.737537 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 13 12:51:44.737590 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.737641 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 12:51:44.737693 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 12:51:44.737742 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 12:51:44.737791 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 13 12:51:44.737847 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.737898 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 12:51:44.737962 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 12:51:44.738012 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 12:51:44.738064 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 13 12:51:44.738132 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.738183 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 12:51:44.738232 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 12:51:44.738292 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 12:51:44.738343 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 13 12:51:44.738395 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:51:44.738465 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 12:51:44.738526 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 12:51:44.738577 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 12:51:44.738626 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 13 12:51:44.738678 kernel: pci_bus 0000:01: extended config space not accessible May 13 12:51:44.738729 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 12:51:44.738791 kernel: pci_bus 0000:02: extended config space not accessible May 13 12:51:44.738803 kernel: acpiphp: Slot [32] registered May 13 12:51:44.738809 kernel: acpiphp: Slot [33] registered May 13 12:51:44.738815 kernel: acpiphp: Slot [34] registered May 13 12:51:44.738821 kernel: acpiphp: Slot [35] registered May 13 12:51:44.738827 kernel: acpiphp: Slot [36] registered May 13 12:51:44.738832 kernel: acpiphp: Slot [37] registered May 13 12:51:44.738838 kernel: acpiphp: Slot [38] registered May 13 12:51:44.738844 kernel: acpiphp: Slot [39] registered May 13 12:51:44.738850 kernel: acpiphp: Slot [40] registered May 13 12:51:44.738855 kernel: acpiphp: Slot [41] registered May 13 12:51:44.738863 kernel: acpiphp: Slot [42] registered May 13 12:51:44.738868 kernel: acpiphp: Slot [43] registered May 13 12:51:44.738874 kernel: acpiphp: Slot [44] registered May 13 12:51:44.738880 kernel: acpiphp: Slot [45] registered May 13 12:51:44.738886 kernel: acpiphp: Slot [46] registered May 13 12:51:44.738892 kernel: acpiphp: Slot [47] registered May 13 12:51:44.738898 kernel: acpiphp: Slot [48] registered May 13 12:51:44.738906 kernel: acpiphp: Slot [49] registered May 13 12:51:44.738915 kernel: acpiphp: Slot [50] registered 
May 13 12:51:44.738925 kernel: acpiphp: Slot [51] registered May 13 12:51:44.739336 kernel: acpiphp: Slot [52] registered May 13 12:51:44.739346 kernel: acpiphp: Slot [53] registered May 13 12:51:44.739352 kernel: acpiphp: Slot [54] registered May 13 12:51:44.739358 kernel: acpiphp: Slot [55] registered May 13 12:51:44.739364 kernel: acpiphp: Slot [56] registered May 13 12:51:44.739370 kernel: acpiphp: Slot [57] registered May 13 12:51:44.739376 kernel: acpiphp: Slot [58] registered May 13 12:51:44.739383 kernel: acpiphp: Slot [59] registered May 13 12:51:44.739393 kernel: acpiphp: Slot [60] registered May 13 12:51:44.739407 kernel: acpiphp: Slot [61] registered May 13 12:51:44.739418 kernel: acpiphp: Slot [62] registered May 13 12:51:44.739425 kernel: acpiphp: Slot [63] registered May 13 12:51:44.739502 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 12:51:44.739558 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 13 12:51:44.739612 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 13 12:51:44.739665 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 13 12:51:44.739723 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 13 12:51:44.739776 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 13 12:51:44.739835 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint May 13 12:51:44.739903 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] May 13 12:51:44.740033 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] May 13 12:51:44.740462 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] May 13 12:51:44.740519 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 13 12:51:44.740572 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' May 13 12:51:44.740628 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 12:51:44.740682 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 12:51:44.740734 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 12:51:44.740786 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 12:51:44.740838 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 12:51:44.740891 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 12:51:44.740952 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 12:51:44.741008 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 12:51:44.741065 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint May 13 12:51:44.741117 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] May 13 12:51:44.741168 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] May 13 12:51:44.741218 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] May 13 12:51:44.741268 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] May 13 12:51:44.741317 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] May 13 12:51:44.741370 kernel: pci 0000:0b:00.0: supports D1 D2 May 13 12:51:44.741420 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 12:51:44.741470 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 12:51:44.741521 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 12:51:44.741573 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 12:51:44.741624 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 12:51:44.741676 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 12:51:44.741727 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 12:51:44.741781 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 12:51:44.741832 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 12:51:44.741882 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 12:51:44.741940 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 12:51:44.741998 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 12:51:44.742052 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 12:51:44.742104 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 12:51:44.742155 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 12:51:44.742208 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 12:51:44.742259 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 12:51:44.742310 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 12:51:44.742360 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 12:51:44.742410 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 12:51:44.742459 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 12:51:44.742509 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 12:51:44.742561 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 12:51:44.742610 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 12:51:44.742662 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 12:51:44.742713 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 12:51:44.742725 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 13 12:51:44.742732 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 13 12:51:44.742738 kernel: ACPI: PCI: Interrupt link LNKB disabled May 13 12:51:44.742746 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 12:51:44.742752 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 13 12:51:44.742758 kernel: iommu: Default domain type: Translated May 13 12:51:44.742764 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 12:51:44.742770 kernel: PCI: Using ACPI for IRQ routing May 13 12:51:44.742776 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 12:51:44.742782 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 13 12:51:44.742788 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 13 12:51:44.742843 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 13 12:51:44.742907 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 13 12:51:44.742988 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 12:51:44.742999 kernel: vgaarb: loaded May 13 12:51:44.743005 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 13 12:51:44.743011 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 13 12:51:44.743017 kernel: clocksource: Switched to clocksource tsc-early May 13 12:51:44.743023 kernel: VFS: Disk quotas dquot_6.6.0 May 13 12:51:44.743029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 12:51:44.743035 kernel: pnp: PnP ACPI init May 13 12:51:44.743101 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 13 12:51:44.743151 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved May 13 12:51:44.743196 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 13 12:51:44.743244 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 13 12:51:44.743292 kernel: pnp 00:06: [dma 2] May 13 12:51:44.743342 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 13 12:51:44.743388 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 13 12:51:44.743435 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 13 12:51:44.743443 kernel: pnp: PnP ACPI: found 8 devices May 13 12:51:44.743454 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 12:51:44.743460 kernel: NET: Registered PF_INET protocol family May 13 12:51:44.743466 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 12:51:44.743472 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 13 12:51:44.743478 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 12:51:44.743484 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 12:51:44.743492 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 12:51:44.743498 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 12:51:44.743504 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 12:51:44.743510 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 12:51:44.743516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 12:51:44.743522 kernel: NET: Registered PF_XDP protocol family May 13 12:51:44.743578 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 12:51:44.743641 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 13 12:51:44.743699 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 13 12:51:44.743750 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 13 12:51:44.743801 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 13 12:51:44.743851 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 13 12:51:44.743902 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 13 12:51:44.743962 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 13 12:51:44.744049 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 13 12:51:44.744101 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 13 12:51:44.744153 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 13 12:51:44.744205 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 13 12:51:44.744255 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 13 12:51:44.744305 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 13 12:51:44.744355 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 13 12:51:44.744405 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 13 
12:51:44.744455 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 13 12:51:44.744505 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 13 12:51:44.744557 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 13 12:51:44.744607 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 13 12:51:44.744657 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 13 12:51:44.744707 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 13 12:51:44.744757 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 13 12:51:44.744807 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned May 13 12:51:44.744857 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned May 13 12:51:44.744906 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.744973 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745024 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745073 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745122 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745179 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745235 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745285 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745336 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745386 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745434 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745482 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745532 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745580 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745629 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745678 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745729 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745778 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745827 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.745875 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.745925 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.746507 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.746561 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.746613 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.746667 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.746718 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign May 13 12:51:44.746768 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.746817 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.746868 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.746917 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.748997 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749056 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749112 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749164 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749214 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749264 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749315 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749365 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749414 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749463 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749515 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749564 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749612 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749661 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749711 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749760 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749807 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.749868 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.749917 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750022 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750073 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750123 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750172 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750221 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750270 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750319 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750369 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750417 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750470 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750519 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750568 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space May 13 12:51:44.750617 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750665 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750714 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750780 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750828 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.750876 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.750928 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.751537 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.751608 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.751659 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.751709 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.751760 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.751828 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.751881 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.751966 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.752042 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.752092 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.752140 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.752188 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.752237 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.752285 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.752335 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:51:44.752386 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign May 13 12:51:44.752470 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 12:51:44.752520 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 13 12:51:44.752568 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 12:51:44.752635 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 12:51:44.752684 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 12:51:44.752736 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned May 13 12:51:44.753058 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 12:51:44.753116 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 12:51:44.753166 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 12:51:44.753216 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 13 12:51:44.753268 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 12:51:44.753319 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 12:51:44.753368 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 12:51:44.753417 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] May 13 12:51:44.753468 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 12:51:44.753518 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 12:51:44.753568 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 12:51:44.753620 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 12:51:44.753670 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 12:51:44.753719 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 12:51:44.753767 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 12:51:44.753816 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 12:51:44.753865 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 12:51:44.753913 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 12:51:44.753984 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 12:51:44.754035 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 12:51:44.754085 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 12:51:44.754135 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 12:51:44.754184 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 12:51:44.754234 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 12:51:44.754284 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 12:51:44.754333 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 12:51:44.754386 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 12:51:44.754440 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned May 13 12:51:44.754490 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 12:51:44.754539 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 12:51:44.754589 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 12:51:44.754638 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 13 12:51:44.754688 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 12:51:44.754738 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 12:51:44.754790 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 12:51:44.755166 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 12:51:44.755228 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 12:51:44.755282 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 12:51:44.755333 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 12:51:44.755384 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 12:51:44.755435 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 12:51:44.755486 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 12:51:44.755536 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 12:51:44.756212 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 12:51:44.756264 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 12:51:44.756315 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 12:51:44.756365 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 12:51:44.756415 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 12:51:44.756465 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 12:51:44.756515 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 12:51:44.756564 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 12:51:44.756616 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 12:51:44.756667 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 12:51:44.756716 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 12:51:44.756765 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 12:51:44.756816 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 12:51:44.756865 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 12:51:44.756920 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 12:51:44.757990 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 12:51:44.758043 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 12:51:44.758093 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 12:51:44.758143 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 12:51:44.758192 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 12:51:44.758243 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 12:51:44.758293 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 12:51:44.758342 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 12:51:44.758392 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 12:51:44.758441 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 12:51:44.758493 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 12:51:44.758542 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 12:51:44.758591 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 12:51:44.758640 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 12:51:44.758688 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 12:51:44.758738 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 12:51:44.758788 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 12:51:44.758837 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 12:51:44.758889 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 12:51:44.758945 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 12:51:44.758999 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 12:51:44.759049 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 12:51:44.759098 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 12:51:44.759147 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 12:51:44.759244 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 12:51:44.759496 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 12:51:44.759579 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 12:51:44.761983 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 12:51:44.762040 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 12:51:44.762101 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 12:51:44.762152 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
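
Annotation (not part of the boot log): the entries above show the kernel assigning BARs and bridge windows to each PCI function. As a minimal cross-check, the same assignments can be read back from sysfs on a running Linux host; this sketch uses only the standard /sys/bus/pci/devices/*/resource files, whose lines are "start end flags" in hexadecimal.

#!/usr/bin/env python3
import glob
import os

# Print every populated resource (BAR, ROM, or bridge window) per PCI function.
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    addr = os.path.basename(dev)  # e.g. 0000:00:15.0
    try:
        lines = open(os.path.join(dev, "resource")).read().splitlines()
    except OSError:
        continue
    for idx, line in enumerate(lines):
        start, end, flags = (int(field, 16) for field in line.split())
        if start == 0 and end == 0:
            continue  # unused resource slot
        print(f"{addr} resource {idx}: {start:#012x}-{end:#012x} flags {flags:#x}")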
May 13 12:51:44.762202 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 12:51:44.762253 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 12:51:44.762303 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 12:51:44.762355 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 12:51:44.762406 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 12:51:44.762455 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 12:51:44.762504 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 12:51:44.762554 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 12:51:44.762603 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 12:51:44.762652 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 12:51:44.762706 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 12:51:44.762754 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 12:51:44.762803 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 12:51:44.762854 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 12:51:44.762902 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 12:51:44.762970 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 12:51:44.763022 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 12:51:44.763074 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 12:51:44.763123 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 12:51:44.763173 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 13 12:51:44.763218 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 12:51:44.763272 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 12:51:44.763318 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 13 12:51:44.763360 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 13 12:51:44.763418 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 13 12:51:44.766556 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 13 12:51:44.766609 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 12:51:44.766658 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 13 12:51:44.767061 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 12:51:44.767121 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 12:51:44.767169 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 13 12:51:44.767215 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 13 12:51:44.767268 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 13 12:51:44.767314 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 13 12:51:44.767358 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 13 12:51:44.767408 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 13 12:51:44.767454 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 13 12:51:44.767499 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 13 12:51:44.767546 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 13 12:51:44.767608 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 
13 12:51:44.767667 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 13 12:51:44.767717 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 13 12:51:44.767762 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 13 12:51:44.767812 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 13 12:51:44.767858 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 12:51:44.767910 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 13 12:51:44.767969 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 13 12:51:44.768019 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 13 12:51:44.768065 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 13 12:51:44.768114 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 13 12:51:44.768159 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 13 12:51:44.768212 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 13 12:51:44.768258 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 13 12:51:44.768302 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 13 12:51:44.768350 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 13 12:51:44.768395 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 13 12:51:44.768440 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 13 12:51:44.768499 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 13 12:51:44.768545 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 13 12:51:44.768590 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 13 12:51:44.768638 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 13 12:51:44.768684 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 12:51:44.768734 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 13 12:51:44.768780 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 12:51:44.768831 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 13 12:51:44.768882 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 13 12:51:44.768951 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 13 12:51:44.769005 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 13 12:51:44.769061 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 13 12:51:44.769107 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 12:51:44.769159 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 13 12:51:44.769204 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 13 12:51:44.769249 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 12:51:44.769300 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 13 12:51:44.769345 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 13 12:51:44.769389 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 13 12:51:44.769449 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 13 12:51:44.769495 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 13 12:51:44.769539 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 13 12:51:44.769587 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 13 12:51:44.769633 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 12:51:44.769682 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 13 12:51:44.769727 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 12:51:44.769780 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 13 12:51:44.769825 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 13 12:51:44.769881 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 13 12:51:44.769926 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 13 12:51:44.769994 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 13 12:51:44.770040 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 12:51:44.770092 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 13 12:51:44.770137 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 13 12:51:44.770181 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 13 12:51:44.770232 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 13 12:51:44.770288 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 13 12:51:44.770340 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 13 12:51:44.770396 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 13 12:51:44.770446 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 13 12:51:44.770495 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 13 12:51:44.770540 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 12:51:44.770588 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 13 12:51:44.770633 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 13 12:51:44.770682 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 13 12:51:44.770730 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 13 12:51:44.770789 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 13 12:51:44.770835 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 13 12:51:44.770885 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 13 12:51:44.770930 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 12:51:44.771001 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 12:51:44.771012 kernel: PCI: CLS 32 bytes, default 64 May 13 12:51:44.771019 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 12:51:44.771025 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 12:51:44.771032 kernel: clocksource: Switched to clocksource tsc May 13 12:51:44.771038 kernel: Initialise system trusted keyrings May 13 12:51:44.771044 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 12:51:44.771050 kernel: Key type asymmetric registered May 13 12:51:44.771056 kernel: Asymmetric key parser 'x509' registered May 13 12:51:44.771062 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 12:51:44.771069 kernel: io scheduler mq-deadline registered May 13 12:51:44.771075 kernel: io scheduler kyber registered May 13 12:51:44.771081 kernel: io scheduler bfq 
registered May 13 12:51:44.771134 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 13 12:51:44.771185 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771237 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 13 12:51:44.771287 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771341 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 13 12:51:44.771391 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771444 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 13 12:51:44.771495 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771548 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 13 12:51:44.771599 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771650 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 13 12:51:44.771701 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771754 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 13 12:51:44.771803 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771854 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 13 12:51:44.771904 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.771972 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 13 12:51:44.772024 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772075 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 13 12:51:44.772128 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772178 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 13 12:51:44.772228 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772279 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 13 12:51:44.772329 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772389 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 13 12:51:44.772441 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772494 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 13 12:51:44.772544 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
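
Annotation: each pcieport line above registers a hot-plug slot (#160, #161, ...) on a downstream port. A small sysfs-only sketch that lists the slots pciehp registered and the PCI address behind each; the /sys/bus/pci/slots/*/address attribute is standard, the rest is illustrative.

#!/usr/bin/env python3
import glob
import os

for slot in sorted(glob.glob("/sys/bus/pci/slots/*")):
    name = os.path.basename(slot)  # slot number, e.g. "160"
    try:
        address = open(os.path.join(slot, "address")).read().strip()
    except OSError:
        address = "(no address attribute)"
    print(f"slot {name}: {address}")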
May 13 12:51:44.772595 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 13 12:51:44.772645 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772697 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 13 12:51:44.772747 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772797 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 13 12:51:44.772847 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.772901 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 13 12:51:44.772980 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773043 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 13 12:51:44.773104 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773156 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 13 12:51:44.773206 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773256 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 13 12:51:44.773309 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773361 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 13 12:51:44.773411 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773461 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 13 12:51:44.773512 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773563 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 13 12:51:44.773613 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773664 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 13 12:51:44.773716 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773767 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 13 12:51:44.773817 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773867 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 13 12:51:44.773917 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.773987 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 13 12:51:44.774039 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 
12:51:44.774092 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 13 12:51:44.774144 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.774195 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 13 12:51:44.774245 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.774296 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 13 12:51:44.774346 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.774396 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 13 12:51:44.774445 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:51:44.774457 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 12:51:44.774464 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 12:51:44.774471 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 12:51:44.774479 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 13 12:51:44.774486 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 12:51:44.774492 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 12:51:44.774542 kernel: rtc_cmos 00:01: registered as rtc0 May 13 12:51:44.774553 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 12:51:44.774598 kernel: rtc_cmos 00:01: setting system clock to 2025-05-13T12:51:44 UTC (1747140704) May 13 12:51:44.774643 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 13 12:51:44.774652 kernel: intel_pstate: CPU model not supported May 13 12:51:44.774659 kernel: NET: Registered PF_INET6 protocol family May 13 12:51:44.774665 kernel: Segment Routing with IPv6 May 13 12:51:44.774671 kernel: In-situ OAM (IOAM) with IPv6 May 13 12:51:44.774677 kernel: NET: Registered PF_PACKET protocol family May 13 12:51:44.774684 kernel: Key type dns_resolver registered May 13 12:51:44.774692 kernel: IPI shorthand broadcast: enabled May 13 12:51:44.774698 kernel: sched_clock: Marking stable (2523034537, 172757813)->(2769128264, -73335914) May 13 12:51:44.774704 kernel: registered taskstats version 1 May 13 12:51:44.774711 kernel: Loading compiled-in X.509 certificates May 13 12:51:44.774717 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: d81efc2839896c91a2830d4cfad7b0572af8b26a' May 13 12:51:44.774723 kernel: Demotion targets for Node 0: null May 13 12:51:44.774729 kernel: Key type .fscrypt registered May 13 12:51:44.774736 kernel: Key type fscrypt-provisioning registered May 13 12:51:44.774743 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 12:51:44.774750 kernel: ima: Allocated hash algorithm: sha1 May 13 12:51:44.774756 kernel: ima: No architecture policies found May 13 12:51:44.774762 kernel: clk: Disabling unused clocks May 13 12:51:44.774768 kernel: Warning: unable to open an initial console. 
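
Annotation: the rtc_cmos line above prints both the wall-clock time and the epoch value it programmed (1747140704). A one-liner confirms the two agree:

#!/usr/bin/env python3
from datetime import datetime, timezone

# 1747140704 is the epoch value from the "setting system clock" line above.
print(datetime.fromtimestamp(1747140704, tz=timezone.utc).isoformat())
# -> 2025-05-13T12:51:44+00:00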
May 13 12:51:44.774775 kernel: Freeing unused kernel image (initmem) memory: 54420K May 13 12:51:44.774781 kernel: Write protecting the kernel read-only data: 24576k May 13 12:51:44.774787 kernel: Freeing unused kernel image (rodata/data gap) memory: 292K May 13 12:51:44.774794 kernel: Run /init as init process May 13 12:51:44.774801 kernel: with arguments: May 13 12:51:44.774808 kernel: /init May 13 12:51:44.774814 kernel: with environment: May 13 12:51:44.774820 kernel: HOME=/ May 13 12:51:44.774826 kernel: TERM=linux May 13 12:51:44.774832 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 12:51:44.774839 systemd[1]: Successfully made /usr/ read-only. May 13 12:51:44.774848 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 12:51:44.774856 systemd[1]: Detected virtualization vmware. May 13 12:51:44.774862 systemd[1]: Detected architecture x86-64. May 13 12:51:44.774868 systemd[1]: Running in initrd. May 13 12:51:44.774874 systemd[1]: No hostname configured, using default hostname. May 13 12:51:44.774882 systemd[1]: Hostname set to . May 13 12:51:44.774888 systemd[1]: Initializing machine ID from random generator. May 13 12:51:44.774895 systemd[1]: Queued start job for default target initrd.target. May 13 12:51:44.774901 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:51:44.774909 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:51:44.774916 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 12:51:44.774923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 12:51:44.774929 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 12:51:44.775078 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 12:51:44.775086 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 12:51:44.775093 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 12:51:44.775101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:51:44.775107 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 12:51:44.775114 systemd[1]: Reached target paths.target - Path Units. May 13 12:51:44.775120 systemd[1]: Reached target slices.target - Slice Units. May 13 12:51:44.775126 systemd[1]: Reached target swap.target - Swaps. May 13 12:51:44.775133 systemd[1]: Reached target timers.target - Timer Units. May 13 12:51:44.775139 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 12:51:44.775146 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 12:51:44.775152 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 12:51:44.775363 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 12:51:44.775373 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
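
Annotation: systemd derives the device unit names above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) from the corresponding /dev paths. A rough sketch of that escaping, following the documented rules ("/" becomes "-", unsafe bytes become \xNN); corner cases such as a leading dot are ignored, and real tooling would call systemd-escape --path instead.

#!/usr/bin/env python3
import string

_SAFE = set(string.ascii_letters + string.digits + "_.")

def escape_path(path: str, suffix: str = ".device") -> str:
    # Approximate systemd path escaping: trim slashes at the ends,
    # map "/" to "-", hex-escape everything outside [A-Za-z0-9_.].
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in _SAFE:
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + suffix

print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device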
May 13 12:51:44.775380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 12:51:44.775387 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:51:44.775393 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:51:44.775400 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 12:51:44.775407 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 12:51:44.775413 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 12:51:44.775422 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 13 12:51:44.775429 systemd[1]: Starting systemd-fsck-usr.service... May 13 12:51:44.775436 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 12:51:44.775442 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 12:51:44.775449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:51:44.775468 systemd-journald[244]: Collecting audit messages is disabled. May 13 12:51:44.775488 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 12:51:44.775495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:51:44.775503 systemd[1]: Finished systemd-fsck-usr.service. May 13 12:51:44.775510 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 12:51:44.775517 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 12:51:44.775523 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 12:51:44.775530 kernel: Bridge firewalling registered May 13 12:51:44.775536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 12:51:44.775543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 12:51:44.775550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:51:44.775558 systemd-journald[244]: Journal started May 13 12:51:44.775574 systemd-journald[244]: Runtime Journal (/run/log/journal/71c21a1451964efab5d733bcd156fe79) is 4.8M, max 38.8M, 34M free. May 13 12:51:44.742009 systemd-modules-load[245]: Inserted module 'overlay' May 13 12:51:44.767657 systemd-modules-load[245]: Inserted module 'br_netfilter' May 13 12:51:44.776949 systemd[1]: Started systemd-journald.service - Journal Service. May 13 12:51:44.779591 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 12:51:44.781001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:51:44.781578 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 12:51:44.782025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:51:44.788874 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 13 12:51:44.789870 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:51:44.790858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
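
Annotation: systemd-modules-load reports inserting 'overlay' and 'br_netfilter' above (and the kernel notes that bridge filtering now requires br_netfilter). A quick check that both are present, using /proc/modules:

#!/usr/bin/env python3
# The first column of /proc/modules is the module name.
wanted = {"overlay", "br_netfilter"}
with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}
for name in sorted(wanted):
    print(f"{name}: {'loaded' if name in loaded else 'missing'}")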
May 13 12:51:44.793022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 12:51:44.800269 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 12:51:44.801462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 12:51:44.815338 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1 May 13 12:51:44.822726 systemd-resolved[274]: Positive Trust Anchors: May 13 12:51:44.822733 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 12:51:44.822756 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 12:51:44.824813 systemd-resolved[274]: Defaulting to hostname 'linux'. May 13 12:51:44.825415 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 12:51:44.825560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 12:51:44.867958 kernel: SCSI subsystem initialized May 13 12:51:44.874956 kernel: Loading iSCSI transport class v2.0-870. May 13 12:51:44.883049 kernel: iscsi: registered transport (tcp) May 13 12:51:44.898114 kernel: iscsi: registered transport (qla4xxx) May 13 12:51:44.898159 kernel: QLogic iSCSI HBA Driver May 13 12:51:44.909278 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 12:51:44.921071 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:51:44.922114 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 12:51:44.945119 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 12:51:44.945913 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 12:51:44.983960 kernel: raid6: avx2x4 gen() 44083 MB/s May 13 12:51:45.000959 kernel: raid6: avx2x2 gen() 44717 MB/s May 13 12:51:45.018167 kernel: raid6: avx2x1 gen() 43779 MB/s May 13 12:51:45.018210 kernel: raid6: using algorithm avx2x2 gen() 44717 MB/s May 13 12:51:45.036185 kernel: raid6: .... xor() 31453 MB/s, rmw enabled May 13 12:51:45.036244 kernel: raid6: using avx2x2 recovery algorithm May 13 12:51:45.050960 kernel: xor: automatically using best checksumming function avx May 13 12:51:45.156960 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 12:51:45.161019 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 12:51:45.161961 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
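
Annotation: dracut echoes the kernel command line it will act on. A tiny helper that splits /proc/cmdline into key/value arguments the way the log presents them (plain whitespace split; quoting is not handled):

#!/usr/bin/env python3
with open("/proc/cmdline") as f:
    args = f.read().split()
for arg in args:
    key, sep, value = arg.partition("=")
    print(f"{key:<20} {value if sep else '(flag)'}")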
May 13 12:51:45.181733 systemd-udevd[493]: Using default interface naming scheme 'v255'. May 13 12:51:45.185571 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:51:45.186826 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 12:51:45.200707 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation May 13 12:51:45.215855 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 12:51:45.216658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 12:51:45.299451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:51:45.301180 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 12:51:45.374492 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 13 12:51:45.374526 kernel: vmw_pvscsi: using 64bit dma May 13 12:51:45.374534 kernel: vmw_pvscsi: max_id: 16 May 13 12:51:45.374541 kernel: vmw_pvscsi: setting ring_pages to 8 May 13 12:51:45.383943 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI May 13 12:51:45.384949 kernel: vmw_pvscsi: enabling reqCallThreshold May 13 12:51:45.384966 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 13 12:51:45.385065 kernel: vmw_pvscsi: driver-based request coalescing enabled May 13 12:51:45.386947 kernel: vmw_pvscsi: using MSI-X May 13 12:51:45.389948 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 13 12:51:45.397953 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 13 12:51:45.403958 kernel: libata version 3.00 loaded. May 13 12:51:45.403993 kernel: cryptd: max_cpu_qlen set to 1000 May 13 12:51:45.410954 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 13 12:51:45.414370 kernel: ata_piix 0000:00:07.1: version 2.13 May 13 12:51:45.414515 kernel: scsi host1: ata_piix May 13 12:51:45.414716 (udev-worker)[552]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 13 12:51:45.416007 kernel: AES CTR mode by8 optimization enabled May 13 12:51:45.427873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 12:51:45.428407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:51:45.431984 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 13 12:51:45.432958 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 13 12:51:45.432986 kernel: scsi host2: ata_piix May 13 12:51:45.431612 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:51:45.433690 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
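
Annotation: the vmxnet3 NIC comes up as eth0 and is immediately renamed to ens192 by udev's predictable-name logic. A sysfs-only sketch that maps each interface to the kernel driver behind it (interfaces without a backing device, such as lo, are reported as virtual):

#!/usr/bin/env python3
import glob
import os

for path in sorted(glob.glob("/sys/class/net/*")):
    name = os.path.basename(path)
    driver_link = os.path.join(path, "device", "driver")
    if os.path.islink(driver_link):
        driver = os.path.basename(os.readlink(driver_link))
    else:
        driver = "(virtual)"
    print(f"{name}: {driver}")  # e.g. ens192: vmxnet3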
May 13 12:51:45.435942 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 May 13 12:51:45.438953 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 May 13 12:51:45.438985 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 May 13 12:51:45.446285 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 13 12:51:45.446413 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 12:51:45.446483 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 13 12:51:45.446548 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 13 12:51:45.447752 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 13 12:51:45.455956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:51:45.456948 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 12:51:45.462016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:51:45.601988 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 13 12:51:45.605966 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 13 12:51:45.631411 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 13 12:51:45.631550 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 12:51:45.639980 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 12:51:45.688131 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 13 12:51:45.695089 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 12:51:45.702284 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 13 12:51:45.708098 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 13 12:51:45.708295 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 13 12:51:45.709192 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 12:51:45.743973 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:51:45.755953 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:51:45.893278 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 12:51:45.904066 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 12:51:45.904217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:51:45.904420 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 12:51:45.905151 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 12:51:45.918731 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 12:51:46.788922 disk-uuid[650]: The operation has completed successfully. May 13 12:51:46.789165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:51:46.830478 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 12:51:46.830538 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 12:51:46.853920 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 12:51:46.863816 sh[679]: Success May 13 12:51:46.876242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
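
Annotation: the sd line above reports 17805312 512-byte logical blocks and summarises that as 9.12 GB / 8.49 GiB. The arithmetic checks out:

#!/usr/bin/env python3
blocks, block_size = 17805312, 512
size = blocks * block_size
print(f"{size} bytes = {size / 1e9:.2f} GB = {size / 2**30:.2f} GiB")
# 9116319744 bytes = 9.12 GB = 8.49 GiB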
May 13 12:51:46.876284 kernel: device-mapper: uevent: version 1.0.3 May 13 12:51:46.877753 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 13 12:51:46.884944 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 13 12:51:46.959083 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 12:51:46.960004 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 12:51:46.966096 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 12:51:46.978958 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 13 12:51:46.980950 kernel: BTRFS: device fsid 3042589c-b63f-42f0-9a6f-a4369b1889f9 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (691) May 13 12:51:46.985980 kernel: BTRFS info (device dm-0): first mount of filesystem 3042589c-b63f-42f0-9a6f-a4369b1889f9 May 13 12:51:46.986004 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 12:51:46.987491 kernel: BTRFS info (device dm-0): using free-space-tree May 13 12:51:47.024665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 12:51:47.025128 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 13 12:51:47.025820 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 13 12:51:47.027683 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 12:51:47.051947 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (714) May 13 12:51:47.056729 kernel: BTRFS info (device sda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:51:47.056758 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:51:47.056771 kernel: BTRFS info (device sda6): using free-space-tree May 13 12:51:47.075961 kernel: BTRFS info (device sda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:51:47.076658 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 12:51:47.077626 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 12:51:47.115188 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 12:51:47.116463 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 12:51:47.190018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 12:51:47.191208 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 12:51:47.219522 systemd-networkd[865]: lo: Link UP May 13 12:51:47.220299 systemd-networkd[865]: lo: Gained carrier May 13 12:51:47.221194 systemd-networkd[865]: Enumeration completed May 13 12:51:47.221264 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 12:51:47.221431 systemd[1]: Reached target network.target - Network. May 13 12:51:47.221691 systemd-networkd[865]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
May 13 12:51:47.224079 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 12:51:47.224229 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 12:51:47.225280 systemd-networkd[865]: ens192: Link UP May 13 12:51:47.225284 systemd-networkd[865]: ens192: Gained carrier May 13 12:51:47.270969 ignition[733]: Ignition 2.21.0 May 13 12:51:47.270979 ignition[733]: Stage: fetch-offline May 13 12:51:47.271005 ignition[733]: no configs at "/usr/lib/ignition/base.d" May 13 12:51:47.271011 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:51:47.271065 ignition[733]: parsed url from cmdline: "" May 13 12:51:47.271067 ignition[733]: no config URL provided May 13 12:51:47.271071 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" May 13 12:51:47.271075 ignition[733]: no config at "/usr/lib/ignition/user.ign" May 13 12:51:47.271438 ignition[733]: config successfully fetched May 13 12:51:47.271457 ignition[733]: parsing config with SHA512: b0288cd5a0069386aafb1fb61fc76095b51a6368dbb51b065e49e5a5f96cab2b3296c2a8965baf877962aa293b97bd0e2f739b35e3384e24d674e9dc850c0f35 May 13 12:51:47.275479 unknown[733]: fetched base config from "system" May 13 12:51:47.275724 ignition[733]: fetch-offline: fetch-offline passed May 13 12:51:47.275486 unknown[733]: fetched user config from "vmware" May 13 12:51:47.275760 ignition[733]: Ignition finished successfully May 13 12:51:47.276567 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 12:51:47.276947 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 12:51:47.277455 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 12:51:47.293060 ignition[876]: Ignition 2.21.0 May 13 12:51:47.293069 ignition[876]: Stage: kargs May 13 12:51:47.293171 ignition[876]: no configs at "/usr/lib/ignition/base.d" May 13 12:51:47.293177 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:51:47.293743 ignition[876]: kargs: kargs passed May 13 12:51:47.293774 ignition[876]: Ignition finished successfully May 13 12:51:47.295461 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 12:51:47.296259 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 12:51:47.311594 ignition[882]: Ignition 2.21.0 May 13 12:51:47.311604 ignition[882]: Stage: disks May 13 12:51:47.311683 ignition[882]: no configs at "/usr/lib/ignition/base.d" May 13 12:51:47.311689 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:51:47.312311 ignition[882]: disks: disks passed May 13 12:51:47.312343 ignition[882]: Ignition finished successfully May 13 12:51:47.313333 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 12:51:47.313649 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 12:51:47.313889 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 12:51:47.314142 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 12:51:47.314344 systemd[1]: Reached target sysinit.target - System Initialization. May 13 12:51:47.314555 systemd[1]: Reached target basic.target - Basic System. May 13 12:51:47.315296 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 13 12:51:47.402607 systemd-fsck[891]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 13 12:51:47.403957 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 12:51:47.405808 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 12:51:47.599061 kernel: EXT4-fs (sda9): mounted filesystem ebf7ca75-051f-4154-b098-5ec24084105d r/w with ordered data mode. Quota mode: none. May 13 12:51:47.599473 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 12:51:47.599848 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 12:51:47.603714 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 12:51:47.605994 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 12:51:47.606426 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 12:51:47.606603 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 12:51:47.606620 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 12:51:47.614965 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 12:51:47.615782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 12:51:47.687965 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (899) May 13 12:51:47.691154 kernel: BTRFS info (device sda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:51:47.691204 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:51:47.691212 kernel: BTRFS info (device sda6): using free-space-tree May 13 12:51:47.698925 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 12:51:47.706479 initrd-setup-root[923]: cut: /sysroot/etc/passwd: No such file or directory May 13 12:51:47.714755 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory May 13 12:51:47.718095 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory May 13 12:51:47.720764 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory May 13 12:51:47.838139 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 12:51:47.838975 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 12:51:47.840038 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 12:51:47.852950 kernel: BTRFS info (device sda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:51:47.864384 ignition[1012]: INFO : Ignition 2.21.0 May 13 12:51:47.864384 ignition[1012]: INFO : Stage: mount May 13 12:51:47.864733 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:51:47.864733 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:51:47.865593 ignition[1012]: INFO : mount: mount passed May 13 12:51:47.866225 ignition[1012]: INFO : Ignition finished successfully May 13 12:51:47.866683 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 12:51:47.868013 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 12:51:47.875806 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
May 13 12:51:47.956119 systemd-resolved[274]: Detected conflict on linux IN A 139.178.70.104 May 13 12:51:47.956129 systemd-resolved[274]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. May 13 12:51:47.979201 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 12:51:47.980449 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 12:51:48.018948 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (1024) May 13 12:51:48.023157 kernel: BTRFS info (device sda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:51:48.023205 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:51:48.023228 kernel: BTRFS info (device sda6): using free-space-tree May 13 12:51:48.028838 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 12:51:48.051440 ignition[1041]: INFO : Ignition 2.21.0 May 13 12:51:48.051440 ignition[1041]: INFO : Stage: files May 13 12:51:48.051914 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:51:48.051914 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:51:48.052496 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping May 13 12:51:48.061026 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 12:51:48.061285 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 12:51:48.073658 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 12:51:48.073943 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 12:51:48.074182 unknown[1041]: wrote ssh authorized keys file for user: core May 13 12:51:48.074485 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 12:51:48.090099 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 12:51:48.090099 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 12:51:48.176751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 12:51:48.355030 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 12:51:48.355030 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 12:51:48.355449 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 13 12:51:48.707803 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 12:51:48.748360 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 12:51:48.748607 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 12:51:48.748607 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 12:51:48.748607 ignition[1041]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 12:51:48.748607 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 12:51:48.748607 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 12:51:48.749413 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 12:51:48.749413 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 12:51:48.749413 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 12:51:48.749934 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 12:51:48.750100 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 12:51:48.750100 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 12:51:48.752233 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 12:51:48.752456 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 12:51:48.752456 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 13 12:51:49.047248 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 12:51:49.214036 systemd-networkd[865]: ens192: Gained IPv6LL May 13 12:51:49.321682 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 13 12:51:49.322415 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 12:51:49.323185 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 12:51:49.323185 ignition[1041]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 13 12:51:49.323902 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 12:51:49.324478 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 12:51:49.324478 ignition[1041]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 13 12:51:49.324478 ignition[1041]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" May 13 12:51:49.324478 ignition[1041]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" May 13 12:51:49.324478 ignition[1041]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 12:51:49.324478 ignition[1041]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" May 13 12:51:49.324478 ignition[1041]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 13 12:51:49.352471 ignition[1041]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 12:51:49.355472 ignition[1041]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 12:51:49.355686 ignition[1041]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 13 12:51:49.355686 ignition[1041]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" May 13 12:51:49.355686 ignition[1041]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 13 12:51:49.355686 ignition[1041]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 12:51:49.357004 ignition[1041]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 12:51:49.357004 ignition[1041]: INFO : files: files passed May 13 12:51:49.357004 ignition[1041]: INFO : Ignition finished successfully May 13 12:51:49.356613 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 12:51:49.358040 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 12:51:49.358689 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 12:51:49.368441 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 12:51:49.368508 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 12:51:49.371979 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 12:51:49.371979 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 12:51:49.372876 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 12:51:49.373738 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 12:51:49.374099 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 12:51:49.374807 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 12:51:49.411285 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 12:51:49.411365 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 12:51:49.411646 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 12:51:49.411759 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 12:51:49.411965 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 12:51:49.412458 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 12:51:49.421237 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 13 12:51:49.422290 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 12:51:49.439109 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 12:51:49.439418 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:51:49.439758 systemd[1]: Stopped target timers.target - Timer Units. May 13 12:51:49.440016 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 12:51:49.440096 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 12:51:49.440592 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 12:51:49.440820 systemd[1]: Stopped target basic.target - Basic System. May 13 12:51:49.441106 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 12:51:49.441371 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 12:51:49.441616 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 12:51:49.441900 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 13 12:51:49.442198 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 12:51:49.442430 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 12:51:49.442733 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 12:51:49.443018 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 12:51:49.443255 systemd[1]: Stopped target swap.target - Swaps. May 13 12:51:49.443483 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 12:51:49.443657 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 12:51:49.444041 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 12:51:49.444287 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:51:49.444570 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 12:51:49.444721 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:51:49.444998 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 12:51:49.445072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 12:51:49.445529 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 12:51:49.445698 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 12:51:49.446001 systemd[1]: Stopped target paths.target - Path Units. May 13 12:51:49.446256 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 12:51:49.449981 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:51:49.450181 systemd[1]: Stopped target slices.target - Slice Units. May 13 12:51:49.450445 systemd[1]: Stopped target sockets.target - Socket Units. May 13 12:51:49.450632 systemd[1]: iscsid.socket: Deactivated successfully. May 13 12:51:49.450690 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 12:51:49.450838 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 12:51:49.450884 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 12:51:49.451058 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
May 13 12:51:49.451130 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 12:51:49.451384 systemd[1]: ignition-files.service: Deactivated successfully. May 13 12:51:49.451445 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 12:51:49.453043 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 12:51:49.453621 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 12:51:49.453723 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 12:51:49.453790 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:51:49.453960 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 12:51:49.454019 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 12:51:49.457309 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 12:51:49.464134 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 12:51:49.471022 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 12:51:49.472439 ignition[1097]: INFO : Ignition 2.21.0 May 13 12:51:49.472700 ignition[1097]: INFO : Stage: umount May 13 12:51:49.472953 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:51:49.473158 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:51:49.475228 ignition[1097]: INFO : umount: umount passed May 13 12:51:49.476133 ignition[1097]: INFO : Ignition finished successfully May 13 12:51:49.477132 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 12:51:49.477326 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 12:51:49.477662 systemd[1]: Stopped target network.target - Network. May 13 12:51:49.477870 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 12:51:49.478003 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 12:51:49.478235 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 12:51:49.478257 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 12:51:49.478462 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 12:51:49.478486 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 12:51:49.478765 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 12:51:49.478788 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 12:51:49.479149 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 12:51:49.479385 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 12:51:49.484949 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 12:51:49.485013 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 12:51:49.486633 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 12:51:49.486869 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 12:51:49.486913 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:51:49.488034 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 12:51:49.488178 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 12:51:49.488248 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
May 13 12:51:49.489165 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 12:51:49.489376 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 13 12:51:49.489566 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 12:51:49.489586 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 12:51:49.490237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 12:51:49.490324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 12:51:49.490355 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 12:51:49.490478 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 13 12:51:49.490498 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 12:51:49.490608 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 12:51:49.490628 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 12:51:49.490865 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 12:51:49.490886 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 12:51:49.492004 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 12:51:49.492587 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 12:51:49.499264 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 12:51:49.499413 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:51:49.499919 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 12:51:49.500017 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 12:51:49.500406 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 12:51:49.500441 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 12:51:49.500577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 12:51:49.500593 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:51:49.500758 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 12:51:49.500783 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 12:51:49.501077 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 12:51:49.501102 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 12:51:49.501411 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 12:51:49.501435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 12:51:49.502223 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 12:51:49.502332 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 13 12:51:49.502357 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:51:49.502532 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 12:51:49.502556 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:51:49.502837 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
May 13 12:51:49.502860 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 12:51:49.503867 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 12:51:49.503893 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:51:49.504187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 12:51:49.504210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:51:49.518241 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 12:51:49.518486 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 12:51:49.595479 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 12:51:49.595545 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 12:51:49.595852 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 12:51:49.595982 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 12:51:49.596010 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 12:51:49.596568 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 12:51:49.614484 systemd[1]: Switching root. May 13 12:51:49.643895 systemd-journald[244]: Journal stopped May 13 12:51:51.581041 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). May 13 12:51:51.581070 kernel: SELinux: policy capability network_peer_controls=1 May 13 12:51:51.581079 kernel: SELinux: policy capability open_perms=1 May 13 12:51:51.581085 kernel: SELinux: policy capability extended_socket_class=1 May 13 12:51:51.581090 kernel: SELinux: policy capability always_check_network=0 May 13 12:51:51.581098 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 12:51:51.581104 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 12:51:51.581110 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 12:51:51.581116 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 12:51:51.581122 kernel: SELinux: policy capability userspace_initial_context=0 May 13 12:51:51.581128 systemd[1]: Successfully loaded SELinux policy in 33.660ms. May 13 12:51:51.581136 kernel: audit: type=1403 audit(1747140711.008:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 12:51:51.581144 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.496ms. May 13 12:51:51.581152 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 12:51:51.581159 systemd[1]: Detected virtualization vmware. May 13 12:51:51.581165 systemd[1]: Detected architecture x86-64. May 13 12:51:51.581173 systemd[1]: Detected first boot. May 13 12:51:51.581180 systemd[1]: Initializing machine ID from random generator. May 13 12:51:51.581187 zram_generator::config[1141]: No configuration found. 
May 13 12:51:51.581290 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 13 12:51:51.581302 kernel: Guest personality initialized and is active May 13 12:51:51.581967 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 12:51:51.581978 kernel: Initialized host personality May 13 12:51:51.581987 kernel: NET: Registered PF_VSOCK protocol family May 13 12:51:51.581996 systemd[1]: Populated /etc with preset unit settings. May 13 12:51:51.582004 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:51:51.582012 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 13 12:51:51.582019 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 12:51:51.582026 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 12:51:51.582032 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 12:51:51.582040 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 12:51:51.582048 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 12:51:51.582055 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 12:51:51.582061 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 12:51:51.582068 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 12:51:51.582075 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 12:51:51.582082 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 12:51:51.582090 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 12:51:51.582097 systemd[1]: Created slice user.slice - User and Session Slice. May 13 12:51:51.582104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:51:51.582112 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:51:51.582119 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 12:51:51.582126 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 12:51:51.582133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 12:51:51.582141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 12:51:51.582149 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 12:51:51.582156 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:51:51.582163 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 12:51:51.582170 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 12:51:51.582176 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 12:51:51.582183 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 12:51:51.582190 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 12:51:51.582197 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 13 12:51:51.582205 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 12:51:51.582212 systemd[1]: Reached target slices.target - Slice Units. May 13 12:51:51.582219 systemd[1]: Reached target swap.target - Swaps. May 13 12:51:51.582227 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 12:51:51.582234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 12:51:51.583644 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 12:51:51.583656 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 12:51:51.583663 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 12:51:51.583670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:51:51.583677 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 12:51:51.583685 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 12:51:51.583692 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 12:51:51.583699 systemd[1]: Mounting media.mount - External Media Directory... May 13 12:51:51.583708 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:51.583716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 12:51:51.583723 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 12:51:51.583731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 12:51:51.583738 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 12:51:51.583746 systemd[1]: Reached target machines.target - Containers. May 13 12:51:51.583753 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 12:51:51.583760 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 13 12:51:51.583768 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 12:51:51.583775 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 12:51:51.583782 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:51:51.583789 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 12:51:51.583796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 12:51:51.583810 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 12:51:51.583819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 12:51:51.583826 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 12:51:51.583834 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 12:51:51.583841 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 12:51:51.583849 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 12:51:51.583857 systemd[1]: Stopped systemd-fsck-usr.service. 
May 13 12:51:51.583869 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:51:51.583881 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 12:51:51.583892 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 12:51:51.583904 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 12:51:51.583914 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 12:51:51.583923 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 12:51:51.583930 kernel: ACPI: bus type drm_connector registered May 13 12:51:51.584964 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 12:51:51.584974 systemd[1]: verity-setup.service: Deactivated successfully. May 13 12:51:51.584982 systemd[1]: Stopped verity-setup.service. May 13 12:51:51.584989 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:51.584996 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 12:51:51.585003 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 12:51:51.585034 systemd-journald[1231]: Collecting audit messages is disabled. May 13 12:51:51.585052 systemd[1]: Mounted media.mount - External Media Directory. May 13 12:51:51.585060 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 12:51:51.585067 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 12:51:51.585075 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 12:51:51.585082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:51:51.585089 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 12:51:51.585097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 12:51:51.585104 systemd-journald[1231]: Journal started May 13 12:51:51.585119 systemd-journald[1231]: Runtime Journal (/run/log/journal/e21f8bfc5d7542abbba143c6544d0402) is 4.8M, max 38.8M, 34M free. May 13 12:51:51.398594 systemd[1]: Queued start job for default target multi-user.target. May 13 12:51:51.411210 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 13 12:51:51.411454 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 12:51:51.587166 jq[1211]: true May 13 12:51:51.588557 systemd[1]: Started systemd-journald.service - Journal Service. May 13 12:51:51.589560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:51:51.589673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:51:51.589976 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 12:51:51.592068 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:51:51.594608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:51:51.599646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 13 12:51:51.605950 jq[1247]: true May 13 12:51:51.607849 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 12:51:51.608248 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 12:51:51.608982 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 12:51:51.609307 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:51:51.609589 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 12:51:51.620491 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 12:51:51.622675 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 12:51:51.625226 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 12:51:51.625388 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 12:51:51.625414 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 12:51:51.626150 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 12:51:51.631967 kernel: fuse: init (API version 7.41) May 13 12:51:51.636502 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 12:51:51.636710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:51:51.644181 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 12:51:51.649199 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 12:51:51.649376 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 12:51:51.656349 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 12:51:51.657868 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 12:51:51.659375 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 12:51:51.659758 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 12:51:51.660464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:51:51.661107 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 12:51:51.663365 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 12:51:51.664949 kernel: loop: module loaded May 13 12:51:51.666040 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 12:51:51.667278 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 12:51:51.667576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 12:51:51.668109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 12:51:51.678271 systemd-journald[1231]: Time spent on flushing to /var/log/journal/e21f8bfc5d7542abbba143c6544d0402 is 49.039ms for 1763 entries. May 13 12:51:51.678271 systemd-journald[1231]: System Journal (/var/log/journal/e21f8bfc5d7542abbba143c6544d0402) is 8M, max 584.8M, 576.8M free. 
May 13 12:51:51.759442 systemd-journald[1231]: Received client request to flush runtime journal. May 13 12:51:51.759480 kernel: loop0: detected capacity change from 0 to 2960 May 13 12:51:51.676803 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 12:51:51.684295 ignition[1268]: Ignition 2.21.0 May 13 12:51:51.700132 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. May 13 12:51:51.684514 ignition[1268]: deleting config from guestinfo properties May 13 12:51:51.700141 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. May 13 12:51:51.701671 ignition[1268]: Successfully deleted config May 13 12:51:51.704419 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 13 12:51:51.707101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 12:51:51.711254 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 12:51:51.718601 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 12:51:51.718851 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 12:51:51.720210 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 12:51:51.761227 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 12:51:51.776584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:51:51.789993 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 12:51:51.803066 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 12:51:51.810271 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 12:51:51.812039 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 12:51:51.816990 kernel: loop1: detected capacity change from 0 to 205544 May 13 12:51:51.831816 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. May 13 12:51:51.831836 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. May 13 12:51:51.835144 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:51:51.866971 kernel: loop2: detected capacity change from 0 to 146240 May 13 12:51:51.921994 kernel: loop3: detected capacity change from 0 to 113872 May 13 12:51:51.990955 kernel: loop4: detected capacity change from 0 to 2960 May 13 12:51:52.010017 kernel: loop5: detected capacity change from 0 to 205544 May 13 12:51:52.043953 kernel: loop6: detected capacity change from 0 to 146240 May 13 12:51:52.067954 kernel: loop7: detected capacity change from 0 to 113872 May 13 12:51:52.088257 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 13 12:51:52.088529 (sd-merge)[1317]: Merged extensions into '/usr'. May 13 12:51:52.094189 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)... May 13 12:51:52.094290 systemd[1]: Reloading... May 13 12:51:52.145968 zram_generator::config[1343]: No configuration found. May 13 12:51:52.279676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 12:51:52.289531 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:51:52.335668 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 12:51:52.336047 systemd[1]: Reloading finished in 241 ms. May 13 12:51:52.359624 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 12:51:52.367060 systemd[1]: Starting ensure-sysext.service... May 13 12:51:52.368520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 12:51:52.387129 systemd[1]: Reload requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)... May 13 12:51:52.387221 systemd[1]: Reloading... May 13 12:51:52.394821 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 13 12:51:52.394844 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 13 12:51:52.395024 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 12:51:52.395184 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 12:51:52.397225 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 12:51:52.397404 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. May 13 12:51:52.397454 systemd-tmpfiles[1399]: ACLs are not supported, ignoring. May 13 12:51:52.403644 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. May 13 12:51:52.403651 systemd-tmpfiles[1399]: Skipping /boot May 13 12:51:52.411544 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot. May 13 12:51:52.411552 systemd-tmpfiles[1399]: Skipping /boot May 13 12:51:52.438947 zram_generator::config[1424]: No configuration found. May 13 12:51:52.472746 ldconfig[1285]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 12:51:52.519300 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:51:52.527540 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:51:52.573796 systemd[1]: Reloading finished in 186 ms. May 13 12:51:52.584362 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 12:51:52.584695 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 12:51:52.587598 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:51:52.592117 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:51:52.599028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 12:51:52.600137 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 12:51:52.602055 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 13 12:51:52.604627 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 12:51:52.609033 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 12:51:52.614060 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:52.615118 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:51:52.618930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 12:51:52.620092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 12:51:52.620450 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:51:52.620517 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:51:52.622731 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 12:51:52.623048 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:52.627232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:52.627351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:51:52.627436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:51:52.627522 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:52.632279 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 12:51:52.635890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:52.639348 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 12:51:52.639562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:51:52.639632 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:51:52.639731 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:51:52.641421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:51:52.641551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:51:52.642554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 12:51:52.645208 systemd[1]: Finished ensure-sysext.service. 
May 13 12:51:52.648763 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 12:51:52.652149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:51:52.652291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:51:52.660097 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 12:51:52.662244 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 12:51:52.663869 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 12:51:52.664235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 12:51:52.664583 systemd-udevd[1491]: Using default interface naming scheme 'v255'. May 13 12:51:52.664685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 12:51:52.673859 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 12:51:52.675018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 12:51:52.678003 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 12:51:52.688842 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 12:51:52.691645 augenrules[1526]: No rules May 13 12:51:52.693177 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:51:52.693338 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 12:51:52.695923 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:51:52.700042 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 12:51:52.706197 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 12:51:52.706443 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 12:51:52.773863 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 12:51:52.891673 systemd-networkd[1540]: lo: Link UP May 13 12:51:52.891679 systemd-networkd[1540]: lo: Gained carrier May 13 12:51:52.892506 systemd-networkd[1540]: Enumeration completed May 13 12:51:52.892571 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 12:51:52.892724 systemd-networkd[1540]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 13 12:51:52.894113 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 12:51:52.899024 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 12:51:52.899728 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 12:51:52.899279 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 12:51:52.899481 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 12:51:52.899634 systemd[1]: Reached target time-set.target - System Time Set. May 13 12:51:52.902412 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 13 12:51:52.901735 systemd-resolved[1490]: Positive Trust Anchors: May 13 12:51:52.901743 systemd-resolved[1490]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 12:51:52.901766 systemd-resolved[1490]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 12:51:52.903926 systemd-networkd[1540]: ens192: Link UP May 13 12:51:52.904960 kernel: mousedev: PS/2 mouse device common for all mice May 13 12:51:52.905271 systemd-networkd[1540]: ens192: Gained carrier May 13 12:51:52.908312 systemd-resolved[1490]: Defaulting to hostname 'linux'. May 13 12:51:52.911849 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 12:51:52.912160 systemd-timesyncd[1510]: Network configuration changed, trying to establish connection. May 13 12:51:52.912759 systemd[1]: Reached target network.target - Network. May 13 12:51:52.912856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 12:51:52.916954 kernel: ACPI: button: Power Button [PWRF] May 13 12:51:52.912996 systemd[1]: Reached target sysinit.target - System Initialization. May 13 12:51:52.913154 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 12:51:52.913282 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 12:51:52.913396 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 13 12:51:52.913799 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 12:51:52.914158 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 12:51:52.914278 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 12:51:52.914455 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 12:51:52.914471 systemd[1]: Reached target paths.target - Path Units. May 13 12:51:52.914693 systemd[1]: Reached target timers.target - Timer Units. May 13 12:51:52.915792 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 12:51:52.916838 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 12:51:52.920066 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 12:51:52.920313 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 12:51:52.920436 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 12:51:52.924238 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 12:51:52.924799 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 12:51:52.925333 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 12:51:52.926724 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:51:52.927398 systemd[1]: Reached target basic.target - Basic System. 
May 13 12:51:52.927749 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 12:51:52.927769 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 12:51:52.929751 systemd[1]: Starting containerd.service - containerd container runtime... May 13 12:51:52.931368 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 12:51:52.933454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 12:51:52.935994 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 12:51:52.941614 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 12:51:52.941987 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 12:51:52.943818 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 13 12:51:52.947203 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 12:51:52.950637 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 12:51:52.956001 jq[1586]: false May 13 12:51:52.958907 extend-filesystems[1587]: Found loop4 May 13 12:51:52.959257 extend-filesystems[1587]: Found loop5 May 13 12:51:52.960273 extend-filesystems[1587]: Found loop6 May 13 12:51:52.960273 extend-filesystems[1587]: Found loop7 May 13 12:51:52.960273 extend-filesystems[1587]: Found sda May 13 12:51:52.960273 extend-filesystems[1587]: Found sda1 May 13 12:51:52.960835 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 12:51:52.963222 extend-filesystems[1587]: Found sda2 May 13 12:51:52.963222 extend-filesystems[1587]: Found sda3 May 13 12:51:52.963222 extend-filesystems[1587]: Found usr May 13 12:51:52.963222 extend-filesystems[1587]: Found sda4 May 13 12:51:52.963222 extend-filesystems[1587]: Found sda6 May 13 12:51:52.963222 extend-filesystems[1587]: Found sda7 May 13 12:51:52.963222 extend-filesystems[1587]: Found sda9 May 13 12:51:52.963222 extend-filesystems[1587]: Checking size of /dev/sda9 May 13 12:51:52.971354 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing passwd entry cache May 13 12:51:52.962856 oslogin_cache_refresh[1588]: Refreshing passwd entry cache May 13 12:51:52.966537 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 12:51:52.971741 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 12:51:52.973390 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 12:51:52.973888 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 12:51:52.975557 systemd[1]: Starting update-engine.service - Update Engine... May 13 12:51:52.980082 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 12:51:52.984849 extend-filesystems[1587]: Old size kept for /dev/sda9 May 13 12:51:52.984849 extend-filesystems[1587]: Found sr0 May 13 12:51:52.982209 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... 
May 13 12:51:52.985582 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting users, quitting May 13 12:51:52.985577 oslogin_cache_refresh[1588]: Failure getting users, quitting May 13 12:51:52.985629 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 13 12:51:52.985629 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Refreshing group entry cache May 13 12:51:52.985590 oslogin_cache_refresh[1588]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 13 12:51:52.985622 oslogin_cache_refresh[1588]: Refreshing group entry cache May 13 12:51:52.992543 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Failure getting groups, quitting May 13 12:51:52.992543 google_oslogin_nss_cache[1588]: oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 13 12:51:52.988610 oslogin_cache_refresh[1588]: Failure getting groups, quitting May 13 12:51:52.990006 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 12:51:52.988618 oslogin_cache_refresh[1588]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 13 12:51:52.993615 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 12:51:52.993885 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 12:51:52.994030 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 12:51:52.994173 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 12:51:52.994278 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 12:51:52.994712 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 13 12:51:52.994826 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 13 12:51:53.011489 systemd[1]: motdgen.service: Deactivated successfully. May 13 12:51:53.011618 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 12:51:53.018440 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 13 12:51:53.024568 jq[1599]: true May 13 12:51:53.019379 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 12:51:53.019522 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 12:51:53.025529 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 13 12:53:14.099249 systemd-resolved[1490]: Clock change detected. Flushing caches. May 13 12:53:14.099321 systemd-timesyncd[1510]: Contacted time server 172.234.37.140:123 (0.flatcar.pool.ntp.org). May 13 12:53:14.099348 systemd-timesyncd[1510]: Initial clock synchronization to Tue 2025-05-13 12:53:14.099222 UTC. May 13 12:53:14.115577 update_engine[1597]: I20250513 12:53:14.115036 1597 main.cc:92] Flatcar Update Engine starting May 13 12:53:14.119876 tar[1610]: linux-amd64/helm May 13 12:53:14.124148 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 12:53:14.136361 jq[1625]: true May 13 12:53:14.141031 dbus-daemon[1584]: [system] SELinux support is enabled May 13 12:53:14.141149 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 13 12:53:14.143064 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 12:53:14.143081 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 12:53:14.144616 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 12:53:14.144627 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 12:53:14.158980 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 13 12:53:14.160652 update_engine[1597]: I20250513 12:53:14.159641 1597 update_check_scheduler.cc:74] Next update check in 11m48s May 13 12:53:14.159216 systemd[1]: Started update-engine.service - Update Engine. May 13 12:53:14.167957 unknown[1621]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 13 12:53:14.173411 unknown[1621]: Core dump limit set to -1 May 13 12:53:14.174036 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 12:53:14.220407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 12:53:14.228364 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 13 12:53:14.224746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 12:53:14.241549 bash[1648]: Updated "/home/core/.ssh/authorized_keys" May 13 12:53:14.242801 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 12:53:14.243249 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 12:53:14.266416 systemd-logind[1596]: New seat seat0. May 13 12:53:14.266904 systemd[1]: Started systemd-logind.service - User Login Management. May 13 12:53:14.270036 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 12:53:14.306048 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 12:53:14.313357 sshd_keygen[1619]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 12:53:14.376748 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 12:53:14.381414 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 12:53:14.406728 systemd[1]: issuegen.service: Deactivated successfully. May 13 12:53:14.407351 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 12:53:14.411797 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 12:53:14.432577 containerd[1626]: time="2025-05-13T12:53:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 12:53:14.432577 containerd[1626]: time="2025-05-13T12:53:14.430752435Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 13 12:53:14.443170 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
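Above, update_engine schedules its next check in 11m48s and locksmithd starts with the "reboot" strategy. As a sketch only, assuming the standard Flatcar tooling is present on the image, both can be inspected interactively:

# Show the current update-engine state (IDLE, CHECKING_FOR_UPDATE, UPDATED_NEED_REBOOT, ...).
update_engine_client -status
# Show locksmith's reboot-coordination status and the active strategy.
locksmithctl status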
May 13 12:53:14.443998 containerd[1626]: time="2025-05-13T12:53:14.443967724Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.248µs" May 13 12:53:14.443998 containerd[1626]: time="2025-05-13T12:53:14.443995094Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 12:53:14.444051 containerd[1626]: time="2025-05-13T12:53:14.444010941Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 12:53:14.444432 containerd[1626]: time="2025-05-13T12:53:14.444125671Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 12:53:14.445730 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 12:53:14.447125 containerd[1626]: time="2025-05-13T12:53:14.447102521Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 12:53:14.447166 containerd[1626]: time="2025-05-13T12:53:14.447139628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:53:14.447207 containerd[1626]: time="2025-05-13T12:53:14.447195223Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:53:14.447224 containerd[1626]: time="2025-05-13T12:53:14.447205944Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:53:14.447382 containerd[1626]: time="2025-05-13T12:53:14.447361850Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:53:14.447407 containerd[1626]: time="2025-05-13T12:53:14.447380622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:53:14.447407 containerd[1626]: time="2025-05-13T12:53:14.447393574Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:53:14.447407 containerd[1626]: time="2025-05-13T12:53:14.447399045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 12:53:14.447455 containerd[1626]: time="2025-05-13T12:53:14.447445585Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 12:53:14.448161 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 12:53:14.448377 systemd[1]: Reached target getty.target - Login Prompts. 
May 13 12:53:14.455825 containerd[1626]: time="2025-05-13T12:53:14.455796568Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:53:14.455888 containerd[1626]: time="2025-05-13T12:53:14.455836964Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:53:14.455888 containerd[1626]: time="2025-05-13T12:53:14.455845209Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 12:53:14.455888 containerd[1626]: time="2025-05-13T12:53:14.455864990Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 12:53:14.456005 containerd[1626]: time="2025-05-13T12:53:14.455992289Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 12:53:14.456045 containerd[1626]: time="2025-05-13T12:53:14.456034241Z" level=info msg="metadata content store policy set" policy=shared May 13 12:53:14.462038 containerd[1626]: time="2025-05-13T12:53:14.462007024Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 12:53:14.462038 containerd[1626]: time="2025-05-13T12:53:14.462046896Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462057920Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462066423Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462077604Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462085135Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462093620Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462102953Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462109375Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462114768Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462119499Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 12:53:14.462156 containerd[1626]: time="2025-05-13T12:53:14.462127342Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462196756Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 12:53:14.462288 containerd[1626]: 
time="2025-05-13T12:53:14.462213920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462223437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462229186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462234883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462240264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462246128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462252208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462258612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462264479Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 12:53:14.462288 containerd[1626]: time="2025-05-13T12:53:14.462272767Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 12:53:14.462478 containerd[1626]: time="2025-05-13T12:53:14.462308676Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 12:53:14.462478 containerd[1626]: time="2025-05-13T12:53:14.462316921Z" level=info msg="Start snapshots syncer" May 13 12:53:14.462478 containerd[1626]: time="2025-05-13T12:53:14.462331509Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 12:53:14.462517 containerd[1626]: time="2025-05-13T12:53:14.462470776Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 12:53:14.462517 containerd[1626]: time="2025-05-13T12:53:14.462500115Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462542790Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462602727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462614543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462620683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462626588Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462635799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462642229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 12:53:14.462651 containerd[1626]: time="2025-05-13T12:53:14.462647783Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 12:53:14.462751 containerd[1626]: time="2025-05-13T12:53:14.462662203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 12:53:14.462751 containerd[1626]: 
time="2025-05-13T12:53:14.462668679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 12:53:14.462751 containerd[1626]: time="2025-05-13T12:53:14.462674192Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465303261Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465333788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465340991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465347065Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465351773Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465357361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465363837Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465374340Z" level=info msg="runtime interface created" May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465377350Z" level=info msg="created NRI interface" May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465381869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465392108Z" level=info msg="Connect containerd service" May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465419330Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 12:53:14.466164 containerd[1626]: time="2025-05-13T12:53:14.465925965Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:53:14.512577 (udev-worker)[1543]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 13 12:53:14.545868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
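The containerd error above about failing to load CNI config is expected at this point in boot: /etc/cni/net.d is empty until a network add-on installs a configuration, and the CRI plugin retries later. Purely as an illustration of the kind of file that would satisfy it (the network name and subnet below are assumptions, not values from this system; real clusters normally get this file from their CNI add-on):

# Hypothetical minimal bridge configuration for /etc/cni/net.d.
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-example.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.88.0.0/16"}]],
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
EOF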
May 13 12:53:14.556029 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 12:53:14.577013 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Power Button) May 13 12:53:14.615250 containerd[1626]: time="2025-05-13T12:53:14.615227534Z" level=info msg="Start subscribing containerd event" May 13 12:53:14.615354 containerd[1626]: time="2025-05-13T12:53:14.615335830Z" level=info msg="Start recovering state" May 13 12:53:14.615435 containerd[1626]: time="2025-05-13T12:53:14.615428168Z" level=info msg="Start event monitor" May 13 12:53:14.615469 containerd[1626]: time="2025-05-13T12:53:14.615463895Z" level=info msg="Start cni network conf syncer for default" May 13 12:53:14.615497 containerd[1626]: time="2025-05-13T12:53:14.615492240Z" level=info msg="Start streaming server" May 13 12:53:14.615534 containerd[1626]: time="2025-05-13T12:53:14.615528505Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 12:53:14.615591 containerd[1626]: time="2025-05-13T12:53:14.615584373Z" level=info msg="runtime interface starting up..." May 13 12:53:14.615620 containerd[1626]: time="2025-05-13T12:53:14.615614963Z" level=info msg="starting plugins..." May 13 12:53:14.615651 containerd[1626]: time="2025-05-13T12:53:14.615645567Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 12:53:14.615867 containerd[1626]: time="2025-05-13T12:53:14.615857743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 12:53:14.616720 containerd[1626]: time="2025-05-13T12:53:14.616709748Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 12:53:14.618640 containerd[1626]: time="2025-05-13T12:53:14.617650532Z" level=info msg="containerd successfully booted in 0.187723s" May 13 12:53:14.619079 systemd[1]: Started containerd.service - containerd container runtime. May 13 12:53:14.711880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:53:14.777117 tar[1610]: linux-amd64/LICENSE May 13 12:53:14.777177 tar[1610]: linux-amd64/README.md May 13 12:53:14.784678 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 12:53:15.402756 systemd-networkd[1540]: ens192: Gained IPv6LL May 13 12:53:15.404326 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 12:53:15.404750 systemd[1]: Reached target network-online.target - Network is Online. May 13 12:53:15.405839 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 13 12:53:15.407004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:53:15.412622 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 12:53:15.432444 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 12:53:15.445848 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 12:53:15.446674 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 13 12:53:15.447133 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 12:53:16.188662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:53:16.189152 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 12:53:16.189484 systemd[1]: Startup finished in 2.574s (kernel) + 6.415s (initrd) + 4.144s (userspace) = 13.134s. 
May 13 12:53:16.195830 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:53:16.224121 login[1716]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 13 12:53:16.224525 login[1714]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 12:53:16.229722 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 12:53:16.230380 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 12:53:16.235434 systemd-logind[1596]: New session 2 of user core. May 13 12:53:16.244406 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 12:53:16.246404 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 12:53:16.255992 (systemd)[1810]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 12:53:16.257985 systemd-logind[1596]: New session c1 of user core. May 13 12:53:16.355892 systemd[1810]: Queued start job for default target default.target. May 13 12:53:16.359488 systemd[1810]: Created slice app.slice - User Application Slice. May 13 12:53:16.359576 systemd[1810]: Reached target paths.target - Paths. May 13 12:53:16.359653 systemd[1810]: Reached target timers.target - Timers. May 13 12:53:16.360598 systemd[1810]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 12:53:16.367569 systemd[1810]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 12:53:16.368018 systemd[1810]: Reached target sockets.target - Sockets. May 13 12:53:16.368046 systemd[1810]: Reached target basic.target - Basic System. May 13 12:53:16.368068 systemd[1810]: Reached target default.target - Main User Target. May 13 12:53:16.368084 systemd[1810]: Startup finished in 106ms. May 13 12:53:16.368306 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 12:53:16.369448 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 12:53:16.671303 kubelet[1803]: E0513 12:53:16.671235 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:53:16.672693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:53:16.672788 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:53:16.672991 systemd[1]: kubelet.service: Consumed 566ms CPU time, 235.4M memory peak. May 13 12:53:17.225537 login[1716]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 12:53:17.228260 systemd-logind[1596]: New session 1 of user core. May 13 12:53:17.236673 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 12:53:26.923289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 12:53:26.924972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:53:27.381389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
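The kubelet failure above, and the identical ones that follow, all trace back to the same missing file, /var/lib/kubelet/config.yaml. On a kubeadm-managed node that file is only written by "kubeadm init" or "kubeadm join", so the error simply means the node has not been bootstrapped yet, and the unit keeps restarting until that happens. A rough sketch of the expected sequence (host, token and hash are placeholders, not values from this log):

# On a control-plane node; writes /var/lib/kubelet/config.yaml among other things.
kubeadm init --kubernetes-version v1.31.8
# Or, on a worker node, using a join command printed on the control plane by
# 'kubeadm token create --print-join-command':
kubeadm join <control-plane-host>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>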
May 13 12:53:27.389854 (kubelet)[1852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:53:27.417090 kubelet[1852]: E0513 12:53:27.417041 1852 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:53:27.419325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:53:27.419412 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:53:27.419628 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.8M memory peak. May 13 12:53:33.528386 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 12:53:33.529158 systemd[1]: Started sshd@0-139.178.70.104:22-64.62.156.172:30155.service - OpenSSH per-connection server daemon (64.62.156.172:30155). May 13 12:53:33.560423 sshd[1860]: banner exchange: Connection from 64.62.156.172 port 30155: invalid format May 13 12:53:33.559965 systemd[1]: sshd@0-139.178.70.104:22-64.62.156.172:30155.service: Deactivated successfully. May 13 12:53:37.456328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 12:53:37.457784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:53:37.795011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:53:37.800766 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:53:37.845712 kubelet[1871]: E0513 12:53:37.845674 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:53:37.847343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:53:37.847507 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:53:37.847797 systemd[1]: kubelet.service: Consumed 97ms CPU time, 95.9M memory peak. May 13 12:53:44.326695 systemd[1]: Started sshd@1-139.178.70.104:22-147.75.109.163:49602.service - OpenSSH per-connection server daemon (147.75.109.163:49602). May 13 12:53:44.367318 sshd[1879]: Accepted publickey for core from 147.75.109.163 port 49602 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:53:44.368117 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:53:44.371059 systemd-logind[1596]: New session 3 of user core. May 13 12:53:44.376625 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 12:53:44.428726 systemd[1]: Started sshd@2-139.178.70.104:22-147.75.109.163:49610.service - OpenSSH per-connection server daemon (147.75.109.163:49610). May 13 12:53:44.466206 sshd[1884]: Accepted publickey for core from 147.75.109.163 port 49610 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:53:44.467076 sshd-session[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:53:44.470605 systemd-logind[1596]: New session 4 of user core. 
May 13 12:53:44.475631 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 12:53:44.524788 sshd[1886]: Connection closed by 147.75.109.163 port 49610 May 13 12:53:44.525639 sshd-session[1884]: pam_unix(sshd:session): session closed for user core May 13 12:53:44.532191 systemd[1]: sshd@2-139.178.70.104:22-147.75.109.163:49610.service: Deactivated successfully. May 13 12:53:44.533156 systemd[1]: session-4.scope: Deactivated successfully. May 13 12:53:44.533840 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit. May 13 12:53:44.535240 systemd[1]: Started sshd@3-139.178.70.104:22-147.75.109.163:49624.service - OpenSSH per-connection server daemon (147.75.109.163:49624). May 13 12:53:44.537064 systemd-logind[1596]: Removed session 4. May 13 12:53:44.582845 sshd[1892]: Accepted publickey for core from 147.75.109.163 port 49624 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:53:44.583609 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:53:44.586172 systemd-logind[1596]: New session 5 of user core. May 13 12:53:44.594759 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 12:53:44.640788 sshd[1894]: Connection closed by 147.75.109.163 port 49624 May 13 12:53:44.641174 sshd-session[1892]: pam_unix(sshd:session): session closed for user core May 13 12:53:44.649216 systemd[1]: sshd@3-139.178.70.104:22-147.75.109.163:49624.service: Deactivated successfully. May 13 12:53:44.650215 systemd[1]: session-5.scope: Deactivated successfully. May 13 12:53:44.651131 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit. May 13 12:53:44.652888 systemd[1]: Started sshd@4-139.178.70.104:22-147.75.109.163:49636.service - OpenSSH per-connection server daemon (147.75.109.163:49636). May 13 12:53:44.654597 systemd-logind[1596]: Removed session 5. May 13 12:53:44.694982 sshd[1900]: Accepted publickey for core from 147.75.109.163 port 49636 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:53:44.695731 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:53:44.698411 systemd-logind[1596]: New session 6 of user core. May 13 12:53:44.707711 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 12:53:44.756476 sshd[1902]: Connection closed by 147.75.109.163 port 49636 May 13 12:53:44.757361 sshd-session[1900]: pam_unix(sshd:session): session closed for user core May 13 12:53:44.767181 systemd[1]: sshd@4-139.178.70.104:22-147.75.109.163:49636.service: Deactivated successfully. May 13 12:53:44.769211 systemd[1]: session-6.scope: Deactivated successfully. May 13 12:53:44.769915 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit. May 13 12:53:44.771643 systemd[1]: Started sshd@5-139.178.70.104:22-147.75.109.163:49638.service - OpenSSH per-connection server daemon (147.75.109.163:49638). May 13 12:53:44.772827 systemd-logind[1596]: Removed session 6. May 13 12:53:44.816484 sshd[1908]: Accepted publickey for core from 147.75.109.163 port 49638 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:53:44.817248 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:53:44.820246 systemd-logind[1596]: New session 7 of user core. May 13 12:53:44.825660 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 12:53:44.887031 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 12:53:44.887238 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:53:44.897219 sudo[1911]: pam_unix(sudo:session): session closed for user root May 13 12:53:44.898126 sshd[1910]: Connection closed by 147.75.109.163 port 49638 May 13 12:53:44.898934 sshd-session[1908]: pam_unix(sshd:session): session closed for user core May 13 12:53:44.909255 systemd[1]: sshd@5-139.178.70.104:22-147.75.109.163:49638.service: Deactivated successfully. May 13 12:53:44.910455 systemd[1]: session-7.scope: Deactivated successfully. May 13 12:53:44.911120 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit. May 13 12:53:44.913330 systemd[1]: Started sshd@6-139.178.70.104:22-147.75.109.163:49642.service - OpenSSH per-connection server daemon (147.75.109.163:49642). May 13 12:53:44.914089 systemd-logind[1596]: Removed session 7. May 13 12:53:44.950060 sshd[1917]: Accepted publickey for core from 147.75.109.163 port 49642 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:53:44.950848 sshd-session[1917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:53:44.953435 systemd-logind[1596]: New session 8 of user core. May 13 12:53:44.961796 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 12:53:45.009315 sudo[1921]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 12:53:45.009668 sudo[1921]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:53:45.012031 sudo[1921]: pam_unix(sudo:session): session closed for user root May 13 12:53:45.014941 sudo[1920]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 12:53:45.015091 sudo[1920]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:53:45.020863 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:53:45.043948 augenrules[1943]: No rules May 13 12:53:45.044523 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:53:45.044686 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 12:53:45.045398 sudo[1920]: pam_unix(sudo:session): session closed for user root May 13 12:53:45.046244 sshd[1919]: Connection closed by 147.75.109.163 port 49642 May 13 12:53:45.046511 sshd-session[1917]: pam_unix(sshd:session): session closed for user core May 13 12:53:45.051731 systemd[1]: sshd@6-139.178.70.104:22-147.75.109.163:49642.service: Deactivated successfully. May 13 12:53:45.052634 systemd[1]: session-8.scope: Deactivated successfully. May 13 12:53:45.053085 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit. May 13 12:53:45.054861 systemd[1]: Started sshd@7-139.178.70.104:22-147.75.109.163:49654.service - OpenSSH per-connection server daemon (147.75.109.163:49654). May 13 12:53:45.055423 systemd-logind[1596]: Removed session 8. May 13 12:53:45.091763 sshd[1952]: Accepted publickey for core from 147.75.109.163 port 49654 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:53:45.092498 sshd-session[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:53:45.095049 systemd-logind[1596]: New session 9 of user core. May 13 12:53:45.102692 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 13 12:53:45.150289 sudo[1955]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 12:53:45.150445 sudo[1955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:53:45.468580 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 12:53:45.479846 (dockerd)[1972]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 12:53:45.797713 dockerd[1972]: time="2025-05-13T12:53:45.797638914Z" level=info msg="Starting up" May 13 12:53:45.798114 dockerd[1972]: time="2025-05-13T12:53:45.798083288Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 12:53:45.835808 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3286837927-merged.mount: Deactivated successfully. May 13 12:53:45.911343 dockerd[1972]: time="2025-05-13T12:53:45.911310341Z" level=info msg="Loading containers: start." May 13 12:53:45.948602 kernel: Initializing XFRM netlink socket May 13 12:53:46.206837 systemd-networkd[1540]: docker0: Link UP May 13 12:53:46.208835 dockerd[1972]: time="2025-05-13T12:53:46.208815233Z" level=info msg="Loading containers: done." May 13 12:53:46.217226 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3111455326-merged.mount: Deactivated successfully. May 13 12:53:46.219772 dockerd[1972]: time="2025-05-13T12:53:46.219720636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 12:53:46.219772 dockerd[1972]: time="2025-05-13T12:53:46.219771416Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 13 12:53:46.219854 dockerd[1972]: time="2025-05-13T12:53:46.219825029Z" level=info msg="Initializing buildkit" May 13 12:53:46.229372 dockerd[1972]: time="2025-05-13T12:53:46.229295473Z" level=info msg="Completed buildkit initialization" May 13 12:53:46.233531 dockerd[1972]: time="2025-05-13T12:53:46.233513964Z" level=info msg="Daemon has completed initialization" May 13 12:53:46.233649 dockerd[1972]: time="2025-05-13T12:53:46.233624636Z" level=info msg="API listen on /run/docker.sock" May 13 12:53:46.233823 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 12:53:47.138781 containerd[1626]: time="2025-05-13T12:53:47.138719685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 13 12:53:47.888360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 12:53:47.889918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:53:47.897184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974787260.mount: Deactivated successfully. May 13 12:53:48.075442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 12:53:48.077747 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:53:48.110692 kubelet[2197]: E0513 12:53:48.110660 2197 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:53:48.112144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:53:48.112234 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:53:48.112539 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.4M memory peak. May 13 12:53:48.995566 containerd[1626]: time="2025-05-13T12:53:48.995417122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:48.996637 containerd[1626]: time="2025-05-13T12:53:48.996603060Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 13 12:53:48.997219 containerd[1626]: time="2025-05-13T12:53:48.997202262Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:48.999923 containerd[1626]: time="2025-05-13T12:53:48.999906146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:49.000568 containerd[1626]: time="2025-05-13T12:53:49.000495700Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.861750753s" May 13 12:53:49.000568 containerd[1626]: time="2025-05-13T12:53:49.000523162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 13 12:53:49.001715 containerd[1626]: time="2025-05-13T12:53:49.001698973Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 13 12:53:50.860358 containerd[1626]: time="2025-05-13T12:53:50.860318914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:50.867526 containerd[1626]: time="2025-05-13T12:53:50.867499256Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 13 12:53:50.874569 containerd[1626]: time="2025-05-13T12:53:50.874513011Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:50.880189 containerd[1626]: time="2025-05-13T12:53:50.880159274Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:50.881589 containerd[1626]: time="2025-05-13T12:53:50.881551418Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.879833424s" May 13 12:53:50.881788 containerd[1626]: time="2025-05-13T12:53:50.881588838Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 13 12:53:50.881973 containerd[1626]: time="2025-05-13T12:53:50.881923753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 13 12:53:52.177337 containerd[1626]: time="2025-05-13T12:53:52.177235568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:52.177952 containerd[1626]: time="2025-05-13T12:53:52.177936848Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 13 12:53:52.178089 containerd[1626]: time="2025-05-13T12:53:52.178075336Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:52.179662 containerd[1626]: time="2025-05-13T12:53:52.179647901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:52.180565 containerd[1626]: time="2025-05-13T12:53:52.180542980Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.298556708s" May 13 12:53:52.180593 containerd[1626]: time="2025-05-13T12:53:52.180573757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 13 12:53:52.180833 containerd[1626]: time="2025-05-13T12:53:52.180818650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 12:53:53.448379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294318325.mount: Deactivated successfully. 
May 13 12:53:53.934583 containerd[1626]: time="2025-05-13T12:53:53.934366075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:53.944365 containerd[1626]: time="2025-05-13T12:53:53.944325430Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 13 12:53:53.953110 containerd[1626]: time="2025-05-13T12:53:53.953066396Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:53.962088 containerd[1626]: time="2025-05-13T12:53:53.962049679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:53.962599 containerd[1626]: time="2025-05-13T12:53:53.962391254Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.781554448s" May 13 12:53:53.962599 containerd[1626]: time="2025-05-13T12:53:53.962414352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 13 12:53:53.962875 containerd[1626]: time="2025-05-13T12:53:53.962851407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 12:53:54.458195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044779179.mount: Deactivated successfully. 
May 13 12:53:55.449348 containerd[1626]: time="2025-05-13T12:53:55.449301398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:55.452598 containerd[1626]: time="2025-05-13T12:53:55.452580253Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 13 12:53:55.458445 containerd[1626]: time="2025-05-13T12:53:55.458396163Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:55.469084 containerd[1626]: time="2025-05-13T12:53:55.469030051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:55.469714 containerd[1626]: time="2025-05-13T12:53:55.469599556Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.506727844s" May 13 12:53:55.469714 containerd[1626]: time="2025-05-13T12:53:55.469622203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 12:53:55.469892 containerd[1626]: time="2025-05-13T12:53:55.469875166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 12:53:56.058445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197717936.mount: Deactivated successfully. 
May 13 12:53:56.085161 containerd[1626]: time="2025-05-13T12:53:56.084650865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:53:56.085675 containerd[1626]: time="2025-05-13T12:53:56.085641997Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 12:53:56.086621 containerd[1626]: time="2025-05-13T12:53:56.086594428Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:53:56.087410 containerd[1626]: time="2025-05-13T12:53:56.087390008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:53:56.088057 containerd[1626]: time="2025-05-13T12:53:56.088039461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 618.147907ms" May 13 12:53:56.088130 containerd[1626]: time="2025-05-13T12:53:56.088119064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 12:53:56.088544 containerd[1626]: time="2025-05-13T12:53:56.088483805Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 13 12:53:56.875883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011022687.mount: Deactivated successfully. May 13 12:53:58.206091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 12:53:58.207979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:53:59.067916 update_engine[1597]: I20250513 12:53:59.067689 1597 update_attempter.cc:509] Updating boot flags... May 13 12:53:59.263460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:53:59.270797 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:53:59.646183 kubelet[2387]: E0513 12:53:59.646149 2387 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:53:59.648598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:53:59.648757 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:53:59.649032 systemd[1]: kubelet.service: Consumed 101ms CPU time, 91M memory peak. 
May 13 12:53:59.963530 containerd[1626]: time="2025-05-13T12:53:59.962793708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:59.971217 containerd[1626]: time="2025-05-13T12:53:59.971201237Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 13 12:53:59.979852 containerd[1626]: time="2025-05-13T12:53:59.979836180Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:59.991624 containerd[1626]: time="2025-05-13T12:53:59.991606575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:59.992055 containerd[1626]: time="2025-05-13T12:53:59.992031747Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.903507091s" May 13 12:53:59.992095 containerd[1626]: time="2025-05-13T12:53:59.992056366Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 13 12:54:02.124208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:02.124599 systemd[1]: kubelet.service: Consumed 101ms CPU time, 91M memory peak. May 13 12:54:02.127118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:02.151153 systemd[1]: Reload requested from client PID 2426 ('systemctl') (unit session-9.scope)... May 13 12:54:02.151163 systemd[1]: Reloading... May 13 12:54:02.213579 zram_generator::config[2471]: No configuration found. May 13 12:54:02.282251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:54:02.291430 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:54:02.363053 systemd[1]: Reloading finished in 211 ms. May 13 12:54:02.398949 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 12:54:02.399021 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 12:54:02.399306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:02.400846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:02.776081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:02.788791 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:54:02.816483 kubelet[2537]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 12:54:02.816483 kubelet[2537]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 12:54:02.816483 kubelet[2537]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:54:02.819459 kubelet[2537]: I0513 12:54:02.819252 2537 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:54:03.218834 kubelet[2537]: I0513 12:54:03.218803 2537 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 12:54:03.218834 kubelet[2537]: I0513 12:54:03.218826 2537 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:54:03.218988 kubelet[2537]: I0513 12:54:03.218975 2537 server.go:929] "Client rotation is on, will bootstrap in background" May 13 12:54:03.516280 kubelet[2537]: I0513 12:54:03.515847 2537 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:54:03.519194 kubelet[2537]: E0513 12:54:03.519163 2537 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:03.529338 kubelet[2537]: I0513 12:54:03.529322 2537 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:54:03.532732 kubelet[2537]: I0513 12:54:03.532709 2537 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:54:03.533951 kubelet[2537]: I0513 12:54:03.533933 2537 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 12:54:03.534085 kubelet[2537]: I0513 12:54:03.534055 2537 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:54:03.534200 kubelet[2537]: I0513 12:54:03.534082 2537 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:54:03.534289 kubelet[2537]: I0513 12:54:03.534201 2537 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:54:03.534289 kubelet[2537]: I0513 12:54:03.534208 2537 container_manager_linux.go:300] "Creating device plugin manager" May 13 12:54:03.534289 kubelet[2537]: I0513 12:54:03.534287 2537 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:03.536112 kubelet[2537]: I0513 12:54:03.536040 2537 kubelet.go:408] "Attempting to sync node with API server" May 13 12:54:03.536112 kubelet[2537]: I0513 12:54:03.536062 2537 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:54:03.538646 kubelet[2537]: I0513 12:54:03.538537 2537 kubelet.go:314] "Adding apiserver pod source" May 13 12:54:03.538646 kubelet[2537]: I0513 12:54:03.538580 2537 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:54:03.542773 kubelet[2537]: W0513 12:54:03.542719 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:03.542773 kubelet[2537]: E0513 12:54:03.542774 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:03.544808 kubelet[2537]: W0513 12:54:03.544396 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:03.544808 kubelet[2537]: E0513 12:54:03.544417 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:03.544808 kubelet[2537]: I0513 12:54:03.544703 2537 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:54:03.546154 kubelet[2537]: I0513 12:54:03.546062 2537 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:54:03.546785 kubelet[2537]: W0513 12:54:03.546770 2537 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 12:54:03.548580 kubelet[2537]: I0513 12:54:03.548300 2537 server.go:1269] "Started kubelet" May 13 12:54:03.549613 kubelet[2537]: I0513 12:54:03.549601 2537 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:54:03.555127 kubelet[2537]: I0513 12:54:03.555099 2537 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:54:03.558572 kubelet[2537]: I0513 12:54:03.558388 2537 server.go:460] "Adding debug handlers to kubelet server" May 13 12:54:03.558991 kubelet[2537]: I0513 12:54:03.558958 2537 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:54:03.559223 kubelet[2537]: I0513 12:54:03.559214 2537 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:54:03.560136 kubelet[2537]: I0513 12:54:03.559984 2537 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 12:54:03.560182 kubelet[2537]: E0513 12:54:03.560156 2537 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:03.563386 kubelet[2537]: I0513 12:54:03.562931 2537 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 12:54:03.563386 kubelet[2537]: I0513 12:54:03.562984 2537 reconciler.go:26] "Reconciler: start to sync state" May 13 12:54:03.565171 kubelet[2537]: I0513 12:54:03.564988 2537 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:54:03.565521 kubelet[2537]: E0513 12:54:03.565502 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms" May 13 12:54:03.565746 kubelet[2537]: I0513 12:54:03.565736 2537 factory.go:221] Registration of the systemd container factory successfully May 13 
12:54:03.565823 kubelet[2537]: I0513 12:54:03.565814 2537 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:54:03.582372 kubelet[2537]: I0513 12:54:03.582348 2537 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:54:03.583746 kubelet[2537]: E0513 12:54:03.583663 2537 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:54:03.583804 kubelet[2537]: W0513 12:54:03.583776 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:03.583842 kubelet[2537]: E0513 12:54:03.583809 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:03.583893 kubelet[2537]: E0513 12:54:03.575591 2537 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f1757284586ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:54:03.548280554 +0000 UTC m=+0.757472861,LastTimestamp:2025-05-13 12:54:03.548280554 +0000 UTC m=+0.757472861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:54:03.584286 kubelet[2537]: I0513 12:54:03.584274 2537 factory.go:221] Registration of the containerd container factory successfully May 13 12:54:03.623184 kubelet[2537]: I0513 12:54:03.622650 2537 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 12:54:03.623184 kubelet[2537]: I0513 12:54:03.622679 2537 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 12:54:03.623184 kubelet[2537]: I0513 12:54:03.622693 2537 kubelet.go:2321] "Starting kubelet main sync loop" May 13 12:54:03.623184 kubelet[2537]: E0513 12:54:03.622715 2537 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:54:03.624957 kubelet[2537]: W0513 12:54:03.624927 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:03.624998 kubelet[2537]: E0513 12:54:03.624959 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:03.630744 kubelet[2537]: I0513 12:54:03.630725 2537 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 12:54:03.630744 kubelet[2537]: I0513 12:54:03.630737 2537 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 12:54:03.630744 kubelet[2537]: I0513 12:54:03.630747 2537 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:03.634870 kubelet[2537]: I0513 12:54:03.634854 2537 policy_none.go:49] "None policy: Start" May 13 12:54:03.635333 kubelet[2537]: I0513 12:54:03.635178 2537 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 12:54:03.635333 kubelet[2537]: I0513 12:54:03.635190 2537 state_mem.go:35] "Initializing new in-memory state store" May 13 12:54:03.640648 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 12:54:03.658982 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 12:54:03.662501 kubelet[2537]: E0513 12:54:03.662482 2537 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:54:03.665766 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 12:54:03.666612 kubelet[2537]: I0513 12:54:03.666600 2537 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:54:03.666976 kubelet[2537]: I0513 12:54:03.666915 2537 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:54:03.666976 kubelet[2537]: I0513 12:54:03.666924 2537 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:54:03.667595 kubelet[2537]: I0513 12:54:03.667583 2537 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:54:03.668285 kubelet[2537]: E0513 12:54:03.668270 2537 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 12:54:03.733892 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 13 12:54:03.753512 systemd[1]: Created slice kubepods-burstable-poda0a1819df9dd05c001253ad7fe34bd2d.slice - libcontainer container kubepods-burstable-poda0a1819df9dd05c001253ad7fe34bd2d.slice. May 13 12:54:03.763279 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 12:54:03.764271 kubelet[2537]: I0513 12:54:03.764127 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 12:54:03.764271 kubelet[2537]: I0513 12:54:03.764146 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0a1819df9dd05c001253ad7fe34bd2d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0a1819df9dd05c001253ad7fe34bd2d\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:03.764271 kubelet[2537]: I0513 12:54:03.764160 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:03.764271 kubelet[2537]: I0513 12:54:03.764170 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:03.764271 kubelet[2537]: I0513 12:54:03.764181 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0a1819df9dd05c001253ad7fe34bd2d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0a1819df9dd05c001253ad7fe34bd2d\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:03.764406 kubelet[2537]: I0513 12:54:03.764191 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0a1819df9dd05c001253ad7fe34bd2d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0a1819df9dd05c001253ad7fe34bd2d\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:03.764406 kubelet[2537]: I0513 12:54:03.764201 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:03.764406 kubelet[2537]: I0513 12:54:03.764210 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 12:54:03.764406 kubelet[2537]: I0513 12:54:03.764220 2537 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:03.767592 kubelet[2537]: E0513 12:54:03.766846 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms" May 13 12:54:03.768819 kubelet[2537]: I0513 12:54:03.768675 2537 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:54:03.768918 kubelet[2537]: E0513 12:54:03.768903 2537 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 13 12:54:03.970488 kubelet[2537]: I0513 12:54:03.970469 2537 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:54:03.970824 kubelet[2537]: E0513 12:54:03.970706 2537 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 13 12:54:04.050898 containerd[1626]: time="2025-05-13T12:54:04.050804332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 12:54:04.062290 containerd[1626]: time="2025-05-13T12:54:04.062257258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0a1819df9dd05c001253ad7fe34bd2d,Namespace:kube-system,Attempt:0,}" May 13 12:54:04.067225 containerd[1626]: time="2025-05-13T12:54:04.067160552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 12:54:04.167137 kubelet[2537]: E0513 12:54:04.167103 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms" May 13 12:54:04.223574 containerd[1626]: time="2025-05-13T12:54:04.223530337Z" level=info msg="connecting to shim 3516a7f5a38e46582a7ac15b63050f82c77a551dcb1cba4f40546ae0f656bf08" address="unix:///run/containerd/s/a258f1797e8c3bfa7cdaee038fb2e08169baa2390a6c02bc65b8c24940217521" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:04.225085 containerd[1626]: time="2025-05-13T12:54:04.225059562Z" level=info msg="connecting to shim 261477a58d270006fdd076c60c23d799cf93e6ea0af4e3e814946bb0417ddfd0" address="unix:///run/containerd/s/48d8536252c8d4e1b000865867bcc8f702338e69dbfec722f0b4143fc7c0f7fb" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:04.230434 containerd[1626]: time="2025-05-13T12:54:04.230166044Z" level=info msg="connecting to shim ae968b4854fe8505abcede4efce4ef7f48373136a89d0fc15bd66918910b65ce" 
address="unix:///run/containerd/s/1cef29575bba7c924532c8c6322f46f4b9c7b6b847a733b36a4ecad20ea48669" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:04.307714 systemd[1]: Started cri-containerd-261477a58d270006fdd076c60c23d799cf93e6ea0af4e3e814946bb0417ddfd0.scope - libcontainer container 261477a58d270006fdd076c60c23d799cf93e6ea0af4e3e814946bb0417ddfd0. May 13 12:54:04.309726 systemd[1]: Started cri-containerd-3516a7f5a38e46582a7ac15b63050f82c77a551dcb1cba4f40546ae0f656bf08.scope - libcontainer container 3516a7f5a38e46582a7ac15b63050f82c77a551dcb1cba4f40546ae0f656bf08. May 13 12:54:04.313671 systemd[1]: Started cri-containerd-ae968b4854fe8505abcede4efce4ef7f48373136a89d0fc15bd66918910b65ce.scope - libcontainer container ae968b4854fe8505abcede4efce4ef7f48373136a89d0fc15bd66918910b65ce. May 13 12:54:04.373396 kubelet[2537]: I0513 12:54:04.373380 2537 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:54:04.373790 kubelet[2537]: E0513 12:54:04.373775 2537 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 13 12:54:04.375425 containerd[1626]: time="2025-05-13T12:54:04.375208033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0a1819df9dd05c001253ad7fe34bd2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3516a7f5a38e46582a7ac15b63050f82c77a551dcb1cba4f40546ae0f656bf08\"" May 13 12:54:04.377961 containerd[1626]: time="2025-05-13T12:54:04.377937786Z" level=info msg="CreateContainer within sandbox \"3516a7f5a38e46582a7ac15b63050f82c77a551dcb1cba4f40546ae0f656bf08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:54:04.379586 containerd[1626]: time="2025-05-13T12:54:04.379528421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"261477a58d270006fdd076c60c23d799cf93e6ea0af4e3e814946bb0417ddfd0\"" May 13 12:54:04.381311 containerd[1626]: time="2025-05-13T12:54:04.381260876Z" level=info msg="CreateContainer within sandbox \"261477a58d270006fdd076c60c23d799cf93e6ea0af4e3e814946bb0417ddfd0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:54:04.401354 containerd[1626]: time="2025-05-13T12:54:04.401321846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae968b4854fe8505abcede4efce4ef7f48373136a89d0fc15bd66918910b65ce\"" May 13 12:54:04.402739 containerd[1626]: time="2025-05-13T12:54:04.402708715Z" level=info msg="CreateContainer within sandbox \"ae968b4854fe8505abcede4efce4ef7f48373136a89d0fc15bd66918910b65ce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:54:04.509583 containerd[1626]: time="2025-05-13T12:54:04.509288524Z" level=info msg="Container 2bcbebfe639259505c326b8726f17363828e93d068a279ec04bba430820ea491: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:04.539456 containerd[1626]: time="2025-05-13T12:54:04.539367801Z" level=info msg="Container d77430e36503b1f9088bc7019dba41ec7b8bc9a77ed7dbda7fc0121b42b0c966: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:04.552972 containerd[1626]: time="2025-05-13T12:54:04.552937920Z" level=info msg="Container 
7a05bba43a919ec0d28ced1f2a7c16d52cecb677d5b262ec31904a8e54ad8774: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:04.555530 containerd[1626]: time="2025-05-13T12:54:04.555507123Z" level=info msg="CreateContainer within sandbox \"3516a7f5a38e46582a7ac15b63050f82c77a551dcb1cba4f40546ae0f656bf08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2bcbebfe639259505c326b8726f17363828e93d068a279ec04bba430820ea491\"" May 13 12:54:04.555939 containerd[1626]: time="2025-05-13T12:54:04.555926247Z" level=info msg="StartContainer for \"2bcbebfe639259505c326b8726f17363828e93d068a279ec04bba430820ea491\"" May 13 12:54:04.557741 containerd[1626]: time="2025-05-13T12:54:04.557714948Z" level=info msg="connecting to shim 2bcbebfe639259505c326b8726f17363828e93d068a279ec04bba430820ea491" address="unix:///run/containerd/s/a258f1797e8c3bfa7cdaee038fb2e08169baa2390a6c02bc65b8c24940217521" protocol=ttrpc version=3 May 13 12:54:04.571645 systemd[1]: Started cri-containerd-2bcbebfe639259505c326b8726f17363828e93d068a279ec04bba430820ea491.scope - libcontainer container 2bcbebfe639259505c326b8726f17363828e93d068a279ec04bba430820ea491. May 13 12:54:04.573332 kubelet[2537]: W0513 12:54:04.573284 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:04.574450 kubelet[2537]: E0513 12:54:04.573340 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:04.577684 containerd[1626]: time="2025-05-13T12:54:04.577660077Z" level=info msg="CreateContainer within sandbox \"261477a58d270006fdd076c60c23d799cf93e6ea0af4e3e814946bb0417ddfd0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d77430e36503b1f9088bc7019dba41ec7b8bc9a77ed7dbda7fc0121b42b0c966\"" May 13 12:54:04.578017 containerd[1626]: time="2025-05-13T12:54:04.578002174Z" level=info msg="StartContainer for \"d77430e36503b1f9088bc7019dba41ec7b8bc9a77ed7dbda7fc0121b42b0c966\"" May 13 12:54:04.579372 containerd[1626]: time="2025-05-13T12:54:04.579336563Z" level=info msg="connecting to shim d77430e36503b1f9088bc7019dba41ec7b8bc9a77ed7dbda7fc0121b42b0c966" address="unix:///run/containerd/s/48d8536252c8d4e1b000865867bcc8f702338e69dbfec722f0b4143fc7c0f7fb" protocol=ttrpc version=3 May 13 12:54:04.593714 systemd[1]: Started cri-containerd-d77430e36503b1f9088bc7019dba41ec7b8bc9a77ed7dbda7fc0121b42b0c966.scope - libcontainer container d77430e36503b1f9088bc7019dba41ec7b8bc9a77ed7dbda7fc0121b42b0c966. 
May 13 12:54:04.600950 containerd[1626]: time="2025-05-13T12:54:04.600925109Z" level=info msg="CreateContainer within sandbox \"ae968b4854fe8505abcede4efce4ef7f48373136a89d0fc15bd66918910b65ce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a05bba43a919ec0d28ced1f2a7c16d52cecb677d5b262ec31904a8e54ad8774\"" May 13 12:54:04.601332 containerd[1626]: time="2025-05-13T12:54:04.601302595Z" level=info msg="StartContainer for \"7a05bba43a919ec0d28ced1f2a7c16d52cecb677d5b262ec31904a8e54ad8774\"" May 13 12:54:04.601952 containerd[1626]: time="2025-05-13T12:54:04.601933501Z" level=info msg="connecting to shim 7a05bba43a919ec0d28ced1f2a7c16d52cecb677d5b262ec31904a8e54ad8774" address="unix:///run/containerd/s/1cef29575bba7c924532c8c6322f46f4b9c7b6b847a733b36a4ecad20ea48669" protocol=ttrpc version=3 May 13 12:54:04.624642 systemd[1]: Started cri-containerd-7a05bba43a919ec0d28ced1f2a7c16d52cecb677d5b262ec31904a8e54ad8774.scope - libcontainer container 7a05bba43a919ec0d28ced1f2a7c16d52cecb677d5b262ec31904a8e54ad8774. May 13 12:54:04.633570 containerd[1626]: time="2025-05-13T12:54:04.633521289Z" level=info msg="StartContainer for \"2bcbebfe639259505c326b8726f17363828e93d068a279ec04bba430820ea491\" returns successfully" May 13 12:54:04.645779 kubelet[2537]: W0513 12:54:04.645723 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:04.645779 kubelet[2537]: E0513 12:54:04.645765 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:04.649607 containerd[1626]: time="2025-05-13T12:54:04.649584745Z" level=info msg="StartContainer for \"d77430e36503b1f9088bc7019dba41ec7b8bc9a77ed7dbda7fc0121b42b0c966\" returns successfully" May 13 12:54:04.673080 containerd[1626]: time="2025-05-13T12:54:04.673059332Z" level=info msg="StartContainer for \"7a05bba43a919ec0d28ced1f2a7c16d52cecb677d5b262ec31904a8e54ad8774\" returns successfully" May 13 12:54:04.851355 kubelet[2537]: W0513 12:54:04.851243 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:04.851355 kubelet[2537]: E0513 12:54:04.851287 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:04.967492 kubelet[2537]: E0513 12:54:04.967459 2537 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" May 13 12:54:05.174971 kubelet[2537]: I0513 12:54:05.174865 2537 kubelet_node_status.go:72] "Attempting to 
register node" node="localhost" May 13 12:54:05.175523 kubelet[2537]: W0513 12:54:05.175400 2537 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused May 13 12:54:05.175523 kubelet[2537]: E0513 12:54:05.175437 2537 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:05.175646 kubelet[2537]: E0513 12:54:05.175635 2537 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" May 13 12:54:05.522745 kubelet[2537]: E0513 12:54:05.522692 2537 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" May 13 12:54:05.594266 kubelet[2537]: E0513 12:54:05.594197 2537 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f1757284586ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:54:03.548280554 +0000 UTC m=+0.757472861,LastTimestamp:2025-05-13 12:54:03.548280554 +0000 UTC m=+0.757472861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:54:06.656284 kubelet[2537]: E0513 12:54:06.656249 2537 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 12:54:06.776575 kubelet[2537]: I0513 12:54:06.776534 2537 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:54:06.784149 kubelet[2537]: I0513 12:54:06.784124 2537 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 12:54:07.547063 kubelet[2537]: I0513 12:54:07.546912 2537 apiserver.go:52] "Watching apiserver" May 13 12:54:07.563613 kubelet[2537]: I0513 12:54:07.563592 2537 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 12:54:08.545614 systemd[1]: Reload requested from client PID 2804 ('systemctl') (unit session-9.scope)... May 13 12:54:08.545808 systemd[1]: Reloading... May 13 12:54:08.600603 zram_generator::config[2847]: No configuration found. May 13 12:54:08.676726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 12:54:08.685008 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:54:08.790027 systemd[1]: Reloading finished in 243 ms. May 13 12:54:08.831237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:08.846404 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:54:08.846668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:08.846796 systemd[1]: kubelet.service: Consumed 569ms CPU time, 113.9M memory peak. May 13 12:54:08.848275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:09.476311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:09.484963 (kubelet)[2915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:54:09.616965 kubelet[2915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:54:09.617261 kubelet[2915]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 12:54:09.617291 kubelet[2915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:54:09.617368 kubelet[2915]: I0513 12:54:09.617352 2915 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:54:09.622106 kubelet[2915]: I0513 12:54:09.622085 2915 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 12:54:09.622267 kubelet[2915]: I0513 12:54:09.622260 2915 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:54:09.622473 kubelet[2915]: I0513 12:54:09.622465 2915 server.go:929] "Client rotation is on, will bootstrap in background" May 13 12:54:09.623284 kubelet[2915]: I0513 12:54:09.623275 2915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 12:54:09.626376 kubelet[2915]: I0513 12:54:09.626352 2915 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:54:09.632639 kubelet[2915]: I0513 12:54:09.632622 2915 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:54:09.635178 kubelet[2915]: I0513 12:54:09.635157 2915 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:54:09.635334 kubelet[2915]: I0513 12:54:09.635328 2915 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 12:54:09.635456 kubelet[2915]: I0513 12:54:09.635441 2915 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:54:09.635605 kubelet[2915]: I0513 12:54:09.635490 2915 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:54:09.635697 kubelet[2915]: I0513 12:54:09.635690 2915 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:54:09.635801 kubelet[2915]: I0513 12:54:09.635727 2915 container_manager_linux.go:300] "Creating device plugin manager" May 13 12:54:09.635801 kubelet[2915]: I0513 12:54:09.635748 2915 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:09.635860 kubelet[2915]: I0513 12:54:09.635856 2915 kubelet.go:408] "Attempting to sync node with API server" May 13 12:54:09.636216 kubelet[2915]: I0513 12:54:09.635892 2915 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:54:09.637021 sudo[2926]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 12:54:09.637272 kubelet[2915]: I0513 12:54:09.637261 2915 kubelet.go:314] "Adding apiserver pod source" May 13 12:54:09.637319 kubelet[2915]: I0513 12:54:09.637314 2915 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:54:09.637390 sudo[2926]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 12:54:09.642314 kubelet[2915]: I0513 12:54:09.642227 2915 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:54:09.644363 kubelet[2915]: I0513 12:54:09.644345 2915 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:54:09.645747 
kubelet[2915]: I0513 12:54:09.645258 2915 server.go:1269] "Started kubelet" May 13 12:54:09.649644 kubelet[2915]: I0513 12:54:09.649528 2915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:54:09.662165 kubelet[2915]: I0513 12:54:09.662148 2915 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 12:54:09.664756 kubelet[2915]: I0513 12:54:09.664713 2915 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:54:09.666154 kubelet[2915]: I0513 12:54:09.666115 2915 server.go:460] "Adding debug handlers to kubelet server" May 13 12:54:09.667804 kubelet[2915]: I0513 12:54:09.665004 2915 reconciler.go:26] "Reconciler: start to sync state" May 13 12:54:09.668811 kubelet[2915]: I0513 12:54:09.668092 2915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:54:09.669625 kubelet[2915]: E0513 12:54:09.669535 2915 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:54:09.670566 kubelet[2915]: I0513 12:54:09.670126 2915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:54:09.670566 kubelet[2915]: I0513 12:54:09.670278 2915 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:54:09.670566 kubelet[2915]: I0513 12:54:09.664918 2915 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 12:54:09.670566 kubelet[2915]: I0513 12:54:09.670405 2915 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:54:09.671540 kubelet[2915]: I0513 12:54:09.671524 2915 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 12:54:09.671875 kubelet[2915]: I0513 12:54:09.671868 2915 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 12:54:09.671967 kubelet[2915]: I0513 12:54:09.671937 2915 kubelet.go:2321] "Starting kubelet main sync loop" May 13 12:54:09.672054 kubelet[2915]: E0513 12:54:09.672038 2915 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:54:09.672251 kubelet[2915]: I0513 12:54:09.668457 2915 factory.go:221] Registration of the systemd container factory successfully May 13 12:54:09.672251 kubelet[2915]: I0513 12:54:09.672208 2915 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:54:09.674791 kubelet[2915]: I0513 12:54:09.674774 2915 factory.go:221] Registration of the containerd container factory successfully May 13 12:54:09.753141 kubelet[2915]: I0513 12:54:09.753067 2915 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 12:54:09.753141 kubelet[2915]: I0513 12:54:09.753081 2915 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 12:54:09.753141 kubelet[2915]: I0513 12:54:09.753101 2915 state_mem.go:36] "Initialized new in-memory state store" May 13 12:54:09.753290 kubelet[2915]: I0513 12:54:09.753215 2915 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 12:54:09.753290 kubelet[2915]: I0513 12:54:09.753223 2915 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 12:54:09.753290 kubelet[2915]: I0513 12:54:09.753236 2915 policy_none.go:49] "None policy: Start" May 13 12:54:09.753824 kubelet[2915]: I0513 12:54:09.753810 2915 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 12:54:09.753824 kubelet[2915]: I0513 12:54:09.753825 2915 state_mem.go:35] "Initializing new in-memory state store" May 13 12:54:09.755282 kubelet[2915]: I0513 12:54:09.755263 2915 state_mem.go:75] "Updated machine memory state" May 13 12:54:09.758988 kubelet[2915]: I0513 12:54:09.758968 2915 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:54:09.759568 kubelet[2915]: I0513 12:54:09.759076 2915 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:54:09.759568 kubelet[2915]: I0513 12:54:09.759085 2915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:54:09.759568 kubelet[2915]: I0513 12:54:09.759439 2915 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:54:09.867217 kubelet[2915]: I0513 12:54:09.867192 2915 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:54:09.872933 kubelet[2915]: I0513 12:54:09.872915 2915 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 12:54:09.873017 kubelet[2915]: I0513 12:54:09.872956 2915 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 12:54:09.970666 kubelet[2915]: I0513 12:54:09.970639 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 
12:54:09.970666 kubelet[2915]: I0513 12:54:09.970665 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:09.970807 kubelet[2915]: I0513 12:54:09.970684 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:09.970807 kubelet[2915]: I0513 12:54:09.970700 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 12:54:09.970807 kubelet[2915]: I0513 12:54:09.970711 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0a1819df9dd05c001253ad7fe34bd2d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0a1819df9dd05c001253ad7fe34bd2d\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:09.970807 kubelet[2915]: I0513 12:54:09.970721 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0a1819df9dd05c001253ad7fe34bd2d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0a1819df9dd05c001253ad7fe34bd2d\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:09.970807 kubelet[2915]: I0513 12:54:09.970730 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0a1819df9dd05c001253ad7fe34bd2d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0a1819df9dd05c001253ad7fe34bd2d\") " pod="kube-system/kube-apiserver-localhost" May 13 12:54:09.970895 kubelet[2915]: I0513 12:54:09.970738 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:09.970895 kubelet[2915]: I0513 12:54:09.970747 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:54:10.127860 sudo[2926]: pam_unix(sudo:session): session closed for user root May 13 12:54:10.638180 kubelet[2915]: I0513 12:54:10.638153 2915 apiserver.go:52] "Watching apiserver" May 13 12:54:10.671044 kubelet[2915]: I0513 12:54:10.671007 2915 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 
12:54:10.707244 kubelet[2915]: I0513 12:54:10.706888 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.706870716 podStartE2EDuration="1.706870716s" podCreationTimestamp="2025-05-13 12:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:10.702658937 +0000 UTC m=+1.123321567" watchObservedRunningTime="2025-05-13 12:54:10.706870716 +0000 UTC m=+1.127533334" May 13 12:54:10.711567 kubelet[2915]: I0513 12:54:10.711504 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.711489657 podStartE2EDuration="1.711489657s" podCreationTimestamp="2025-05-13 12:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:10.711133258 +0000 UTC m=+1.131795883" watchObservedRunningTime="2025-05-13 12:54:10.711489657 +0000 UTC m=+1.132152288" May 13 12:54:10.711755 kubelet[2915]: I0513 12:54:10.711698 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.711691767 podStartE2EDuration="1.711691767s" podCreationTimestamp="2025-05-13 12:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:10.70727092 +0000 UTC m=+1.127933540" watchObservedRunningTime="2025-05-13 12:54:10.711691767 +0000 UTC m=+1.132354386" May 13 12:54:10.739769 kubelet[2915]: E0513 12:54:10.739666 2915 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 12:54:11.513867 sudo[1955]: pam_unix(sudo:session): session closed for user root May 13 12:54:11.514591 sshd[1954]: Connection closed by 147.75.109.163 port 49654 May 13 12:54:11.515308 sshd-session[1952]: pam_unix(sshd:session): session closed for user core May 13 12:54:11.517674 systemd[1]: sshd@7-139.178.70.104:22-147.75.109.163:49654.service: Deactivated successfully. May 13 12:54:11.519965 systemd[1]: session-9.scope: Deactivated successfully. May 13 12:54:11.520239 systemd[1]: session-9.scope: Consumed 3.084s CPU time, 212.8M memory peak. May 13 12:54:11.521282 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit. May 13 12:54:11.522506 systemd-logind[1596]: Removed session 9. May 13 12:54:14.384350 kubelet[2915]: I0513 12:54:14.384282 2915 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 12:54:14.385082 containerd[1626]: time="2025-05-13T12:54:14.384474068Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 12:54:14.385400 kubelet[2915]: I0513 12:54:14.385161 2915 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 12:54:15.249078 systemd[1]: Created slice kubepods-besteffort-pod1bd212e7_9d77_478e_8c81_33a1cd5484a0.slice - libcontainer container kubepods-besteffort-pod1bd212e7_9d77_478e_8c81_33a1cd5484a0.slice. May 13 12:54:15.258757 systemd[1]: Created slice kubepods-burstable-podcda9e517_250e_41ec_95a8_d6fdcb18dc17.slice - libcontainer container kubepods-burstable-podcda9e517_250e_41ec_95a8_d6fdcb18dc17.slice. 
May 13 12:54:15.301365 kubelet[2915]: I0513 12:54:15.301318 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-run\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.301365 kubelet[2915]: I0513 12:54:15.301359 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-cgroup\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.301489 kubelet[2915]: I0513 12:54:15.301376 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bd212e7-9d77-478e-8c81-33a1cd5484a0-kube-proxy\") pod \"kube-proxy-sd47s\" (UID: \"1bd212e7-9d77-478e-8c81-33a1cd5484a0\") " pod="kube-system/kube-proxy-sd47s" May 13 12:54:15.301489 kubelet[2915]: I0513 12:54:15.301394 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bd212e7-9d77-478e-8c81-33a1cd5484a0-xtables-lock\") pod \"kube-proxy-sd47s\" (UID: \"1bd212e7-9d77-478e-8c81-33a1cd5484a0\") " pod="kube-system/kube-proxy-sd47s" May 13 12:54:15.301489 kubelet[2915]: I0513 12:54:15.301404 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdmg\" (UniqueName: \"kubernetes.io/projected/1bd212e7-9d77-478e-8c81-33a1cd5484a0-kube-api-access-wsdmg\") pod \"kube-proxy-sd47s\" (UID: \"1bd212e7-9d77-478e-8c81-33a1cd5484a0\") " pod="kube-system/kube-proxy-sd47s" May 13 12:54:15.301489 kubelet[2915]: I0513 12:54:15.301416 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hostproc\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.301489 kubelet[2915]: I0513 12:54:15.301433 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bd212e7-9d77-478e-8c81-33a1cd5484a0-lib-modules\") pod \"kube-proxy-sd47s\" (UID: \"1bd212e7-9d77-478e-8c81-33a1cd5484a0\") " pod="kube-system/kube-proxy-sd47s" May 13 12:54:15.301489 kubelet[2915]: I0513 12:54:15.301448 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-bpf-maps\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.301645 kubelet[2915]: I0513 12:54:15.301464 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cni-path\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.331094 systemd[1]: Created slice kubepods-besteffort-pod66c79ef7_9826_4987_ac5b_3ec5ca1f4ad6.slice - libcontainer container 
kubepods-besteffort-pod66c79ef7_9826_4987_ac5b_3ec5ca1f4ad6.slice. May 13 12:54:15.402594 kubelet[2915]: I0513 12:54:15.402538 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-kernel\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.402594 kubelet[2915]: I0513 12:54:15.402585 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7rrd\" (UniqueName: \"kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-kube-api-access-r7rrd\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.403645 kubelet[2915]: I0513 12:54:15.402694 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-config-path\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.403645 kubelet[2915]: I0513 12:54:15.402722 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-lib-modules\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.403645 kubelet[2915]: I0513 12:54:15.402741 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-xtables-lock\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.403645 kubelet[2915]: I0513 12:54:15.402777 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-etc-cni-netd\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.403645 kubelet[2915]: I0513 12:54:15.402791 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hubble-tls\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.403645 kubelet[2915]: I0513 12:54:15.402826 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-net\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.403828 kubelet[2915]: I0513 12:54:15.402836 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cda9e517-250e-41ec-95a8-d6fdcb18dc17-clustermesh-secrets\") pod \"cilium-k4qrb\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " pod="kube-system/cilium-k4qrb" May 13 12:54:15.503868 kubelet[2915]: I0513 12:54:15.503565 2915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-cilium-config-path\") pod \"cilium-operator-5d85765b45-q4qhg\" (UID: \"66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6\") " pod="kube-system/cilium-operator-5d85765b45-q4qhg" May 13 12:54:15.503868 kubelet[2915]: I0513 12:54:15.503597 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7jhm\" (UniqueName: \"kubernetes.io/projected/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-kube-api-access-v7jhm\") pod \"cilium-operator-5d85765b45-q4qhg\" (UID: \"66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6\") " pod="kube-system/cilium-operator-5d85765b45-q4qhg" May 13 12:54:15.556447 containerd[1626]: time="2025-05-13T12:54:15.556408092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sd47s,Uid:1bd212e7-9d77-478e-8c81-33a1cd5484a0,Namespace:kube-system,Attempt:0,}" May 13 12:54:15.561878 containerd[1626]: time="2025-05-13T12:54:15.561778478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4qrb,Uid:cda9e517-250e-41ec-95a8-d6fdcb18dc17,Namespace:kube-system,Attempt:0,}" May 13 12:54:15.633190 containerd[1626]: time="2025-05-13T12:54:15.633159605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-q4qhg,Uid:66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6,Namespace:kube-system,Attempt:0,}" May 13 12:54:15.932987 containerd[1626]: time="2025-05-13T12:54:15.932882259Z" level=info msg="connecting to shim eb518ed0d721a0d9023d2d08569e3c34051eadafdbec37d7e7a44f04aff53793" address="unix:///run/containerd/s/640feafe58790659af9ea23f3784f981f27cc24191eaf9661e8565216b91c7bc" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:15.955684 systemd[1]: Started cri-containerd-eb518ed0d721a0d9023d2d08569e3c34051eadafdbec37d7e7a44f04aff53793.scope - libcontainer container eb518ed0d721a0d9023d2d08569e3c34051eadafdbec37d7e7a44f04aff53793. May 13 12:54:15.988061 containerd[1626]: time="2025-05-13T12:54:15.988007757Z" level=info msg="connecting to shim 260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0" address="unix:///run/containerd/s/a84fb6b2387b6f895d9f7b84de7c1365749003d02e012b0ac0f133a86789d0f3" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:15.996315 containerd[1626]: time="2025-05-13T12:54:15.996102080Z" level=info msg="connecting to shim b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b" address="unix:///run/containerd/s/72aaee11808d716cdcb02a6f0623ccea07f9d8869d97f3367f68866ceb2b0647" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:16.006725 containerd[1626]: time="2025-05-13T12:54:16.006656299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sd47s,Uid:1bd212e7-9d77-478e-8c81-33a1cd5484a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb518ed0d721a0d9023d2d08569e3c34051eadafdbec37d7e7a44f04aff53793\"" May 13 12:54:16.007693 systemd[1]: Started cri-containerd-260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0.scope - libcontainer container 260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0. 
May 13 12:54:16.009620 containerd[1626]: time="2025-05-13T12:54:16.009549492Z" level=info msg="CreateContainer within sandbox \"eb518ed0d721a0d9023d2d08569e3c34051eadafdbec37d7e7a44f04aff53793\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 12:54:16.019661 systemd[1]: Started cri-containerd-b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b.scope - libcontainer container b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b. May 13 12:54:16.055403 containerd[1626]: time="2025-05-13T12:54:16.055376583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k4qrb,Uid:cda9e517-250e-41ec-95a8-d6fdcb18dc17,Namespace:kube-system,Attempt:0,} returns sandbox id \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\"" May 13 12:54:16.056504 containerd[1626]: time="2025-05-13T12:54:16.056487486Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 12:54:16.102962 containerd[1626]: time="2025-05-13T12:54:16.102937266Z" level=info msg="Container f7b55685ee48f4f020c0a4b03a5ce625be86697ddbfc807c56ccaeda8978ef9a: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:16.103396 containerd[1626]: time="2025-05-13T12:54:16.103274847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-q4qhg,Uid:66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\"" May 13 12:54:16.138251 containerd[1626]: time="2025-05-13T12:54:16.138223313Z" level=info msg="CreateContainer within sandbox \"eb518ed0d721a0d9023d2d08569e3c34051eadafdbec37d7e7a44f04aff53793\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7b55685ee48f4f020c0a4b03a5ce625be86697ddbfc807c56ccaeda8978ef9a\"" May 13 12:54:16.138854 containerd[1626]: time="2025-05-13T12:54:16.138819045Z" level=info msg="StartContainer for \"f7b55685ee48f4f020c0a4b03a5ce625be86697ddbfc807c56ccaeda8978ef9a\"" May 13 12:54:16.140354 containerd[1626]: time="2025-05-13T12:54:16.140328375Z" level=info msg="connecting to shim f7b55685ee48f4f020c0a4b03a5ce625be86697ddbfc807c56ccaeda8978ef9a" address="unix:///run/containerd/s/640feafe58790659af9ea23f3784f981f27cc24191eaf9661e8565216b91c7bc" protocol=ttrpc version=3 May 13 12:54:16.159709 systemd[1]: Started cri-containerd-f7b55685ee48f4f020c0a4b03a5ce625be86697ddbfc807c56ccaeda8978ef9a.scope - libcontainer container f7b55685ee48f4f020c0a4b03a5ce625be86697ddbfc807c56ccaeda8978ef9a. May 13 12:54:16.198430 containerd[1626]: time="2025-05-13T12:54:16.198089796Z" level=info msg="StartContainer for \"f7b55685ee48f4f020c0a4b03a5ce625be86697ddbfc807c56ccaeda8978ef9a\" returns successfully" May 13 12:54:16.767245 kubelet[2915]: I0513 12:54:16.767175 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sd47s" podStartSLOduration=1.767162321 podStartE2EDuration="1.767162321s" podCreationTimestamp="2025-05-13 12:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:16.76679537 +0000 UTC m=+7.187458000" watchObservedRunningTime="2025-05-13 12:54:16.767162321 +0000 UTC m=+7.187824950" May 13 12:54:20.934392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679518624.mount: Deactivated successfully. 
May 13 12:54:26.142785 containerd[1626]: time="2025-05-13T12:54:26.142745900Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:26.155041 containerd[1626]: time="2025-05-13T12:54:26.155002206Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 12:54:26.350333 containerd[1626]: time="2025-05-13T12:54:26.350267979Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:26.351451 containerd[1626]: time="2025-05-13T12:54:26.351399056Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.294187651s" May 13 12:54:26.351451 containerd[1626]: time="2025-05-13T12:54:26.351429131Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 12:54:26.365671 containerd[1626]: time="2025-05-13T12:54:26.361169121Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 12:54:26.365671 containerd[1626]: time="2025-05-13T12:54:26.361835464Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:54:26.488935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290978395.mount: Deactivated successfully. May 13 12:54:26.496035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869895953.mount: Deactivated successfully. 
May 13 12:54:26.509836 containerd[1626]: time="2025-05-13T12:54:26.496048733Z" level=info msg="Container 75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:26.532269 containerd[1626]: time="2025-05-13T12:54:26.532193058Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\"" May 13 12:54:26.532959 containerd[1626]: time="2025-05-13T12:54:26.532894027Z" level=info msg="StartContainer for \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\"" May 13 12:54:26.538608 containerd[1626]: time="2025-05-13T12:54:26.538573181Z" level=info msg="connecting to shim 75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10" address="unix:///run/containerd/s/72aaee11808d716cdcb02a6f0623ccea07f9d8869d97f3367f68866ceb2b0647" protocol=ttrpc version=3 May 13 12:54:26.586724 systemd[1]: Started cri-containerd-75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10.scope - libcontainer container 75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10. May 13 12:54:26.629216 containerd[1626]: time="2025-05-13T12:54:26.629193019Z" level=info msg="StartContainer for \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" returns successfully" May 13 12:54:26.641130 systemd[1]: cri-containerd-75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10.scope: Deactivated successfully. May 13 12:54:26.716450 containerd[1626]: time="2025-05-13T12:54:26.716422917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" id:\"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" pid:3321 exited_at:{seconds:1747140866 nanos:642122398}" May 13 12:54:26.723712 containerd[1626]: time="2025-05-13T12:54:26.716623610Z" level=info msg="received exit event container_id:\"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" id:\"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" pid:3321 exited_at:{seconds:1747140866 nanos:642122398}" May 13 12:54:27.476461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10-rootfs.mount: Deactivated successfully. May 13 12:54:27.865749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3046738680.mount: Deactivated successfully. May 13 12:54:27.921278 containerd[1626]: time="2025-05-13T12:54:27.921248838Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:54:27.936194 containerd[1626]: time="2025-05-13T12:54:27.935331646Z" level=info msg="Container a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:27.935806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923673071.mount: Deactivated successfully. 
May 13 12:54:27.966204 containerd[1626]: time="2025-05-13T12:54:27.966167704Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\"" May 13 12:54:27.966546 containerd[1626]: time="2025-05-13T12:54:27.966529333Z" level=info msg="StartContainer for \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\"" May 13 12:54:27.969266 containerd[1626]: time="2025-05-13T12:54:27.969212787Z" level=info msg="connecting to shim a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24" address="unix:///run/containerd/s/72aaee11808d716cdcb02a6f0623ccea07f9d8869d97f3367f68866ceb2b0647" protocol=ttrpc version=3 May 13 12:54:27.985694 systemd[1]: Started cri-containerd-a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24.scope - libcontainer container a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24. May 13 12:54:28.022130 containerd[1626]: time="2025-05-13T12:54:28.022099218Z" level=info msg="StartContainer for \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" returns successfully" May 13 12:54:28.042361 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 12:54:28.042967 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 12:54:28.043134 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 12:54:28.045263 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:54:28.055955 systemd[1]: cri-containerd-a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24.scope: Deactivated successfully. May 13 12:54:28.056165 systemd[1]: cri-containerd-a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24.scope: Consumed 15ms CPU time, 5.4M memory peak, 24K read from disk, 2.2M written to disk. May 13 12:54:28.063295 containerd[1626]: time="2025-05-13T12:54:28.057082897Z" level=info msg="received exit event container_id:\"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" id:\"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" pid:3379 exited_at:{seconds:1747140868 nanos:56850032}" May 13 12:54:28.063295 containerd[1626]: time="2025-05-13T12:54:28.057197665Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" id:\"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" pid:3379 exited_at:{seconds:1747140868 nanos:56850032}" May 13 12:54:28.111347 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 12:54:28.399478 containerd[1626]: time="2025-05-13T12:54:28.399440997Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:28.399910 containerd[1626]: time="2025-05-13T12:54:28.399892672Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 12:54:28.400088 containerd[1626]: time="2025-05-13T12:54:28.400073193Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:28.400737 containerd[1626]: time="2025-05-13T12:54:28.400722813Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.039529735s" May 13 12:54:28.400792 containerd[1626]: time="2025-05-13T12:54:28.400782461Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 12:54:28.403109 containerd[1626]: time="2025-05-13T12:54:28.403083474Z" level=info msg="CreateContainer within sandbox \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 12:54:28.408756 containerd[1626]: time="2025-05-13T12:54:28.408714242Z" level=info msg="Container 84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:28.428280 containerd[1626]: time="2025-05-13T12:54:28.428235376Z" level=info msg="CreateContainer within sandbox \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\"" May 13 12:54:28.429005 containerd[1626]: time="2025-05-13T12:54:28.428969255Z" level=info msg="StartContainer for \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\"" May 13 12:54:28.440820 containerd[1626]: time="2025-05-13T12:54:28.440698919Z" level=info msg="connecting to shim 84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058" address="unix:///run/containerd/s/a84fb6b2387b6f895d9f7b84de7c1365749003d02e012b0ac0f133a86789d0f3" protocol=ttrpc version=3 May 13 12:54:28.455751 systemd[1]: Started cri-containerd-84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058.scope - libcontainer container 84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058. 
May 13 12:54:28.480059 containerd[1626]: time="2025-05-13T12:54:28.479997171Z" level=info msg="StartContainer for \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" returns successfully" May 13 12:54:28.919950 containerd[1626]: time="2025-05-13T12:54:28.919857264Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 12:54:28.962722 containerd[1626]: time="2025-05-13T12:54:28.962680717Z" level=info msg="Container 54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:28.989936 containerd[1626]: time="2025-05-13T12:54:28.989893631Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\"" May 13 12:54:28.990336 containerd[1626]: time="2025-05-13T12:54:28.990322645Z" level=info msg="StartContainer for \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\"" May 13 12:54:28.997489 containerd[1626]: time="2025-05-13T12:54:28.992089094Z" level=info msg="connecting to shim 54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64" address="unix:///run/containerd/s/72aaee11808d716cdcb02a6f0623ccea07f9d8869d97f3367f68866ceb2b0647" protocol=ttrpc version=3 May 13 12:54:29.000656 kubelet[2915]: I0513 12:54:29.000595 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-q4qhg" podStartSLOduration=1.695305856 podStartE2EDuration="13.991538252s" podCreationTimestamp="2025-05-13 12:54:15 +0000 UTC" firstStartedPulling="2025-05-13 12:54:16.104969246 +0000 UTC m=+6.525631870" lastFinishedPulling="2025-05-13 12:54:28.401201645 +0000 UTC m=+18.821864266" observedRunningTime="2025-05-13 12:54:28.94060223 +0000 UTC m=+19.361264861" watchObservedRunningTime="2025-05-13 12:54:28.991538252 +0000 UTC m=+19.412200876" May 13 12:54:29.018034 systemd[1]: Started cri-containerd-54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64.scope - libcontainer container 54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64. May 13 12:54:29.067506 containerd[1626]: time="2025-05-13T12:54:29.067469026Z" level=info msg="StartContainer for \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" returns successfully" May 13 12:54:29.097788 systemd[1]: cri-containerd-54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64.scope: Deactivated successfully. May 13 12:54:29.097975 systemd[1]: cri-containerd-54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64.scope: Consumed 19ms CPU time, 4.6M memory peak, 1.2M read from disk. 
May 13 12:54:29.098738 containerd[1626]: time="2025-05-13T12:54:29.098655424Z" level=info msg="received exit event container_id:\"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" id:\"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" pid:3467 exited_at:{seconds:1747140869 nanos:98315633}" May 13 12:54:29.101379 containerd[1626]: time="2025-05-13T12:54:29.101357531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" id:\"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" pid:3467 exited_at:{seconds:1747140869 nanos:98315633}" May 13 12:54:29.121757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64-rootfs.mount: Deactivated successfully. May 13 12:54:29.922198 containerd[1626]: time="2025-05-13T12:54:29.922174085Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 12:54:29.933535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822062207.mount: Deactivated successfully. May 13 12:54:29.933713 containerd[1626]: time="2025-05-13T12:54:29.933515994Z" level=info msg="Container 6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:29.938122 containerd[1626]: time="2025-05-13T12:54:29.938101155Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\"" May 13 12:54:29.938480 containerd[1626]: time="2025-05-13T12:54:29.938446143Z" level=info msg="StartContainer for \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\"" May 13 12:54:29.939426 containerd[1626]: time="2025-05-13T12:54:29.939311836Z" level=info msg="connecting to shim 6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41" address="unix:///run/containerd/s/72aaee11808d716cdcb02a6f0623ccea07f9d8869d97f3367f68866ceb2b0647" protocol=ttrpc version=3 May 13 12:54:29.962640 systemd[1]: Started cri-containerd-6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41.scope - libcontainer container 6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41. May 13 12:54:29.979495 systemd[1]: cri-containerd-6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41.scope: Deactivated successfully. 
May 13 12:54:29.980421 containerd[1626]: time="2025-05-13T12:54:29.980400388Z" level=info msg="received exit event container_id:\"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" id:\"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" pid:3508 exited_at:{seconds:1747140869 nanos:979412074}" May 13 12:54:29.980931 containerd[1626]: time="2025-05-13T12:54:29.980894561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" id:\"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" pid:3508 exited_at:{seconds:1747140869 nanos:979412074}" May 13 12:54:29.981347 containerd[1626]: time="2025-05-13T12:54:29.981325337Z" level=info msg="StartContainer for \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" returns successfully" May 13 12:54:29.993690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41-rootfs.mount: Deactivated successfully. May 13 12:54:30.927038 containerd[1626]: time="2025-05-13T12:54:30.926755987Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 12:54:30.935154 containerd[1626]: time="2025-05-13T12:54:30.935128376Z" level=info msg="Container 4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:30.945344 containerd[1626]: time="2025-05-13T12:54:30.945317665Z" level=info msg="CreateContainer within sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\"" May 13 12:54:30.946032 containerd[1626]: time="2025-05-13T12:54:30.946009714Z" level=info msg="StartContainer for \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\"" May 13 12:54:30.946799 containerd[1626]: time="2025-05-13T12:54:30.946757735Z" level=info msg="connecting to shim 4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b" address="unix:///run/containerd/s/72aaee11808d716cdcb02a6f0623ccea07f9d8869d97f3367f68866ceb2b0647" protocol=ttrpc version=3 May 13 12:54:30.964660 systemd[1]: Started cri-containerd-4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b.scope - libcontainer container 4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b. May 13 12:54:30.986517 containerd[1626]: time="2025-05-13T12:54:30.986456553Z" level=info msg="StartContainer for \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" returns successfully" May 13 12:54:31.131865 containerd[1626]: time="2025-05-13T12:54:31.131841646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" id:\"85814d30f3727d4546b0f5062d8db456e589166f7597cb1470490e15c8a9d49e\" pid:3576 exited_at:{seconds:1747140871 nanos:131527322}" May 13 12:54:31.193804 kubelet[2915]: I0513 12:54:31.193701 2915 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 12:54:31.221988 systemd[1]: Created slice kubepods-burstable-podba538e2a_7d36_4d6a_874e_021e6e363710.slice - libcontainer container kubepods-burstable-podba538e2a_7d36_4d6a_874e_021e6e363710.slice. 
May 13 12:54:31.227224 systemd[1]: Created slice kubepods-burstable-poda342e083_9de3_4302_ae2f_dc655774583b.slice - libcontainer container kubepods-burstable-poda342e083_9de3_4302_ae2f_dc655774583b.slice. May 13 12:54:31.313654 kubelet[2915]: I0513 12:54:31.313626 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a342e083-9de3-4302-ae2f-dc655774583b-config-volume\") pod \"coredns-6f6b679f8f-ng5x4\" (UID: \"a342e083-9de3-4302-ae2f-dc655774583b\") " pod="kube-system/coredns-6f6b679f8f-ng5x4" May 13 12:54:31.313654 kubelet[2915]: I0513 12:54:31.313657 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba538e2a-7d36-4d6a-874e-021e6e363710-config-volume\") pod \"coredns-6f6b679f8f-p8k6v\" (UID: \"ba538e2a-7d36-4d6a-874e-021e6e363710\") " pod="kube-system/coredns-6f6b679f8f-p8k6v" May 13 12:54:31.313771 kubelet[2915]: I0513 12:54:31.313668 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jsnt\" (UniqueName: \"kubernetes.io/projected/ba538e2a-7d36-4d6a-874e-021e6e363710-kube-api-access-5jsnt\") pod \"coredns-6f6b679f8f-p8k6v\" (UID: \"ba538e2a-7d36-4d6a-874e-021e6e363710\") " pod="kube-system/coredns-6f6b679f8f-p8k6v" May 13 12:54:31.313771 kubelet[2915]: I0513 12:54:31.313679 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jphl\" (UniqueName: \"kubernetes.io/projected/a342e083-9de3-4302-ae2f-dc655774583b-kube-api-access-5jphl\") pod \"coredns-6f6b679f8f-ng5x4\" (UID: \"a342e083-9de3-4302-ae2f-dc655774583b\") " pod="kube-system/coredns-6f6b679f8f-ng5x4" May 13 12:54:31.526626 containerd[1626]: time="2025-05-13T12:54:31.526415604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p8k6v,Uid:ba538e2a-7d36-4d6a-874e-021e6e363710,Namespace:kube-system,Attempt:0,}" May 13 12:54:31.532225 containerd[1626]: time="2025-05-13T12:54:31.531627700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ng5x4,Uid:a342e083-9de3-4302-ae2f-dc655774583b,Namespace:kube-system,Attempt:0,}" May 13 12:54:33.367462 systemd-networkd[1540]: cilium_host: Link UP May 13 12:54:33.368756 systemd-networkd[1540]: cilium_net: Link UP May 13 12:54:33.368912 systemd-networkd[1540]: cilium_net: Gained carrier May 13 12:54:33.369020 systemd-networkd[1540]: cilium_host: Gained carrier May 13 12:54:33.474659 systemd-networkd[1540]: cilium_vxlan: Link UP May 13 12:54:33.474855 systemd-networkd[1540]: cilium_vxlan: Gained carrier May 13 12:54:33.554691 systemd-networkd[1540]: cilium_net: Gained IPv6LL May 13 12:54:33.930761 systemd-networkd[1540]: cilium_host: Gained IPv6LL May 13 12:54:34.064577 kernel: NET: Registered PF_ALG protocol family May 13 12:54:34.509484 systemd-networkd[1540]: lxc_health: Link UP May 13 12:54:34.515876 systemd-networkd[1540]: lxc_health: Gained carrier May 13 12:54:35.074611 kernel: eth0: renamed from tmp50fcb May 13 12:54:35.073121 systemd-networkd[1540]: lxc3e7d1e9b46bd: Link UP May 13 12:54:35.081886 systemd-networkd[1540]: lxc3e7d1e9b46bd: Gained carrier May 13 12:54:35.082852 systemd-networkd[1540]: lxcc56b8afbed9f: Link UP May 13 12:54:35.091575 kernel: eth0: renamed from tmp0beea May 13 12:54:35.093155 systemd-networkd[1540]: lxcc56b8afbed9f: Gained carrier May 13 12:54:35.274645 
systemd-networkd[1540]: cilium_vxlan: Gained IPv6LL May 13 12:54:35.595038 kubelet[2915]: I0513 12:54:35.594888 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k4qrb" podStartSLOduration=10.298866809 podStartE2EDuration="20.594878492s" podCreationTimestamp="2025-05-13 12:54:15 +0000 UTC" firstStartedPulling="2025-05-13 12:54:16.056064544 +0000 UTC m=+6.476727164" lastFinishedPulling="2025-05-13 12:54:26.352076221 +0000 UTC m=+16.772738847" observedRunningTime="2025-05-13 12:54:31.939859199 +0000 UTC m=+22.360521830" watchObservedRunningTime="2025-05-13 12:54:35.594878492 +0000 UTC m=+26.015541110" May 13 12:54:36.042679 systemd-networkd[1540]: lxc_health: Gained IPv6LL May 13 12:54:36.874690 systemd-networkd[1540]: lxc3e7d1e9b46bd: Gained IPv6LL May 13 12:54:37.130683 systemd-networkd[1540]: lxcc56b8afbed9f: Gained IPv6LL May 13 12:54:37.754134 containerd[1626]: time="2025-05-13T12:54:37.754049881Z" level=info msg="connecting to shim 50fcb20f8f7faa89736fdd8943ba30797704f3e06e57a037bbbf44b60950ae1d" address="unix:///run/containerd/s/78a1582a1233b277d046624bdfa974d867e04de9a45d23d67cceab5ec417d73e" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:37.783847 systemd[1]: Started cri-containerd-50fcb20f8f7faa89736fdd8943ba30797704f3e06e57a037bbbf44b60950ae1d.scope - libcontainer container 50fcb20f8f7faa89736fdd8943ba30797704f3e06e57a037bbbf44b60950ae1d. May 13 12:54:37.792347 containerd[1626]: time="2025-05-13T12:54:37.787285750Z" level=info msg="connecting to shim 0beeab079614520bd56ed86ae48e1f7f654b313b03cc9f93dd0070572482d565" address="unix:///run/containerd/s/afe72df72993c54c47aaec2d7af2b50b6a21b8171e461b97d6410484485b2726" namespace=k8s.io protocol=ttrpc version=3 May 13 12:54:37.801146 systemd-resolved[1490]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:54:37.816670 systemd[1]: Started cri-containerd-0beeab079614520bd56ed86ae48e1f7f654b313b03cc9f93dd0070572482d565.scope - libcontainer container 0beeab079614520bd56ed86ae48e1f7f654b313b03cc9f93dd0070572482d565. 
May 13 12:54:37.832375 systemd-resolved[1490]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:54:37.842583 containerd[1626]: time="2025-05-13T12:54:37.842550205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ng5x4,Uid:a342e083-9de3-4302-ae2f-dc655774583b,Namespace:kube-system,Attempt:0,} returns sandbox id \"50fcb20f8f7faa89736fdd8943ba30797704f3e06e57a037bbbf44b60950ae1d\"" May 13 12:54:37.844883 containerd[1626]: time="2025-05-13T12:54:37.844865817Z" level=info msg="CreateContainer within sandbox \"50fcb20f8f7faa89736fdd8943ba30797704f3e06e57a037bbbf44b60950ae1d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:54:37.881783 containerd[1626]: time="2025-05-13T12:54:37.881758534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p8k6v,Uid:ba538e2a-7d36-4d6a-874e-021e6e363710,Namespace:kube-system,Attempt:0,} returns sandbox id \"0beeab079614520bd56ed86ae48e1f7f654b313b03cc9f93dd0070572482d565\"" May 13 12:54:37.883835 containerd[1626]: time="2025-05-13T12:54:37.883809584Z" level=info msg="CreateContainer within sandbox \"0beeab079614520bd56ed86ae48e1f7f654b313b03cc9f93dd0070572482d565\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:54:37.926575 containerd[1626]: time="2025-05-13T12:54:37.926521508Z" level=info msg="Container 07528e17ff1fcee26b26fab1cd3c10042882080c1f3c41169dabbadaa567f6ea: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:37.927213 containerd[1626]: time="2025-05-13T12:54:37.926628099Z" level=info msg="Container 457007d683f1a06219de69a188f0361ac7c7522aaee14535094e682213cd6924: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:37.930387 containerd[1626]: time="2025-05-13T12:54:37.930280857Z" level=info msg="CreateContainer within sandbox \"0beeab079614520bd56ed86ae48e1f7f654b313b03cc9f93dd0070572482d565\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07528e17ff1fcee26b26fab1cd3c10042882080c1f3c41169dabbadaa567f6ea\"" May 13 12:54:37.930879 containerd[1626]: time="2025-05-13T12:54:37.930855153Z" level=info msg="StartContainer for \"07528e17ff1fcee26b26fab1cd3c10042882080c1f3c41169dabbadaa567f6ea\"" May 13 12:54:37.931548 containerd[1626]: time="2025-05-13T12:54:37.931527051Z" level=info msg="connecting to shim 07528e17ff1fcee26b26fab1cd3c10042882080c1f3c41169dabbadaa567f6ea" address="unix:///run/containerd/s/afe72df72993c54c47aaec2d7af2b50b6a21b8171e461b97d6410484485b2726" protocol=ttrpc version=3 May 13 12:54:37.935901 containerd[1626]: time="2025-05-13T12:54:37.935826921Z" level=info msg="CreateContainer within sandbox \"50fcb20f8f7faa89736fdd8943ba30797704f3e06e57a037bbbf44b60950ae1d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"457007d683f1a06219de69a188f0361ac7c7522aaee14535094e682213cd6924\"" May 13 12:54:37.936340 containerd[1626]: time="2025-05-13T12:54:37.936298481Z" level=info msg="StartContainer for \"457007d683f1a06219de69a188f0361ac7c7522aaee14535094e682213cd6924\"" May 13 12:54:37.937857 containerd[1626]: time="2025-05-13T12:54:37.937758350Z" level=info msg="connecting to shim 457007d683f1a06219de69a188f0361ac7c7522aaee14535094e682213cd6924" address="unix:///run/containerd/s/78a1582a1233b277d046624bdfa974d867e04de9a45d23d67cceab5ec417d73e" protocol=ttrpc version=3 May 13 12:54:37.955763 systemd[1]: Started cri-containerd-457007d683f1a06219de69a188f0361ac7c7522aaee14535094e682213cd6924.scope - libcontainer container 
457007d683f1a06219de69a188f0361ac7c7522aaee14535094e682213cd6924. May 13 12:54:37.962709 systemd[1]: Started cri-containerd-07528e17ff1fcee26b26fab1cd3c10042882080c1f3c41169dabbadaa567f6ea.scope - libcontainer container 07528e17ff1fcee26b26fab1cd3c10042882080c1f3c41169dabbadaa567f6ea. May 13 12:54:37.987139 containerd[1626]: time="2025-05-13T12:54:37.987084975Z" level=info msg="StartContainer for \"457007d683f1a06219de69a188f0361ac7c7522aaee14535094e682213cd6924\" returns successfully" May 13 12:54:37.995697 containerd[1626]: time="2025-05-13T12:54:37.995598276Z" level=info msg="StartContainer for \"07528e17ff1fcee26b26fab1cd3c10042882080c1f3c41169dabbadaa567f6ea\" returns successfully" May 13 12:54:38.749295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001666070.mount: Deactivated successfully. May 13 12:54:38.980943 kubelet[2915]: I0513 12:54:38.980908 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-p8k6v" podStartSLOduration=23.980898726 podStartE2EDuration="23.980898726s" podCreationTimestamp="2025-05-13 12:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:38.980533562 +0000 UTC m=+29.401196192" watchObservedRunningTime="2025-05-13 12:54:38.980898726 +0000 UTC m=+29.401561359" May 13 12:54:38.995005 kubelet[2915]: I0513 12:54:38.994972 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ng5x4" podStartSLOduration=23.994961669 podStartE2EDuration="23.994961669s" podCreationTimestamp="2025-05-13 12:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:54:38.994320024 +0000 UTC m=+29.414982646" watchObservedRunningTime="2025-05-13 12:54:38.994961669 +0000 UTC m=+29.415624293" May 13 12:55:12.733597 systemd[1]: Started sshd@8-139.178.70.104:22-218.92.0.233:46346.service - OpenSSH per-connection server daemon (218.92.0.233:46346). May 13 12:55:13.866404 sshd-session[4233]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:16.507655 sshd[4231]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:16.801538 sshd-session[4235]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:16.995600 systemd[1]: Started sshd@9-139.178.70.104:22-147.75.109.163:57590.service - OpenSSH per-connection server daemon (147.75.109.163:57590). May 13 12:55:17.038092 sshd[4237]: Accepted publickey for core from 147.75.109.163 port 57590 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:17.039212 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:17.042338 systemd-logind[1596]: New session 10 of user core. May 13 12:55:17.049641 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 12:55:17.517917 sshd[4239]: Connection closed by 147.75.109.163 port 57590 May 13 12:55:17.518341 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 13 12:55:17.524305 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit. May 13 12:55:17.524451 systemd[1]: sshd@9-139.178.70.104:22-147.75.109.163:57590.service: Deactivated successfully. 
May 13 12:55:17.525463 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:55:17.526544 systemd-logind[1596]: Removed session 10. May 13 12:55:18.518993 sshd[4231]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:18.813009 sshd-session[4251]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:20.807370 sshd[4231]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:20.953865 sshd[4231]: Received disconnect from 218.92.0.233 port 46346:11: [preauth] May 13 12:55:20.953865 sshd[4231]: Disconnected from authenticating user root 218.92.0.233 port 46346 [preauth] May 13 12:55:20.955738 systemd[1]: sshd@8-139.178.70.104:22-218.92.0.233:46346.service: Deactivated successfully. May 13 12:55:21.098996 systemd[1]: Started sshd@10-139.178.70.104:22-218.92.0.233:57210.service - OpenSSH per-connection server daemon (218.92.0.233:57210). May 13 12:55:22.194127 sshd-session[4259]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:22.527660 systemd[1]: Started sshd@11-139.178.70.104:22-147.75.109.163:56918.service - OpenSSH per-connection server daemon (147.75.109.163:56918). May 13 12:55:22.569703 sshd[4261]: Accepted publickey for core from 147.75.109.163 port 56918 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:22.570518 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:22.573128 systemd-logind[1596]: New session 11 of user core. May 13 12:55:22.579776 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 12:55:22.684449 sshd[4263]: Connection closed by 147.75.109.163 port 56918 May 13 12:55:22.684920 sshd-session[4261]: pam_unix(sshd:session): session closed for user core May 13 12:55:22.687646 systemd[1]: sshd@11-139.178.70.104:22-147.75.109.163:56918.service: Deactivated successfully. May 13 12:55:22.689468 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:55:22.691154 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit. May 13 12:55:22.692723 systemd-logind[1596]: Removed session 11. May 13 12:55:23.405005 sshd[4257]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:23.687215 sshd-session[4275]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:25.701169 sshd[4257]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:25.984211 sshd-session[4276]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:27.696011 systemd[1]: Started sshd@12-139.178.70.104:22-147.75.109.163:56934.service - OpenSSH per-connection server daemon (147.75.109.163:56934). May 13 12:55:27.742335 sshd[4278]: Accepted publickey for core from 147.75.109.163 port 56934 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:27.743438 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:27.746280 systemd-logind[1596]: New session 12 of user core. May 13 12:55:27.750678 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 13 12:55:27.839503 sshd[4280]: Connection closed by 147.75.109.163 port 56934 May 13 12:55:27.839729 sshd-session[4278]: pam_unix(sshd:session): session closed for user core May 13 12:55:27.842508 systemd[1]: sshd@12-139.178.70.104:22-147.75.109.163:56934.service: Deactivated successfully. May 13 12:55:27.843978 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:55:27.845053 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit. May 13 12:55:27.845893 systemd-logind[1596]: Removed session 12. May 13 12:55:28.273975 sshd[4257]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:28.414846 sshd[4257]: Received disconnect from 218.92.0.233 port 57210:11: [preauth] May 13 12:55:28.415157 sshd[4257]: Disconnected from authenticating user root 218.92.0.233 port 57210 [preauth] May 13 12:55:28.416449 systemd[1]: sshd@10-139.178.70.104:22-218.92.0.233:57210.service: Deactivated successfully. May 13 12:55:29.570984 systemd[1]: Started sshd@13-139.178.70.104:22-218.92.0.233:57222.service - OpenSSH per-connection server daemon (218.92.0.233:57222). May 13 12:55:30.695705 sshd-session[4297]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:32.669409 sshd[4295]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:32.854197 systemd[1]: Started sshd@14-139.178.70.104:22-147.75.109.163:48778.service - OpenSSH per-connection server daemon (147.75.109.163:48778). May 13 12:55:32.893158 sshd[4300]: Accepted publickey for core from 147.75.109.163 port 48778 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:32.894229 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:32.896780 systemd-logind[1596]: New session 13 of user core. May 13 12:55:32.907669 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 12:55:32.995755 sshd[4302]: Connection closed by 147.75.109.163 port 48778 May 13 12:55:32.996617 sshd-session[4300]: pam_unix(sshd:session): session closed for user core May 13 12:55:33.003549 systemd[1]: sshd@14-139.178.70.104:22-147.75.109.163:48778.service: Deactivated successfully. May 13 12:55:33.004766 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:55:33.005454 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit. May 13 12:55:33.006748 systemd-logind[1596]: Removed session 13. May 13 12:55:33.008325 systemd[1]: Started sshd@15-139.178.70.104:22-147.75.109.163:48794.service - OpenSSH per-connection server daemon (147.75.109.163:48794). May 13 12:55:33.048656 sshd[4315]: Accepted publickey for core from 147.75.109.163 port 48794 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:33.049522 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:33.053232 systemd-logind[1596]: New session 14 of user core. May 13 12:55:33.057677 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 12:55:33.171665 sshd[4317]: Connection closed by 147.75.109.163 port 48794 May 13 12:55:33.172648 sshd-session[4315]: pam_unix(sshd:session): session closed for user core May 13 12:55:33.181970 systemd[1]: sshd@15-139.178.70.104:22-147.75.109.163:48794.service: Deactivated successfully. May 13 12:55:33.183781 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:55:33.184607 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit. 
May 13 12:55:33.187120 systemd[1]: Started sshd@16-139.178.70.104:22-147.75.109.163:48810.service - OpenSSH per-connection server daemon (147.75.109.163:48810). May 13 12:55:33.188943 systemd-logind[1596]: Removed session 14. May 13 12:55:33.232154 sshd[4326]: Accepted publickey for core from 147.75.109.163 port 48810 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:33.233021 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:33.236547 systemd-logind[1596]: New session 15 of user core. May 13 12:55:33.245716 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 12:55:33.332822 sshd-session[4298]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:33.335520 sshd[4328]: Connection closed by 147.75.109.163 port 48810 May 13 12:55:33.335872 sshd-session[4326]: pam_unix(sshd:session): session closed for user core May 13 12:55:33.339071 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit. May 13 12:55:33.339141 systemd[1]: sshd@16-139.178.70.104:22-147.75.109.163:48810.service: Deactivated successfully. May 13 12:55:33.340198 systemd[1]: session-15.scope: Deactivated successfully. May 13 12:55:33.340976 systemd-logind[1596]: Removed session 15. May 13 12:55:35.582599 sshd[4295]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:35.879266 sshd-session[4341]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.233 user=root May 13 12:55:37.873190 sshd[4295]: PAM: Permission denied for root from 218.92.0.233 May 13 12:55:38.020592 sshd[4295]: Received disconnect from 218.92.0.233 port 57222:11: [preauth] May 13 12:55:38.020592 sshd[4295]: Disconnected from authenticating user root 218.92.0.233 port 57222 [preauth] May 13 12:55:38.020823 systemd[1]: sshd@13-139.178.70.104:22-218.92.0.233:57222.service: Deactivated successfully. May 13 12:55:38.345747 systemd[1]: Started sshd@17-139.178.70.104:22-147.75.109.163:58792.service - OpenSSH per-connection server daemon (147.75.109.163:58792). May 13 12:55:38.477445 sshd[4345]: Accepted publickey for core from 147.75.109.163 port 58792 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:38.478330 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:38.481336 systemd-logind[1596]: New session 16 of user core. May 13 12:55:38.487679 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 12:55:38.634874 sshd[4347]: Connection closed by 147.75.109.163 port 58792 May 13 12:55:38.635330 sshd-session[4345]: pam_unix(sshd:session): session closed for user core May 13 12:55:38.637365 systemd[1]: sshd@17-139.178.70.104:22-147.75.109.163:58792.service: Deactivated successfully. May 13 12:55:38.638446 systemd[1]: session-16.scope: Deactivated successfully. May 13 12:55:38.638986 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit. May 13 12:55:38.639984 systemd-logind[1596]: Removed session 16. May 13 12:55:43.646562 systemd[1]: Started sshd@18-139.178.70.104:22-147.75.109.163:58806.service - OpenSSH per-connection server daemon (147.75.109.163:58806). 
May 13 12:55:43.687758 sshd[4359]: Accepted publickey for core from 147.75.109.163 port 58806 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:43.688761 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:43.691325 systemd-logind[1596]: New session 17 of user core. May 13 12:55:43.702764 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 12:55:43.790115 sshd[4361]: Connection closed by 147.75.109.163 port 58806 May 13 12:55:43.790541 sshd-session[4359]: pam_unix(sshd:session): session closed for user core May 13 12:55:43.800421 systemd[1]: sshd@18-139.178.70.104:22-147.75.109.163:58806.service: Deactivated successfully. May 13 12:55:43.801673 systemd[1]: session-17.scope: Deactivated successfully. May 13 12:55:43.802625 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit. May 13 12:55:43.803945 systemd[1]: Started sshd@19-139.178.70.104:22-147.75.109.163:58820.service - OpenSSH per-connection server daemon (147.75.109.163:58820). May 13 12:55:43.805116 systemd-logind[1596]: Removed session 17. May 13 12:55:43.842612 sshd[4372]: Accepted publickey for core from 147.75.109.163 port 58820 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:43.843989 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:43.846658 systemd-logind[1596]: New session 18 of user core. May 13 12:55:43.853650 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 12:55:44.196950 sshd[4374]: Connection closed by 147.75.109.163 port 58820 May 13 12:55:44.197628 sshd-session[4372]: pam_unix(sshd:session): session closed for user core May 13 12:55:44.209600 systemd[1]: sshd@19-139.178.70.104:22-147.75.109.163:58820.service: Deactivated successfully. May 13 12:55:44.211268 systemd[1]: session-18.scope: Deactivated successfully. May 13 12:55:44.212295 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit. May 13 12:55:44.213961 systemd[1]: Started sshd@20-139.178.70.104:22-147.75.109.163:58834.service - OpenSSH per-connection server daemon (147.75.109.163:58834). May 13 12:55:44.214912 systemd-logind[1596]: Removed session 18. May 13 12:55:44.277769 sshd[4384]: Accepted publickey for core from 147.75.109.163 port 58834 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:44.278675 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:44.284466 systemd-logind[1596]: New session 19 of user core. May 13 12:55:44.292718 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 12:55:45.560827 sshd[4386]: Connection closed by 147.75.109.163 port 58834 May 13 12:55:45.561189 sshd-session[4384]: pam_unix(sshd:session): session closed for user core May 13 12:55:45.572693 systemd[1]: sshd@20-139.178.70.104:22-147.75.109.163:58834.service: Deactivated successfully. May 13 12:55:45.574769 systemd[1]: session-19.scope: Deactivated successfully. May 13 12:55:45.575005 systemd[1]: session-19.scope: Consumed 301ms CPU time, 66.2M memory peak. May 13 12:55:45.575686 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit. May 13 12:55:45.577872 systemd[1]: Started sshd@21-139.178.70.104:22-147.75.109.163:58846.service - OpenSSH per-connection server daemon (147.75.109.163:58846). May 13 12:55:45.580169 systemd-logind[1596]: Removed session 19. 
May 13 12:55:45.622133 sshd[4403]: Accepted publickey for core from 147.75.109.163 port 58846 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:45.623168 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:45.627144 systemd-logind[1596]: New session 20 of user core. May 13 12:55:45.631635 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 12:55:45.819686 sshd[4405]: Connection closed by 147.75.109.163 port 58846 May 13 12:55:45.820448 sshd-session[4403]: pam_unix(sshd:session): session closed for user core May 13 12:55:45.825868 systemd[1]: sshd@21-139.178.70.104:22-147.75.109.163:58846.service: Deactivated successfully. May 13 12:55:45.827344 systemd[1]: session-20.scope: Deactivated successfully. May 13 12:55:45.828587 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit. May 13 12:55:45.830433 systemd[1]: Started sshd@22-139.178.70.104:22-147.75.109.163:58850.service - OpenSSH per-connection server daemon (147.75.109.163:58850). May 13 12:55:45.831925 systemd-logind[1596]: Removed session 20. May 13 12:55:45.868477 sshd[4415]: Accepted publickey for core from 147.75.109.163 port 58850 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:45.869464 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:45.872773 systemd-logind[1596]: New session 21 of user core. May 13 12:55:45.884728 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 12:55:45.980413 sshd[4417]: Connection closed by 147.75.109.163 port 58850 May 13 12:55:45.980776 sshd-session[4415]: pam_unix(sshd:session): session closed for user core May 13 12:55:45.982957 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit. May 13 12:55:45.983582 systemd[1]: sshd@22-139.178.70.104:22-147.75.109.163:58850.service: Deactivated successfully. May 13 12:55:45.984747 systemd[1]: session-21.scope: Deactivated successfully. May 13 12:55:45.985748 systemd-logind[1596]: Removed session 21. May 13 12:55:50.991959 systemd[1]: Started sshd@23-139.178.70.104:22-147.75.109.163:42884.service - OpenSSH per-connection server daemon (147.75.109.163:42884). May 13 12:55:51.032018 sshd[4435]: Accepted publickey for core from 147.75.109.163 port 42884 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:51.032957 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:51.036446 systemd-logind[1596]: New session 22 of user core. May 13 12:55:51.041728 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 12:55:51.130437 sshd[4437]: Connection closed by 147.75.109.163 port 42884 May 13 12:55:51.130794 sshd-session[4435]: pam_unix(sshd:session): session closed for user core May 13 12:55:51.133080 systemd[1]: sshd@23-139.178.70.104:22-147.75.109.163:42884.service: Deactivated successfully. May 13 12:55:51.134072 systemd[1]: session-22.scope: Deactivated successfully. May 13 12:55:51.134681 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit. May 13 12:55:51.135445 systemd-logind[1596]: Removed session 22. May 13 12:55:56.144889 systemd[1]: Started sshd@24-139.178.70.104:22-147.75.109.163:42898.service - OpenSSH per-connection server daemon (147.75.109.163:42898). 
May 13 12:55:56.192082 sshd[4449]: Accepted publickey for core from 147.75.109.163 port 42898 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:55:56.192987 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:55:56.195684 systemd-logind[1596]: New session 23 of user core. May 13 12:55:56.203685 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 12:55:56.290476 sshd[4451]: Connection closed by 147.75.109.163 port 42898 May 13 12:55:56.290848 sshd-session[4449]: pam_unix(sshd:session): session closed for user core May 13 12:55:56.292501 systemd[1]: sshd@24-139.178.70.104:22-147.75.109.163:42898.service: Deactivated successfully. May 13 12:55:56.293821 systemd[1]: session-23.scope: Deactivated successfully. May 13 12:55:56.294450 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit. May 13 12:55:56.295771 systemd-logind[1596]: Removed session 23. May 13 12:56:01.302916 systemd[1]: Started sshd@25-139.178.70.104:22-147.75.109.163:47880.service - OpenSSH per-connection server daemon (147.75.109.163:47880). May 13 12:56:01.349233 sshd[4463]: Accepted publickey for core from 147.75.109.163 port 47880 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:56:01.350109 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:01.354639 systemd-logind[1596]: New session 24 of user core. May 13 12:56:01.360658 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 12:56:01.480919 sshd[4465]: Connection closed by 147.75.109.163 port 47880 May 13 12:56:01.481362 sshd-session[4463]: pam_unix(sshd:session): session closed for user core May 13 12:56:01.492406 systemd[1]: sshd@25-139.178.70.104:22-147.75.109.163:47880.service: Deactivated successfully. May 13 12:56:01.493911 systemd[1]: session-24.scope: Deactivated successfully. May 13 12:56:01.495073 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit. May 13 12:56:01.497510 systemd[1]: Started sshd@26-139.178.70.104:22-147.75.109.163:47890.service - OpenSSH per-connection server daemon (147.75.109.163:47890). May 13 12:56:01.498272 systemd-logind[1596]: Removed session 24. May 13 12:56:01.562617 sshd[4476]: Accepted publickey for core from 147.75.109.163 port 47890 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:56:01.563601 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:01.567874 systemd-logind[1596]: New session 25 of user core. May 13 12:56:01.571657 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 13 12:56:02.974255 containerd[1626]: time="2025-05-13T12:56:02.973833278Z" level=info msg="StopContainer for \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" with timeout 30 (s)" May 13 12:56:02.981586 containerd[1626]: time="2025-05-13T12:56:02.981318434Z" level=info msg="Stop container \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" with signal terminated" May 13 12:56:02.996821 containerd[1626]: time="2025-05-13T12:56:02.994946168Z" level=info msg="received exit event container_id:\"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" id:\"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" pid:3431 exited_at:{seconds:1747140962 nanos:994547465}" May 13 12:56:02.996821 containerd[1626]: time="2025-05-13T12:56:02.995954545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" id:\"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" pid:3431 exited_at:{seconds:1747140962 nanos:994547465}" May 13 12:56:02.997009 systemd[1]: cri-containerd-84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058.scope: Deactivated successfully. May 13 12:56:03.019064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058-rootfs.mount: Deactivated successfully. May 13 12:56:03.023898 containerd[1626]: time="2025-05-13T12:56:03.023864101Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:56:03.025998 containerd[1626]: time="2025-05-13T12:56:03.025908234Z" level=info msg="StopContainer for \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" returns successfully" May 13 12:56:03.026734 containerd[1626]: time="2025-05-13T12:56:03.026490462Z" level=info msg="StopPodSandbox for \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\"" May 13 12:56:03.029426 containerd[1626]: time="2025-05-13T12:56:03.029134525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" id:\"ecfbf4e0ee5e03100197d3163a8560e7db46d6f0ac4f8199403bfa84c0400e65\" pid:4508 exited_at:{seconds:1747140963 nanos:28539134}" May 13 12:56:03.032734 containerd[1626]: time="2025-05-13T12:56:03.031818222Z" level=info msg="StopContainer for \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" with timeout 2 (s)" May 13 12:56:03.032734 containerd[1626]: time="2025-05-13T12:56:03.032102087Z" level=info msg="Stop container \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" with signal terminated" May 13 12:56:03.034907 containerd[1626]: time="2025-05-13T12:56:03.034880961Z" level=info msg="Container to stop \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:03.042723 systemd-networkd[1540]: lxc_health: Link DOWN May 13 12:56:03.042727 systemd-networkd[1540]: lxc_health: Lost carrier May 13 12:56:03.046041 systemd[1]: cri-containerd-260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0.scope: Deactivated successfully. 
May 13 12:56:03.049355 containerd[1626]: time="2025-05-13T12:56:03.049189403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" id:\"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" pid:3095 exit_status:137 exited_at:{seconds:1747140963 nanos:47900616}" May 13 12:56:03.068265 systemd[1]: cri-containerd-4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b.scope: Deactivated successfully. May 13 12:56:03.068521 systemd[1]: cri-containerd-4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b.scope: Consumed 4.461s CPU time, 243.4M memory peak, 124.4M read from disk, 13.3M written to disk. May 13 12:56:03.069799 containerd[1626]: time="2025-05-13T12:56:03.069727642Z" level=info msg="received exit event container_id:\"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" id:\"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" pid:3545 exited_at:{seconds:1747140963 nanos:69494601}" May 13 12:56:03.078889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0-rootfs.mount: Deactivated successfully. May 13 12:56:03.091777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b-rootfs.mount: Deactivated successfully. May 13 12:56:03.138970 containerd[1626]: time="2025-05-13T12:56:03.138927337Z" level=info msg="shim disconnected" id=260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0 namespace=k8s.io May 13 12:56:03.139853 containerd[1626]: time="2025-05-13T12:56:03.139571257Z" level=warning msg="cleaning up after shim disconnected" id=260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0 namespace=k8s.io May 13 12:56:03.142823 containerd[1626]: time="2025-05-13T12:56:03.139583534Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 12:56:03.154876 containerd[1626]: time="2025-05-13T12:56:03.154855196Z" level=info msg="StopContainer for \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" returns successfully" May 13 12:56:03.155147 containerd[1626]: time="2025-05-13T12:56:03.155133449Z" level=info msg="StopPodSandbox for \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\"" May 13 12:56:03.155176 containerd[1626]: time="2025-05-13T12:56:03.155169555Z" level=info msg="Container to stop \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:03.155195 containerd[1626]: time="2025-05-13T12:56:03.155178874Z" level=info msg="Container to stop \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:03.155195 containerd[1626]: time="2025-05-13T12:56:03.155188968Z" level=info msg="Container to stop \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:03.155195 containerd[1626]: time="2025-05-13T12:56:03.155193972Z" level=info msg="Container to stop \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:03.155260 containerd[1626]: time="2025-05-13T12:56:03.155198896Z" level=info msg="Container to stop 
\"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:56:03.159515 systemd[1]: cri-containerd-b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b.scope: Deactivated successfully. May 13 12:56:03.176199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b-rootfs.mount: Deactivated successfully. May 13 12:56:03.183284 containerd[1626]: time="2025-05-13T12:56:03.183161188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" id:\"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" pid:3545 exited_at:{seconds:1747140963 nanos:69494601}" May 13 12:56:03.183284 containerd[1626]: time="2025-05-13T12:56:03.183205706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" id:\"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" pid:3104 exit_status:137 exited_at:{seconds:1747140963 nanos:160376904}" May 13 12:56:03.184541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0-shm.mount: Deactivated successfully. May 13 12:56:03.197719 containerd[1626]: time="2025-05-13T12:56:03.189009642Z" level=info msg="received exit event sandbox_id:\"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" exit_status:137 exited_at:{seconds:1747140963 nanos:47900616}" May 13 12:56:03.197719 containerd[1626]: time="2025-05-13T12:56:03.193227293Z" level=info msg="TearDown network for sandbox \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" successfully" May 13 12:56:03.197719 containerd[1626]: time="2025-05-13T12:56:03.193250964Z" level=info msg="StopPodSandbox for \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" returns successfully" May 13 12:56:03.252114 containerd[1626]: time="2025-05-13T12:56:03.251973509Z" level=info msg="received exit event sandbox_id:\"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" exit_status:137 exited_at:{seconds:1747140963 nanos:160376904}" May 13 12:56:03.252612 containerd[1626]: time="2025-05-13T12:56:03.252259989Z" level=info msg="shim disconnected" id=b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b namespace=k8s.io May 13 12:56:03.252612 containerd[1626]: time="2025-05-13T12:56:03.252279056Z" level=warning msg="cleaning up after shim disconnected" id=b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b namespace=k8s.io May 13 12:56:03.252612 containerd[1626]: time="2025-05-13T12:56:03.252288088Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 12:56:03.254873 containerd[1626]: time="2025-05-13T12:56:03.254645985Z" level=info msg="TearDown network for sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" successfully" May 13 12:56:03.254873 containerd[1626]: time="2025-05-13T12:56:03.254767355Z" level=info msg="StopPodSandbox for \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" returns successfully" May 13 12:56:03.261254 kubelet[2915]: I0513 12:56:03.261203 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7jhm\" (UniqueName: \"kubernetes.io/projected/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-kube-api-access-v7jhm\") pod 
\"66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6\" (UID: \"66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6\") " May 13 12:56:03.261254 kubelet[2915]: I0513 12:56:03.261235 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-cilium-config-path\") pod \"66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6\" (UID: \"66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6\") " May 13 12:56:03.298502 kubelet[2915]: I0513 12:56:03.298480 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6" (UID: "66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 12:56:03.347442 kubelet[2915]: I0513 12:56:03.347401 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-kube-api-access-v7jhm" (OuterVolumeSpecName: "kube-api-access-v7jhm") pod "66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6" (UID: "66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6"). InnerVolumeSpecName "kube-api-access-v7jhm". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 12:56:03.361706 kubelet[2915]: I0513 12:56:03.361687 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-net\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.377647 kubelet[2915]: I0513 12:56:03.377610 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.378055 kubelet[2915]: I0513 12:56:03.378038 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-bpf-maps\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378102 kubelet[2915]: I0513 12:56:03.378057 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-lib-modules\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378102 kubelet[2915]: I0513 12:56:03.378073 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hubble-tls\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378102 kubelet[2915]: I0513 12:56:03.378088 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-cgroup\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378102 kubelet[2915]: I0513 12:56:03.378102 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cda9e517-250e-41ec-95a8-d6fdcb18dc17-clustermesh-secrets\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378215 kubelet[2915]: I0513 12:56:03.378117 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-config-path\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378215 kubelet[2915]: I0513 12:56:03.378127 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-xtables-lock\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378215 kubelet[2915]: I0513 12:56:03.378138 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cni-path\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378215 kubelet[2915]: I0513 12:56:03.378149 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-kernel\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378215 kubelet[2915]: I0513 12:56:03.378187 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7rrd\" (UniqueName: \"kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-kube-api-access-r7rrd\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: 
\"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378215 kubelet[2915]: I0513 12:56:03.378201 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hostproc\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378353 kubelet[2915]: I0513 12:56:03.378212 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-run\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378353 kubelet[2915]: I0513 12:56:03.378222 2915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-etc-cni-netd\") pod \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\" (UID: \"cda9e517-250e-41ec-95a8-d6fdcb18dc17\") " May 13 12:56:03.378353 kubelet[2915]: I0513 12:56:03.378253 2915 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-v7jhm\" (UniqueName: \"kubernetes.io/projected/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-kube-api-access-v7jhm\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.378353 kubelet[2915]: I0513 12:56:03.378261 2915 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.378353 kubelet[2915]: I0513 12:56:03.378268 2915 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.378353 kubelet[2915]: I0513 12:56:03.378286 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.378482 kubelet[2915]: I0513 12:56:03.378301 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.378482 kubelet[2915]: I0513 12:56:03.378312 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.378604 kubelet[2915]: I0513 12:56:03.378544 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cni-path" (OuterVolumeSpecName: "cni-path") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.378604 kubelet[2915]: I0513 12:56:03.378578 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.397372 kubelet[2915]: I0513 12:56:03.397325 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.397605 kubelet[2915]: I0513 12:56:03.397572 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hostproc" (OuterVolumeSpecName: "hostproc") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.397605 kubelet[2915]: I0513 12:56:03.397581 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.397605 kubelet[2915]: I0513 12:56:03.397589 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 12:56:03.432352 kubelet[2915]: I0513 12:56:03.432304 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 12:56:03.439678 kubelet[2915]: I0513 12:56:03.439644 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 12:56:03.439811 kubelet[2915]: I0513 12:56:03.439728 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda9e517-250e-41ec-95a8-d6fdcb18dc17-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 12:56:03.447491 kubelet[2915]: I0513 12:56:03.447451 2915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-kube-api-access-r7rrd" (OuterVolumeSpecName: "kube-api-access-r7rrd") pod "cda9e517-250e-41ec-95a8-d6fdcb18dc17" (UID: "cda9e517-250e-41ec-95a8-d6fdcb18dc17"). InnerVolumeSpecName "kube-api-access-r7rrd". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 12:56:03.478535 kubelet[2915]: I0513 12:56:03.478507 2915 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478535 kubelet[2915]: I0513 12:56:03.478531 2915 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478535 kubelet[2915]: I0513 12:56:03.478538 2915 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478544 2915 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cda9e517-250e-41ec-95a8-d6fdcb18dc17-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478551 2915 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478570 2915 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478575 2915 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478581 2915 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r7rrd\" (UniqueName: \"kubernetes.io/projected/cda9e517-250e-41ec-95a8-d6fdcb18dc17-kube-api-access-r7rrd\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478585 2915 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478590 2915 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478684 kubelet[2915]: I0513 12:56:03.478594 2915 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478810 kubelet[2915]: I0513 12:56:03.478598 2915 
reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.478810 kubelet[2915]: I0513 12:56:03.478603 2915 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cda9e517-250e-41ec-95a8-d6fdcb18dc17-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 12:56:03.684246 systemd[1]: Removed slice kubepods-burstable-podcda9e517_250e_41ec_95a8_d6fdcb18dc17.slice - libcontainer container kubepods-burstable-podcda9e517_250e_41ec_95a8_d6fdcb18dc17.slice. May 13 12:56:03.684626 systemd[1]: kubepods-burstable-podcda9e517_250e_41ec_95a8_d6fdcb18dc17.slice: Consumed 4.525s CPU time, 244.4M memory peak, 125.7M read from disk, 15.6M written to disk. May 13 12:56:03.691657 systemd[1]: Removed slice kubepods-besteffort-pod66c79ef7_9826_4987_ac5b_3ec5ca1f4ad6.slice - libcontainer container kubepods-besteffort-pod66c79ef7_9826_4987_ac5b_3ec5ca1f4ad6.slice. May 13 12:56:04.018519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b-shm.mount: Deactivated successfully. May 13 12:56:04.018595 systemd[1]: var-lib-kubelet-pods-66c79ef7\x2d9826\x2d4987\x2dac5b\x2d3ec5ca1f4ad6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7jhm.mount: Deactivated successfully. May 13 12:56:04.018659 systemd[1]: var-lib-kubelet-pods-cda9e517\x2d250e\x2d41ec\x2d95a8\x2dd6fdcb18dc17-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr7rrd.mount: Deactivated successfully. May 13 12:56:04.018701 systemd[1]: var-lib-kubelet-pods-cda9e517\x2d250e\x2d41ec\x2d95a8\x2dd6fdcb18dc17-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 12:56:04.018743 systemd[1]: var-lib-kubelet-pods-cda9e517\x2d250e\x2d41ec\x2d95a8\x2dd6fdcb18dc17-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 13 12:56:04.085935 kubelet[2915]: I0513 12:56:04.085898 2915 scope.go:117] "RemoveContainer" containerID="84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058" May 13 12:56:04.094878 containerd[1626]: time="2025-05-13T12:56:04.094832960Z" level=info msg="RemoveContainer for \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\"" May 13 12:56:04.129871 containerd[1626]: time="2025-05-13T12:56:04.129791105Z" level=info msg="RemoveContainer for \"84d7429fd0c1947e14ea50b288ed2e55aa8fa7610b08d368af927deff18f1058\" returns successfully" May 13 12:56:04.130043 kubelet[2915]: I0513 12:56:04.130019 2915 scope.go:117] "RemoveContainer" containerID="4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b" May 13 12:56:04.131102 containerd[1626]: time="2025-05-13T12:56:04.131044720Z" level=info msg="RemoveContainer for \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\"" May 13 12:56:04.147676 containerd[1626]: time="2025-05-13T12:56:04.147602856Z" level=info msg="RemoveContainer for \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" returns successfully" May 13 12:56:04.147847 kubelet[2915]: I0513 12:56:04.147758 2915 scope.go:117] "RemoveContainer" containerID="6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41" May 13 12:56:04.148944 containerd[1626]: time="2025-05-13T12:56:04.148926373Z" level=info msg="RemoveContainer for \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\"" May 13 12:56:04.168097 containerd[1626]: time="2025-05-13T12:56:04.168075976Z" level=info msg="RemoveContainer for \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" returns successfully" May 13 12:56:04.168332 kubelet[2915]: I0513 12:56:04.168315 2915 scope.go:117] "RemoveContainer" containerID="54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64" May 13 12:56:04.170151 containerd[1626]: time="2025-05-13T12:56:04.170136810Z" level=info msg="RemoveContainer for \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\"" May 13 12:56:04.183513 containerd[1626]: time="2025-05-13T12:56:04.183494146Z" level=info msg="RemoveContainer for \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" returns successfully" May 13 12:56:04.183822 kubelet[2915]: I0513 12:56:04.183803 2915 scope.go:117] "RemoveContainer" containerID="a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24" May 13 12:56:04.185233 containerd[1626]: time="2025-05-13T12:56:04.185215299Z" level=info msg="RemoveContainer for \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\"" May 13 12:56:04.200886 containerd[1626]: time="2025-05-13T12:56:04.200856363Z" level=info msg="RemoveContainer for \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" returns successfully" May 13 12:56:04.201027 kubelet[2915]: I0513 12:56:04.201002 2915 scope.go:117] "RemoveContainer" containerID="75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10" May 13 12:56:04.207713 containerd[1626]: time="2025-05-13T12:56:04.207691362Z" level=info msg="RemoveContainer for \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\"" May 13 12:56:04.227638 containerd[1626]: time="2025-05-13T12:56:04.227606630Z" level=info msg="RemoveContainer for \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" returns successfully" May 13 12:56:04.228110 kubelet[2915]: I0513 12:56:04.228099 2915 scope.go:117] "RemoveContainer" 
containerID="4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b" May 13 12:56:04.228341 containerd[1626]: time="2025-05-13T12:56:04.228314193Z" level=error msg="ContainerStatus for \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\": not found" May 13 12:56:04.236449 kubelet[2915]: E0513 12:56:04.236401 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\": not found" containerID="4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b" May 13 12:56:04.249000 kubelet[2915]: I0513 12:56:04.248895 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b"} err="failed to get container status \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a97c20deefb02c4848fc079b2ef050f7936ddbf0ac4cc31d4ef6f005937cd0b\": not found" May 13 12:56:04.249000 kubelet[2915]: I0513 12:56:04.249004 2915 scope.go:117] "RemoveContainer" containerID="6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41" May 13 12:56:04.249281 containerd[1626]: time="2025-05-13T12:56:04.249249173Z" level=error msg="ContainerStatus for \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\": not found" May 13 12:56:04.249432 kubelet[2915]: E0513 12:56:04.249354 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\": not found" containerID="6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41" May 13 12:56:04.249432 kubelet[2915]: I0513 12:56:04.249370 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41"} err="failed to get container status \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f4b1f02f13830b6fa362a6ee42cc57f15a641eb3684cbb20e95f59fad6d9a41\": not found" May 13 12:56:04.249432 kubelet[2915]: I0513 12:56:04.249380 2915 scope.go:117] "RemoveContainer" containerID="54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64" May 13 12:56:04.249606 containerd[1626]: time="2025-05-13T12:56:04.249568133Z" level=error msg="ContainerStatus for \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\": not found" May 13 12:56:04.249654 kubelet[2915]: E0513 12:56:04.249638 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\": not found" 
containerID="54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64" May 13 12:56:04.249680 kubelet[2915]: I0513 12:56:04.249656 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64"} err="failed to get container status \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\": rpc error: code = NotFound desc = an error occurred when try to find container \"54e8c83a77318070c52d2128a233de0a6e4864290fc6994dea8314227ec8ac64\": not found" May 13 12:56:04.249680 kubelet[2915]: I0513 12:56:04.249668 2915 scope.go:117] "RemoveContainer" containerID="a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24" May 13 12:56:04.249782 containerd[1626]: time="2025-05-13T12:56:04.249763815Z" level=error msg="ContainerStatus for \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\": not found" May 13 12:56:04.249897 kubelet[2915]: E0513 12:56:04.249834 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\": not found" containerID="a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24" May 13 12:56:04.249897 kubelet[2915]: I0513 12:56:04.249846 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24"} err="failed to get container status \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8215698e1083b24122c76e1ddaf1a8d17c57b33f704a201b23285611ba8bf24\": not found" May 13 12:56:04.249897 kubelet[2915]: I0513 12:56:04.249854 2915 scope.go:117] "RemoveContainer" containerID="75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10" May 13 12:56:04.250039 containerd[1626]: time="2025-05-13T12:56:04.250005143Z" level=error msg="ContainerStatus for \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\": not found" May 13 12:56:04.250092 kubelet[2915]: E0513 12:56:04.250073 2915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\": not found" containerID="75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10" May 13 12:56:04.250137 kubelet[2915]: I0513 12:56:04.250092 2915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10"} err="failed to get container status \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\": rpc error: code = NotFound desc = an error occurred when try to find container \"75340c00ae151adb05d7267393669b1877718a077b4a353af8108582d504cf10\": not found" May 13 12:56:04.849242 kubelet[2915]: E0513 12:56:04.849206 2915 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 12:56:04.917597 sshd[4478]: Connection closed by 147.75.109.163 port 47890 May 13 12:56:04.921154 sshd-session[4476]: pam_unix(sshd:session): session closed for user core May 13 12:56:04.926815 systemd[1]: sshd@26-139.178.70.104:22-147.75.109.163:47890.service: Deactivated successfully. May 13 12:56:04.927903 systemd[1]: session-25.scope: Deactivated successfully. May 13 12:56:04.928403 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit. May 13 12:56:04.930434 systemd[1]: Started sshd@27-139.178.70.104:22-147.75.109.163:47896.service - OpenSSH per-connection server daemon (147.75.109.163:47896). May 13 12:56:04.930897 systemd-logind[1596]: Removed session 25. May 13 12:56:04.996894 sshd[4629]: Accepted publickey for core from 147.75.109.163 port 47896 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:56:04.997980 sshd-session[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:05.002015 systemd-logind[1596]: New session 26 of user core. May 13 12:56:05.009695 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 12:56:05.328572 sshd[4631]: Connection closed by 147.75.109.163 port 47896 May 13 12:56:05.328693 sshd-session[4629]: pam_unix(sshd:session): session closed for user core May 13 12:56:05.334432 systemd[1]: sshd@27-139.178.70.104:22-147.75.109.163:47896.service: Deactivated successfully. May 13 12:56:05.336372 systemd[1]: session-26.scope: Deactivated successfully. May 13 12:56:05.337661 systemd-logind[1596]: Session 26 logged out. Waiting for processes to exit. May 13 12:56:05.341825 systemd[1]: Started sshd@28-139.178.70.104:22-147.75.109.163:47912.service - OpenSSH per-connection server daemon (147.75.109.163:47912). May 13 12:56:05.344243 systemd-logind[1596]: Removed session 26. 
May 13 12:56:05.356532 kubelet[2915]: E0513 12:56:05.356437 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cda9e517-250e-41ec-95a8-d6fdcb18dc17" containerName="mount-cgroup" May 13 12:56:05.356532 kubelet[2915]: E0513 12:56:05.356519 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6" containerName="cilium-operator" May 13 12:56:05.356532 kubelet[2915]: E0513 12:56:05.356526 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cda9e517-250e-41ec-95a8-d6fdcb18dc17" containerName="apply-sysctl-overwrites" May 13 12:56:05.356532 kubelet[2915]: E0513 12:56:05.356531 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cda9e517-250e-41ec-95a8-d6fdcb18dc17" containerName="mount-bpf-fs" May 13 12:56:05.356532 kubelet[2915]: E0513 12:56:05.356535 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cda9e517-250e-41ec-95a8-d6fdcb18dc17" containerName="clean-cilium-state" May 13 12:56:05.356532 kubelet[2915]: E0513 12:56:05.356539 2915 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cda9e517-250e-41ec-95a8-d6fdcb18dc17" containerName="cilium-agent" May 13 12:56:05.356747 kubelet[2915]: I0513 12:56:05.356599 2915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda9e517-250e-41ec-95a8-d6fdcb18dc17" containerName="cilium-agent" May 13 12:56:05.356747 kubelet[2915]: I0513 12:56:05.356609 2915 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6" containerName="cilium-operator" May 13 12:56:05.367853 systemd[1]: Created slice kubepods-burstable-pod07f71780_5a14_4ce3_9ca5_e51d1c9c0e53.slice - libcontainer container kubepods-burstable-pod07f71780_5a14_4ce3_9ca5_e51d1c9c0e53.slice. 
May 13 12:56:05.390339 kubelet[2915]: I0513 12:56:05.390309 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-cilium-run\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390339 kubelet[2915]: I0513 12:56:05.390339 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-etc-cni-netd\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390483 kubelet[2915]: I0513 12:56:05.390353 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-clustermesh-secrets\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390483 kubelet[2915]: I0513 12:56:05.390404 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-lib-modules\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390483 kubelet[2915]: I0513 12:56:05.390420 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-xtables-lock\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390483 kubelet[2915]: I0513 12:56:05.390466 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-cilium-config-path\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390611 kubelet[2915]: I0513 12:56:05.390483 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-host-proc-sys-kernel\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390611 kubelet[2915]: I0513 12:56:05.390498 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jzs\" (UniqueName: \"kubernetes.io/projected/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-kube-api-access-m6jzs\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390611 kubelet[2915]: I0513 12:56:05.390534 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-bpf-maps\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390680 kubelet[2915]: I0513 12:56:05.390550 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-host-proc-sys-net\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390680 kubelet[2915]: I0513 12:56:05.390660 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-cni-path\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390680 kubelet[2915]: I0513 12:56:05.390669 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-hubble-tls\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390739 kubelet[2915]: I0513 12:56:05.390681 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-cilium-cgroup\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390756 kubelet[2915]: I0513 12:56:05.390740 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-hostproc\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.390774 kubelet[2915]: I0513 12:56:05.390757 2915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/07f71780-5a14-4ce3-9ca5-e51d1c9c0e53-cilium-ipsec-secrets\") pod \"cilium-47pjq\" (UID: \"07f71780-5a14-4ce3-9ca5-e51d1c9c0e53\") " pod="kube-system/cilium-47pjq" May 13 12:56:05.405372 sshd[4641]: Accepted publickey for core from 147.75.109.163 port 47912 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:56:05.407151 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:05.412641 systemd-logind[1596]: New session 27 of user core. May 13 12:56:05.416368 systemd[1]: Started session-27.scope - Session 27 of User core. May 13 12:56:05.466423 sshd[4643]: Connection closed by 147.75.109.163 port 47912 May 13 12:56:05.467304 sshd-session[4641]: pam_unix(sshd:session): session closed for user core May 13 12:56:05.475000 systemd[1]: sshd@28-139.178.70.104:22-147.75.109.163:47912.service: Deactivated successfully. May 13 12:56:05.476548 systemd[1]: session-27.scope: Deactivated successfully. May 13 12:56:05.477342 systemd-logind[1596]: Session 27 logged out. Waiting for processes to exit. May 13 12:56:05.479538 systemd[1]: Started sshd@29-139.178.70.104:22-147.75.109.163:47922.service - OpenSSH per-connection server daemon (147.75.109.163:47922). May 13 12:56:05.481252 systemd-logind[1596]: Removed session 27. May 13 12:56:05.536754 sshd[4650]: Accepted publickey for core from 147.75.109.163 port 47922 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:56:05.537239 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:56:05.540436 systemd-logind[1596]: New session 28 of user core. 
May 13 12:56:05.544645 systemd[1]: Started session-28.scope - Session 28 of User core. May 13 12:56:05.675036 kubelet[2915]: I0513 12:56:05.674944 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6" path="/var/lib/kubelet/pods/66c79ef7-9826-4987-ac5b-3ec5ca1f4ad6/volumes" May 13 12:56:05.681953 containerd[1626]: time="2025-05-13T12:56:05.681858408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47pjq,Uid:07f71780-5a14-4ce3-9ca5-e51d1c9c0e53,Namespace:kube-system,Attempt:0,}" May 13 12:56:05.685678 kubelet[2915]: I0513 12:56:05.685652 2915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda9e517-250e-41ec-95a8-d6fdcb18dc17" path="/var/lib/kubelet/pods/cda9e517-250e-41ec-95a8-d6fdcb18dc17/volumes" May 13 12:56:05.793045 containerd[1626]: time="2025-05-13T12:56:05.793013015Z" level=info msg="connecting to shim c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4" address="unix:///run/containerd/s/1cb231f3d7f46b1462560706f2b3e9de8195d4afe005c936e15562475ec5336b" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:05.810657 systemd[1]: Started cri-containerd-c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4.scope - libcontainer container c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4. May 13 12:56:05.835643 containerd[1626]: time="2025-05-13T12:56:05.835569322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-47pjq,Uid:07f71780-5a14-4ce3-9ca5-e51d1c9c0e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\"" May 13 12:56:05.837295 containerd[1626]: time="2025-05-13T12:56:05.837275947Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:56:05.884236 containerd[1626]: time="2025-05-13T12:56:05.884134057Z" level=info msg="Container e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:05.896464 containerd[1626]: time="2025-05-13T12:56:05.896432475Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a\"" May 13 12:56:05.896960 containerd[1626]: time="2025-05-13T12:56:05.896813942Z" level=info msg="StartContainer for \"e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a\"" May 13 12:56:05.897650 containerd[1626]: time="2025-05-13T12:56:05.897636019Z" level=info msg="connecting to shim e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a" address="unix:///run/containerd/s/1cb231f3d7f46b1462560706f2b3e9de8195d4afe005c936e15562475ec5336b" protocol=ttrpc version=3 May 13 12:56:05.918742 systemd[1]: Started cri-containerd-e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a.scope - libcontainer container e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a. May 13 12:56:05.941543 containerd[1626]: time="2025-05-13T12:56:05.941442040Z" level=info msg="StartContainer for \"e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a\" returns successfully" May 13 12:56:05.958518 systemd[1]: cri-containerd-e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a.scope: Deactivated successfully. 
May 13 12:56:05.959497 systemd[1]: cri-containerd-e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a.scope: Consumed 16ms CPU time, 9.7M memory peak, 3.3M read from disk. May 13 12:56:05.960532 containerd[1626]: time="2025-05-13T12:56:05.960405358Z" level=info msg="received exit event container_id:\"e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a\" id:\"e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a\" pid:4719 exited_at:{seconds:1747140965 nanos:960116407}" May 13 12:56:05.960809 containerd[1626]: time="2025-05-13T12:56:05.960538690Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a\" id:\"e3d81c12f2ce3e1ea85f456bdd3e16b8f1aae2c4c39b4e39d6b76c389ba5d45a\" pid:4719 exited_at:{seconds:1747140965 nanos:960116407}" May 13 12:56:06.104830 containerd[1626]: time="2025-05-13T12:56:06.104754847Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:56:06.133666 containerd[1626]: time="2025-05-13T12:56:06.133629538Z" level=info msg="Container 0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:06.154570 containerd[1626]: time="2025-05-13T12:56:06.154533324Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b\"" May 13 12:56:06.155328 containerd[1626]: time="2025-05-13T12:56:06.155254598Z" level=info msg="StartContainer for \"0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b\"" May 13 12:56:06.156171 containerd[1626]: time="2025-05-13T12:56:06.156130325Z" level=info msg="connecting to shim 0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b" address="unix:///run/containerd/s/1cb231f3d7f46b1462560706f2b3e9de8195d4afe005c936e15562475ec5336b" protocol=ttrpc version=3 May 13 12:56:06.173744 systemd[1]: Started cri-containerd-0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b.scope - libcontainer container 0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b. May 13 12:56:06.262888 containerd[1626]: time="2025-05-13T12:56:06.262797369Z" level=info msg="StartContainer for \"0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b\" returns successfully" May 13 12:56:06.289606 systemd[1]: cri-containerd-0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b.scope: Deactivated successfully. May 13 12:56:06.289832 containerd[1626]: time="2025-05-13T12:56:06.289802825Z" level=info msg="received exit event container_id:\"0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b\" id:\"0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b\" pid:4764 exited_at:{seconds:1747140966 nanos:289656612}" May 13 12:56:06.290167 systemd[1]: cri-containerd-0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b.scope: Consumed 13ms CPU time, 7.6M memory peak, 2.2M read from disk. 
May 13 12:56:06.290951 containerd[1626]: time="2025-05-13T12:56:06.290929213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b\" id:\"0e70538c434be43997b526c99af8d0e8851b3d71e42d249cb06940d0ac068a4b\" pid:4764 exited_at:{seconds:1747140966 nanos:289656612}" May 13 12:56:07.105887 containerd[1626]: time="2025-05-13T12:56:07.105837862Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 12:56:07.144570 containerd[1626]: time="2025-05-13T12:56:07.144312861Z" level=info msg="Container d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:07.169262 containerd[1626]: time="2025-05-13T12:56:07.169239245Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0\"" May 13 12:56:07.169777 containerd[1626]: time="2025-05-13T12:56:07.169759362Z" level=info msg="StartContainer for \"d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0\"" May 13 12:56:07.170525 containerd[1626]: time="2025-05-13T12:56:07.170506345Z" level=info msg="connecting to shim d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0" address="unix:///run/containerd/s/1cb231f3d7f46b1462560706f2b3e9de8195d4afe005c936e15562475ec5336b" protocol=ttrpc version=3 May 13 12:56:07.190709 systemd[1]: Started cri-containerd-d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0.scope - libcontainer container d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0. May 13 12:56:07.221632 containerd[1626]: time="2025-05-13T12:56:07.221605437Z" level=info msg="StartContainer for \"d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0\" returns successfully" May 13 12:56:07.238959 systemd[1]: cri-containerd-d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0.scope: Deactivated successfully. May 13 12:56:07.239388 systemd[1]: cri-containerd-d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0.scope: Consumed 14ms CPU time, 5.9M memory peak, 1.1M read from disk. May 13 12:56:07.240456 containerd[1626]: time="2025-05-13T12:56:07.240338341Z" level=info msg="received exit event container_id:\"d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0\" id:\"d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0\" pid:4808 exited_at:{seconds:1747140967 nanos:239961685}" May 13 12:56:07.241109 containerd[1626]: time="2025-05-13T12:56:07.241088603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0\" id:\"d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0\" pid:4808 exited_at:{seconds:1747140967 nanos:239961685}" May 13 12:56:07.256116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d033ca1787856c4fad304f1288deb72b049704c88a3a6bb36b8952e21466aee0-rootfs.mount: Deactivated successfully. 
May 13 12:56:08.110230 containerd[1626]: time="2025-05-13T12:56:08.110128489Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 12:56:08.196473 containerd[1626]: time="2025-05-13T12:56:08.194685264Z" level=info msg="Container a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:08.233013 containerd[1626]: time="2025-05-13T12:56:08.232948699Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3\"" May 13 12:56:08.233734 containerd[1626]: time="2025-05-13T12:56:08.233482871Z" level=info msg="StartContainer for \"a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3\"" May 13 12:56:08.234333 containerd[1626]: time="2025-05-13T12:56:08.234315657Z" level=info msg="connecting to shim a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3" address="unix:///run/containerd/s/1cb231f3d7f46b1462560706f2b3e9de8195d4afe005c936e15562475ec5336b" protocol=ttrpc version=3 May 13 12:56:08.249646 systemd[1]: Started cri-containerd-a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3.scope - libcontainer container a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3. May 13 12:56:08.266195 systemd[1]: cri-containerd-a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3.scope: Deactivated successfully. May 13 12:56:08.266881 containerd[1626]: time="2025-05-13T12:56:08.266859779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3\" id:\"a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3\" pid:4850 exited_at:{seconds:1747140968 nanos:266403810}" May 13 12:56:08.276296 containerd[1626]: time="2025-05-13T12:56:08.276200656Z" level=info msg="received exit event container_id:\"a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3\" id:\"a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3\" pid:4850 exited_at:{seconds:1747140968 nanos:266403810}" May 13 12:56:08.280547 containerd[1626]: time="2025-05-13T12:56:08.280515686Z" level=info msg="StartContainer for \"a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3\" returns successfully" May 13 12:56:08.288397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c953053218348ebdddac13e308a15a0e37f5ea581aed83b72de6fbe790dae3-rootfs.mount: Deactivated successfully. 
May 13 12:56:09.113911 containerd[1626]: time="2025-05-13T12:56:09.113800972Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 12:56:09.138721 containerd[1626]: time="2025-05-13T12:56:09.138693234Z" level=info msg="Container 41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:09.160067 containerd[1626]: time="2025-05-13T12:56:09.160037453Z" level=info msg="CreateContainer within sandbox \"c64f7bed83db6095841fe4287b6348c0ec8575736ced06c44becaff83a4ae9a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\"" May 13 12:56:09.160518 containerd[1626]: time="2025-05-13T12:56:09.160411898Z" level=info msg="StartContainer for \"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\"" May 13 12:56:09.161235 containerd[1626]: time="2025-05-13T12:56:09.161194910Z" level=info msg="connecting to shim 41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd" address="unix:///run/containerd/s/1cb231f3d7f46b1462560706f2b3e9de8195d4afe005c936e15562475ec5336b" protocol=ttrpc version=3 May 13 12:56:09.185656 systemd[1]: Started cri-containerd-41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd.scope - libcontainer container 41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd. May 13 12:56:09.213849 containerd[1626]: time="2025-05-13T12:56:09.213718884Z" level=info msg="StartContainer for \"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\" returns successfully" May 13 12:56:09.499691 containerd[1626]: time="2025-05-13T12:56:09.499653431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\" id:\"b75a52b896b4a7a4bbc26b5ae312278247cf40833297a60f98afbf3c292e1fc3\" pid:4914 exited_at:{seconds:1747140969 nanos:499405812}" May 13 12:56:09.675508 containerd[1626]: time="2025-05-13T12:56:09.675479032Z" level=info msg="StopPodSandbox for \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\"" May 13 12:56:09.675756 containerd[1626]: time="2025-05-13T12:56:09.675741827Z" level=info msg="TearDown network for sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" successfully" May 13 12:56:09.675885 containerd[1626]: time="2025-05-13T12:56:09.675809987Z" level=info msg="StopPodSandbox for \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" returns successfully" May 13 12:56:09.676206 containerd[1626]: time="2025-05-13T12:56:09.676170542Z" level=info msg="RemovePodSandbox for \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\"" May 13 12:56:09.688169 containerd[1626]: time="2025-05-13T12:56:09.688147760Z" level=info msg="Forcibly stopping sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\"" May 13 12:56:09.688274 containerd[1626]: time="2025-05-13T12:56:09.688248022Z" level=info msg="TearDown network for sandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" successfully" May 13 12:56:09.697649 containerd[1626]: time="2025-05-13T12:56:09.697584405Z" level=info msg="Ensure that sandbox b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b in task-service has been cleanup successfully" May 13 12:56:09.722178 containerd[1626]: 
time="2025-05-13T12:56:09.722068679Z" level=info msg="RemovePodSandbox \"b66de58a8247b58b72c06b072efd354921cf35ea25c1e2c722a29b8cb0c1744b\" returns successfully" May 13 12:56:09.723098 containerd[1626]: time="2025-05-13T12:56:09.723062853Z" level=info msg="StopPodSandbox for \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\"" May 13 12:56:09.723256 containerd[1626]: time="2025-05-13T12:56:09.723241876Z" level=info msg="TearDown network for sandbox \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" successfully" May 13 12:56:09.723313 containerd[1626]: time="2025-05-13T12:56:09.723304006Z" level=info msg="StopPodSandbox for \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" returns successfully" May 13 12:56:09.724776 containerd[1626]: time="2025-05-13T12:56:09.724023830Z" level=info msg="RemovePodSandbox for \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\"" May 13 12:56:09.724776 containerd[1626]: time="2025-05-13T12:56:09.724043053Z" level=info msg="Forcibly stopping sandbox \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\"" May 13 12:56:09.724776 containerd[1626]: time="2025-05-13T12:56:09.724095180Z" level=info msg="TearDown network for sandbox \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" successfully" May 13 12:56:09.725492 containerd[1626]: time="2025-05-13T12:56:09.725470717Z" level=info msg="Ensure that sandbox 260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0 in task-service has been cleanup successfully" May 13 12:56:09.750885 containerd[1626]: time="2025-05-13T12:56:09.750818176Z" level=info msg="RemovePodSandbox \"260e12b63ddb76d98abd4188db0517324371e9cd3a32304dc6d87dd8f65b7cf0\" returns successfully" May 13 12:56:10.147300 kubelet[2915]: I0513 12:56:10.147144 2915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-47pjq" podStartSLOduration=5.147128815 podStartE2EDuration="5.147128815s" podCreationTimestamp="2025-05-13 12:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:10.146542085 +0000 UTC m=+120.567204718" watchObservedRunningTime="2025-05-13 12:56:10.147128815 +0000 UTC m=+120.567791444" May 13 12:56:11.339640 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) May 13 12:56:12.034243 containerd[1626]: time="2025-05-13T12:56:12.034192469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\" id:\"ad6633173fc2f077cf52370ffa877e59aff361b4c4ab205facd1984e8416c1fa\" pid:4997 exit_status:1 exited_at:{seconds:1747140972 nanos:33940313}" May 13 12:56:14.146921 containerd[1626]: time="2025-05-13T12:56:14.146787073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\" id:\"43f4c72c3f13be136f77ef0603f393545648a4e79ae07a25fcd7e1178ba0897a\" pid:5261 exit_status:1 exited_at:{seconds:1747140974 nanos:146511620}" May 13 12:56:14.574670 systemd-networkd[1540]: lxc_health: Link UP May 13 12:56:14.585931 systemd-networkd[1540]: lxc_health: Gained carrier May 13 12:56:15.690667 systemd-networkd[1540]: lxc_health: Gained IPv6LL May 13 12:56:16.585697 containerd[1626]: time="2025-05-13T12:56:16.585566966Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\" id:\"45a672d830dbef1595d7c890cfbbedfd3551b6de52277765df3b55fd497dc9c1\" pid:5473 exited_at:{seconds:1747140976 nanos:584771303}" May 13 12:56:18.661545 containerd[1626]: time="2025-05-13T12:56:18.661474968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\" id:\"9e8f7ff2e6eb7fe25a40c8fc7245bce212c5a717f78ebe53173c0cb7cdbbb89a\" pid:5514 exited_at:{seconds:1747140978 nanos:661132253}" May 13 12:56:20.745980 containerd[1626]: time="2025-05-13T12:56:20.745935574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41035e8cc35f2a95677cde6e55411d011b89d8e12dde0368a20158e9593948fd\" id:\"13f7ed65d7a89923e8698f755a0293d32b9126d7041f81313e703fad25169d00\" pid:5539 exited_at:{seconds:1747140980 nanos:745697204}" May 13 12:56:20.749518 sshd[4656]: Connection closed by 147.75.109.163 port 47922 May 13 12:56:20.750073 sshd-session[4650]: pam_unix(sshd:session): session closed for user core May 13 12:56:20.752303 systemd[1]: sshd@29-139.178.70.104:22-147.75.109.163:47922.service: Deactivated successfully. May 13 12:56:20.753257 systemd[1]: session-28.scope: Deactivated successfully. May 13 12:56:20.754060 systemd-logind[1596]: Session 28 logged out. Waiting for processes to exit. May 13 12:56:20.754853 systemd-logind[1596]: Removed session 28.