May 13 12:53:54.717531 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 11:28:50 -00 2025 May 13 12:53:54.717547 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1 May 13 12:53:54.717553 kernel: Disabled fast string operations May 13 12:53:54.717557 kernel: BIOS-provided physical RAM map: May 13 12:53:54.717561 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 13 12:53:54.717565 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 13 12:53:54.717571 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 13 12:53:54.717575 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 13 12:53:54.717580 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 13 12:53:54.717584 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 13 12:53:54.717588 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 13 12:53:54.717592 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 13 12:53:54.717597 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 13 12:53:54.717601 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 13 12:53:54.717607 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 13 12:53:54.717612 kernel: NX (Execute Disable) protection: active May 13 12:53:54.717617 kernel: APIC: Static calls initialized May 13 12:53:54.717621 kernel: SMBIOS 2.7 present. May 13 12:53:54.717626 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 13 12:53:54.717631 kernel: DMI: Memory slots populated: 1/128 May 13 12:53:54.717637 kernel: vmware: hypercall mode: 0x00 May 13 12:53:54.717642 kernel: Hypervisor detected: VMware May 13 12:53:54.717646 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 13 12:53:54.717651 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 13 12:53:54.717656 kernel: vmware: using clock offset of 4483771644 ns May 13 12:53:54.717661 kernel: tsc: Detected 3408.000 MHz processor May 13 12:53:54.717666 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 12:53:54.717671 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 12:53:54.717676 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 13 12:53:54.717681 kernel: total RAM covered: 3072M May 13 12:53:54.717687 kernel: Found optimal setting for mtrr clean up May 13 12:53:54.717694 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 13 12:53:54.717699 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 13 12:53:54.717704 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 12:53:54.717708 kernel: Using GB pages for direct mapping May 13 12:53:54.717713 kernel: ACPI: Early table checksum verification disabled May 13 12:53:54.717718 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 13 12:53:54.717723 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 13 12:53:54.717728 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 13 12:53:54.717734 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 13 12:53:54.717740 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 12:53:54.717745 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 13 12:53:54.717783 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 13 12:53:54.717789 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 13 12:53:54.717794 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 13 12:53:54.717802 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 13 12:53:54.717807 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 13 12:53:54.717812 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 13 12:53:54.717817 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 13 12:53:54.717822 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 13 12:53:54.717827 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 12:53:54.717832 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 13 12:53:54.717837 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 13 12:53:54.717842 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 13 12:53:54.717849 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 13 12:53:54.717854 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 13 12:53:54.717859 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 13 12:53:54.717864 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 13 12:53:54.717869 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 13 12:53:54.717874 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 13 12:53:54.717879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 13 12:53:54.717884 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] May 13 12:53:54.717889 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] May 13 12:53:54.717895 kernel: Zone ranges: May 13 12:53:54.717901 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 12:53:54.717906 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 13 12:53:54.717923 kernel: Normal empty May 13 12:53:54.717929 kernel: Device empty May 13 12:53:54.717934 kernel: Movable zone start for each node May 13 12:53:54.719061 kernel: Early memory node ranges May 13 12:53:54.719072 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 13 12:53:54.719078 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 13 12:53:54.719083 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 13 12:53:54.719090 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 13 12:53:54.719096 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 12:53:54.719101 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 13 12:53:54.719106 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 13 12:53:54.719114 kernel: ACPI: PM-Timer IO Port: 0x1008 May 13 12:53:54.719120 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 13 12:53:54.719125 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 13 12:53:54.719130 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 13 12:53:54.719135 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 13 12:53:54.719141 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 13 12:53:54.719146 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 13 12:53:54.719151 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) May 13 12:53:54.719156 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 13 12:53:54.719161 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 13 12:53:54.719166 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 13 12:53:54.719171 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 13 12:53:54.719176 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 13 12:53:54.719181 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 13 12:53:54.719186 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 13 12:53:54.719192 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 13 12:53:54.719197 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 13 12:53:54.719202 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 13 12:53:54.719207 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 13 12:53:54.719212 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 13 12:53:54.719217 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 13 12:53:54.719222 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 13 12:53:54.719227 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 13 12:53:54.719232 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 13 12:53:54.719238 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 13 12:53:54.719243 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 13 12:53:54.719248 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 13 12:53:54.719254 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 13 12:53:54.719259 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 13 12:53:54.719264 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 13 12:53:54.719268 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 13 12:53:54.719274 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 13 12:53:54.719279 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 13 12:53:54.719284 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 13 12:53:54.719290 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 13 12:53:54.719295 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 13 12:53:54.719300 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 13 12:53:54.719305 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 13 12:53:54.719310 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 13 12:53:54.719315 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 13 12:53:54.719320 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 13 12:53:54.719329 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 13 12:53:54.719334 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 13 12:53:54.719339 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 13 12:53:54.719345 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 13 12:53:54.719351 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 13 12:53:54.719356 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 13 12:53:54.719362 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 13 12:53:54.719367 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 13 12:53:54.719372 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 13 12:53:54.719377 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) May 13 12:53:54.719383 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 13 12:53:54.719389 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 13 12:53:54.719395 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 13 12:53:54.719400 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 13 12:53:54.719405 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 13 12:53:54.719411 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 13 12:53:54.719416 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 13 12:53:54.719422 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 13 12:53:54.719427 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 13 12:53:54.719432 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 13 12:53:54.719438 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 13 12:53:54.719444 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 13 12:53:54.719449 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 13 12:53:54.719455 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 13 12:53:54.719460 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 13 12:53:54.719465 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 13 12:53:54.719471 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 13 12:53:54.719476 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 13 12:53:54.719481 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 13 12:53:54.719486 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 13 12:53:54.719493 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 13 12:53:54.719498 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 13 12:53:54.719503 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 13 12:53:54.719509 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 13 12:53:54.719514 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 13 12:53:54.719520 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 13 12:53:54.719525 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 13 12:53:54.719530 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 13 12:53:54.719535 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 13 12:53:54.719541 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 13 12:53:54.719553 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 13 12:53:54.719559 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 13 12:53:54.719564 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 13 12:53:54.719570 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 13 12:53:54.719575 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 13 12:53:54.719580 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 13 12:53:54.719586 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 13 12:53:54.719591 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 13 12:53:54.719596 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 13 12:53:54.719602 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 13 12:53:54.719609 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 13 12:53:54.719614 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 13 12:53:54.719619 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 13 12:53:54.719625 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 13 12:53:54.719630 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 13 12:53:54.719642 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 13 12:53:54.719651 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 13 12:53:54.719659 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 13 12:53:54.719671 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 13 12:53:54.719677 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 13 12:53:54.719684 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 13 12:53:54.719690 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 13 12:53:54.719695 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 13 12:53:54.719701 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 13 12:53:54.719706 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 13 12:53:54.719711 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 13 12:53:54.719719 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 13 12:53:54.719725 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 13 12:53:54.719730 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 13 12:53:54.719735 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 13 12:53:54.719742 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 13 12:53:54.719747 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 13 12:53:54.719753 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 13 12:53:54.719758 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 13 12:53:54.719764 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 13 12:53:54.719772 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 13 12:53:54.719779 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 13 12:53:54.719784 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 13 12:53:54.719790 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 13 12:53:54.719796 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 13 12:53:54.719801 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 13 12:53:54.719807 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 13 12:53:54.719812 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 13 12:53:54.719818 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 13 12:53:54.719823 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 13 12:53:54.719828 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 13 12:53:54.719834 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 13 12:53:54.719839 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 13 12:53:54.719844 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 13 12:53:54.719851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 13 12:53:54.719856 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 12:53:54.719862 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 13 12:53:54.719867 kernel: TSC deadline timer available May 13 12:53:54.719873 kernel: CPU topo: Max. logical packages: 128 May 13 12:53:54.719878 kernel: CPU topo: Max. logical dies: 128 May 13 12:53:54.719884 kernel: CPU topo: Max. 
dies per package: 1 May 13 12:53:54.719889 kernel: CPU topo: Max. threads per core: 1 May 13 12:53:54.719894 kernel: CPU topo: Num. cores per package: 1 May 13 12:53:54.719900 kernel: CPU topo: Num. threads per package: 1 May 13 12:53:54.719906 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs May 13 12:53:54.719913 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 13 12:53:54.719923 kernel: Booting paravirtualized kernel on VMware hypervisor May 13 12:53:54.719933 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 12:53:54.719942 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 13 12:53:54.719950 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 May 13 12:53:54.719956 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 May 13 12:53:54.719962 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 13 12:53:54.719967 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 13 12:53:54.719974 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 13 12:53:54.719979 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 13 12:53:54.719985 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 13 12:53:54.719990 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 13 12:53:54.719995 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 13 12:53:54.720001 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 13 12:53:54.720006 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 13 12:53:54.720011 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 13 12:53:54.720017 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 13 12:53:54.720023 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 13 12:53:54.720028 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 13 12:53:54.720034 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 13 12:53:54.720039 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 13 12:53:54.720044 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 13 12:53:54.722069 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1 May 13 12:53:54.722076 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 12:53:54.722084 kernel: random: crng init done May 13 12:53:54.722089 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 13 12:53:54.722095 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 13 12:53:54.722101 kernel: printk: log_buf_len min size: 262144 bytes May 13 12:53:54.722106 kernel: printk: log_buf_len: 1048576 bytes May 13 12:53:54.722112 kernel: printk: early log buf free: 245576(93%) May 13 12:53:54.722117 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 12:53:54.722123 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 12:53:54.722128 kernel: Fallback order for Node 0: 0 May 13 12:53:54.722135 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 May 13 12:53:54.722141 kernel: Policy zone: DMA32 May 13 12:53:54.722147 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 12:53:54.722153 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 13 12:53:54.722158 kernel: ftrace: allocating 40071 entries in 157 pages May 13 12:53:54.722164 kernel: ftrace: allocated 157 pages with 5 groups May 13 12:53:54.722169 kernel: Dynamic Preempt: voluntary May 13 12:53:54.722175 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 12:53:54.722181 kernel: rcu: RCU event tracing is enabled. May 13 12:53:54.722186 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 13 12:53:54.722193 kernel: Trampoline variant of Tasks RCU enabled. May 13 12:53:54.722199 kernel: Rude variant of Tasks RCU enabled. May 13 12:53:54.722204 kernel: Tracing variant of Tasks RCU enabled. May 13 12:53:54.722210 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 12:53:54.722215 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 13 12:53:54.722221 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 12:53:54.722226 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 12:53:54.722232 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 13 12:53:54.722237 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 13 12:53:54.722244 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 13 12:53:54.722250 kernel: Console: colour VGA+ 80x25 May 13 12:53:54.722255 kernel: printk: legacy console [tty0] enabled May 13 12:53:54.722261 kernel: printk: legacy console [ttyS0] enabled May 13 12:53:54.722266 kernel: ACPI: Core revision 20240827 May 13 12:53:54.722272 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 13 12:53:54.722277 kernel: APIC: Switch to symmetric I/O mode setup May 13 12:53:54.722283 kernel: x2apic enabled May 13 12:53:54.722289 kernel: APIC: Switched APIC routing to: physical x2apic May 13 12:53:54.722295 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 12:53:54.722301 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 12:53:54.722307 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 13 12:53:54.722312 kernel: Disabled fast string operations May 13 12:53:54.722318 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 12:53:54.722323 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 12:53:54.722329 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 12:53:54.722334 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit May 13 12:53:54.722340 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 13 12:53:54.722346 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 13 12:53:54.722352 kernel: RETBleed: Mitigation: Enhanced IBRS May 13 12:53:54.722357 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 13 12:53:54.722363 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 13 12:53:54.722368 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 12:53:54.722374 kernel: SRBDS: Unknown: Dependent on hypervisor status May 13 12:53:54.722379 kernel: GDS: Unknown: Dependent on hypervisor status May 13 12:53:54.722385 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 12:53:54.722390 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 12:53:54.722397 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 12:53:54.722402 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 12:53:54.722408 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 13 12:53:54.722413 kernel: Freeing SMP alternatives memory: 32K May 13 12:53:54.722419 kernel: pid_max: default: 131072 minimum: 1024 May 13 12:53:54.722424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 13 12:53:54.722430 kernel: landlock: Up and running. May 13 12:53:54.722435 kernel: SELinux: Initializing. May 13 12:53:54.722441 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 12:53:54.722447 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 13 12:53:54.722453 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 13 12:53:54.722459 kernel: Performance Events: Skylake events, core PMU driver. May 13 12:53:54.722464 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 13 12:53:54.722470 kernel: core: CPUID marked event: 'instructions' unavailable May 13 12:53:54.722475 kernel: core: CPUID marked event: 'bus cycles' unavailable May 13 12:53:54.722481 kernel: core: CPUID marked event: 'cache references' unavailable May 13 12:53:54.722486 kernel: core: CPUID marked event: 'cache misses' unavailable May 13 12:53:54.722492 kernel: core: CPUID marked event: 'branch instructions' unavailable May 13 12:53:54.722498 kernel: core: CPUID marked event: 'branch misses' unavailable May 13 12:53:54.722503 kernel: ... version: 1 May 13 12:53:54.722509 kernel: ... bit width: 48 May 13 12:53:54.722514 kernel: ... generic registers: 4 May 13 12:53:54.722520 kernel: ... value mask: 0000ffffffffffff May 13 12:53:54.722525 kernel: ... max period: 000000007fffffff May 13 12:53:54.722531 kernel: ... fixed-purpose events: 0 May 13 12:53:54.722536 kernel: ... 
event mask: 000000000000000f May 13 12:53:54.722543 kernel: signal: max sigframe size: 1776 May 13 12:53:54.722548 kernel: rcu: Hierarchical SRCU implementation. May 13 12:53:54.722554 kernel: rcu: Max phase no-delay instances is 400. May 13 12:53:54.722560 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level May 13 12:53:54.722565 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 12:53:54.722571 kernel: smp: Bringing up secondary CPUs ... May 13 12:53:54.722576 kernel: smpboot: x86: Booting SMP configuration: May 13 12:53:54.722582 kernel: .... node #0, CPUs: #1 May 13 12:53:54.722588 kernel: Disabled fast string operations May 13 12:53:54.722593 kernel: smp: Brought up 1 node, 2 CPUs May 13 12:53:54.722600 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 13 12:53:54.722606 kernel: Memory: 1924228K/2096628K available (14336K kernel code, 2430K rwdata, 9948K rodata, 54420K init, 2548K bss, 161016K reserved, 0K cma-reserved) May 13 12:53:54.722611 kernel: devtmpfs: initialized May 13 12:53:54.722617 kernel: x86/mm: Memory block size: 128MB May 13 12:53:54.722622 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 13 12:53:54.722628 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 12:53:54.722633 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 13 12:53:54.722642 kernel: pinctrl core: initialized pinctrl subsystem May 13 12:53:54.722648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 12:53:54.722655 kernel: audit: initializing netlink subsys (disabled) May 13 12:53:54.722660 kernel: audit: type=2000 audit(1747140831.063:1): state=initialized audit_enabled=0 res=1 May 13 12:53:54.722666 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 12:53:54.722671 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 12:53:54.722677 kernel: cpuidle: using governor menu May 13 12:53:54.722682 kernel: Simple Boot Flag at 0x36 set to 0x80 May 13 12:53:54.722688 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 12:53:54.722693 kernel: dca service started, version 1.12.1 May 13 12:53:54.722699 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] May 13 12:53:54.722706 kernel: PCI: Using configuration type 1 for base access May 13 12:53:54.722718 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 12:53:54.722725 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 12:53:54.722731 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 12:53:54.722737 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 12:53:54.722743 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 12:53:54.722748 kernel: ACPI: Added _OSI(Module Device) May 13 12:53:54.722754 kernel: ACPI: Added _OSI(Processor Device) May 13 12:53:54.722760 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 12:53:54.722766 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 12:53:54.722772 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 12:53:54.722778 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 13 12:53:54.722784 kernel: ACPI: Interpreter enabled May 13 12:53:54.722790 kernel: ACPI: PM: (supports S0 S1 S5) May 13 12:53:54.722796 kernel: ACPI: Using IOAPIC for interrupt routing May 13 12:53:54.722801 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 12:53:54.722807 kernel: PCI: Using E820 reservations for host bridge windows May 13 12:53:54.722813 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 13 12:53:54.722820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 13 12:53:54.722900 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 12:53:54.722953 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 13 12:53:54.723004 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 13 12:53:54.723012 kernel: PCI host bridge to bus 0000:00 May 13 12:53:54.724107 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 12:53:54.724178 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 13 12:53:54.724299 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 12:53:54.724796 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 12:53:54.724843 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 13 12:53:54.724887 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 13 12:53:54.724947 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint May 13 12:53:54.725005 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge May 13 12:53:54.725334 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 12:53:54.725453 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint May 13 12:53:54.725520 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint May 13 12:53:54.725576 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] May 13 12:53:54.725659 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 13 12:53:54.725746 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 13 12:53:54.725831 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 13 12:53:54.725910 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk May 13 12:53:54.725998 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 13 12:53:54.726088 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 13 12:53:54.726181 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 13 
12:53:54.726274 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint May 13 12:53:54.726372 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] May 13 12:53:54.726457 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] May 13 12:53:54.726521 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint May 13 12:53:54.726570 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] May 13 12:53:54.726622 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] May 13 12:53:54.726670 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] May 13 12:53:54.726718 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] May 13 12:53:54.726765 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 12:53:54.726818 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge May 13 12:53:54.726866 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 12:53:54.726915 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 12:53:54.726966 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 12:53:54.727014 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 12:53:54.728099 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.728155 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 12:53:54.728207 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 12:53:54.728256 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 12:53:54.728305 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 13 12:53:54.728371 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.728425 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 12:53:54.728474 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 12:53:54.728539 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 12:53:54.728588 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 12:53:54.728651 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 13 12:53:54.728708 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.728766 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 12:53:54.728815 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 12:53:54.728869 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 12:53:54.728919 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 12:53:54.728973 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 13 12:53:54.729029 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.729102 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 12:53:54.729154 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 12:53:54.729206 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 12:53:54.729256 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 13 12:53:54.729311 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.729377 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 12:53:54.729442 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 12:53:54.729497 kernel: pci 0000:00:15.4: bridge 
window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 12:53:54.729549 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 13 12:53:54.729604 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.729666 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 12:53:54.729722 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 12:53:54.729772 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 12:53:54.729826 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 13 12:53:54.729882 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.729934 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 12:53:54.729986 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 12:53:54.730036 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 12:53:54.731109 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 13 12:53:54.731165 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.731216 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 12:53:54.731266 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 12:53:54.731317 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 12:53:54.731366 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 13 12:53:54.731419 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.731469 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 12:53:54.731517 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 12:53:54.731566 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 12:53:54.732117 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 13 12:53:54.732177 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.732228 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 12:53:54.732277 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 12:53:54.732326 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 12:53:54.732374 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 12:53:54.732422 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 13 12:53:54.732476 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.732529 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 12:53:54.732578 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 12:53:54.732626 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 12:53:54.732684 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 12:53:54.732734 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 13 12:53:54.732787 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.732837 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 12:53:54.732888 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 12:53:54.732936 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 12:53:54.732984 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 13 12:53:54.733036 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.734121 kernel: pci 
0000:00:16.4: PCI bridge to [bus 0f] May 13 12:53:54.734179 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 12:53:54.734236 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 12:53:54.734294 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 13 12:53:54.734350 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.734400 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 12:53:54.734450 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 12:53:54.734499 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 12:53:54.734548 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 13 12:53:54.734602 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.734655 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 12:53:54.734704 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 12:53:54.734753 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 12:53:54.734802 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 13 12:53:54.734854 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.734903 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 12:53:54.734951 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 12:53:54.735000 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 12:53:54.735059 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 13 12:53:54.735114 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.735164 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 12:53:54.735212 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 12:53:54.735266 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 12:53:54.735325 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 12:53:54.735374 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 13 12:53:54.735430 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.735496 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 12:53:54.735548 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 12:53:54.735596 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 12:53:54.735653 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 12:53:54.735751 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 13 12:53:54.736011 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.738013 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 12:53:54.738091 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 12:53:54.738147 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 12:53:54.738199 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 12:53:54.738252 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 13 12:53:54.738308 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.738359 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 12:53:54.738409 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 12:53:54.738458 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 12:53:54.738507 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 13 12:53:54.738562 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.738614 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 12:53:54.738668 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 12:53:54.738719 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 12:53:54.738768 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 13 12:53:54.738821 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.738871 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 12:53:54.738921 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 12:53:54.738972 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 12:53:54.739021 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 13 12:53:54.739082 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.739133 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 12:53:54.739183 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 12:53:54.739233 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 12:53:54.739283 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 13 12:53:54.739339 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.739390 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 12:53:54.739440 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 12:53:54.739490 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 12:53:54.739539 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 13 12:53:54.739594 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.739654 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 12:53:54.739709 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 12:53:54.739759 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 12:53:54.739808 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 12:53:54.739857 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 13 12:53:54.739911 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.739962 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 12:53:54.740011 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 12:53:54.740165 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 12:53:54.740217 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 12:53:54.740267 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 13 12:53:54.740325 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.740376 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 12:53:54.740426 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 12:53:54.740475 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 12:53:54.740527 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 13 12:53:54.740581 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root 
Port May 13 12:53:54.740631 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 12:53:54.740680 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 12:53:54.740730 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 12:53:54.740778 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 13 12:53:54.740831 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.740883 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 12:53:54.740933 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 12:53:54.740982 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 12:53:54.741031 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 13 12:53:54.741099 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.741152 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 12:53:54.741202 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 12:53:54.741251 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 12:53:54.741303 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 13 12:53:54.741358 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.741408 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 12:53:54.741458 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 12:53:54.741507 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 12:53:54.741556 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 13 12:53:54.741609 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 13 12:53:54.741661 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 12:53:54.741710 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 12:53:54.741760 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 12:53:54.741810 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 13 12:53:54.741863 kernel: pci_bus 0000:01: extended config space not accessible May 13 12:53:54.741914 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 12:53:54.741964 kernel: pci_bus 0000:02: extended config space not accessible May 13 12:53:54.741975 kernel: acpiphp: Slot [32] registered May 13 12:53:54.741981 kernel: acpiphp: Slot [33] registered May 13 12:53:54.741987 kernel: acpiphp: Slot [34] registered May 13 12:53:54.741993 kernel: acpiphp: Slot [35] registered May 13 12:53:54.741999 kernel: acpiphp: Slot [36] registered May 13 12:53:54.742004 kernel: acpiphp: Slot [37] registered May 13 12:53:54.742010 kernel: acpiphp: Slot [38] registered May 13 12:53:54.742016 kernel: acpiphp: Slot [39] registered May 13 12:53:54.742022 kernel: acpiphp: Slot [40] registered May 13 12:53:54.742029 kernel: acpiphp: Slot [41] registered May 13 12:53:54.742035 kernel: acpiphp: Slot [42] registered May 13 12:53:54.742041 kernel: acpiphp: Slot [43] registered May 13 12:53:54.742684 kernel: acpiphp: Slot [44] registered May 13 12:53:54.742691 kernel: acpiphp: Slot [45] registered May 13 12:53:54.742697 kernel: acpiphp: Slot [46] registered May 13 12:53:54.742703 kernel: acpiphp: Slot [47] registered May 13 12:53:54.742709 kernel: acpiphp: Slot [48] registered May 13 12:53:54.742715 kernel: acpiphp: Slot [49] registered May 13 12:53:54.742721 kernel: acpiphp: Slot [50] registered 
May 13 12:53:54.742729 kernel: acpiphp: Slot [51] registered May 13 12:53:54.742735 kernel: acpiphp: Slot [52] registered May 13 12:53:54.742741 kernel: acpiphp: Slot [53] registered May 13 12:53:54.742747 kernel: acpiphp: Slot [54] registered May 13 12:53:54.742753 kernel: acpiphp: Slot [55] registered May 13 12:53:54.742759 kernel: acpiphp: Slot [56] registered May 13 12:53:54.742765 kernel: acpiphp: Slot [57] registered May 13 12:53:54.742771 kernel: acpiphp: Slot [58] registered May 13 12:53:54.742776 kernel: acpiphp: Slot [59] registered May 13 12:53:54.742783 kernel: acpiphp: Slot [60] registered May 13 12:53:54.742789 kernel: acpiphp: Slot [61] registered May 13 12:53:54.742795 kernel: acpiphp: Slot [62] registered May 13 12:53:54.742801 kernel: acpiphp: Slot [63] registered May 13 12:53:54.742859 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 12:53:54.742910 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 13 12:53:54.742960 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 13 12:53:54.743010 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 13 12:53:54.743069 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 13 12:53:54.743119 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 13 12:53:54.744130 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint May 13 12:53:54.744190 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] May 13 12:53:54.744244 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] May 13 12:53:54.744652 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] May 13 12:53:54.744729 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 13 12:53:54.744784 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' May 13 12:53:54.744839 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 12:53:54.744890 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 12:53:54.744942 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 12:53:54.744993 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 12:53:54.745042 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 12:53:54.745110 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 12:53:54.745160 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 12:53:54.745213 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 12:53:54.745269 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint May 13 12:53:54.745320 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] May 13 12:53:54.745378 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] May 13 12:53:54.745428 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] May 13 12:53:54.745509 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] May 13 12:53:54.745571 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] May 13 12:53:54.745625 kernel: pci 0000:0b:00.0: supports D1 D2 May 13 12:53:54.745675 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 12:53:54.745725 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 12:53:54.745776 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 12:53:54.745826 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 12:53:54.745876 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 12:53:54.745924 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 12:53:54.745973 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 12:53:54.746025 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 12:53:54.746107 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 12:53:54.746160 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 12:53:54.746210 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 12:53:54.746261 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 12:53:54.746310 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 12:53:54.746361 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 12:53:54.746414 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 12:53:54.746464 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 12:53:54.746513 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 12:53:54.746563 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 12:53:54.746612 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 12:53:54.746665 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 12:53:54.746715 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 12:53:54.746765 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 12:53:54.746816 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 12:53:54.746866 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 12:53:54.746916 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 12:53:54.746966 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 12:53:54.746975 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 13 12:53:54.746981 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 13 12:53:54.746987 kernel: ACPI: PCI: Interrupt link LNKB disabled May 13 12:53:54.746995 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 12:53:54.747001 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 13 12:53:54.747007 kernel: iommu: Default domain type: Translated May 13 12:53:54.747012 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 12:53:54.747018 kernel: PCI: Using ACPI for IRQ routing May 13 12:53:54.747024 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 12:53:54.747030 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 13 12:53:54.747036 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 13 12:53:54.747105 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 13 12:53:54.747158 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 13 12:53:54.747208 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 12:53:54.747217 kernel: vgaarb: loaded May 13 12:53:54.747223 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 13 12:53:54.747229 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 13 12:53:54.747588 kernel: clocksource: Switched to clocksource tsc-early May 13 12:53:54.747595 kernel: VFS: Disk quotas dquot_6.6.0 May 13 12:53:54.747603 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 12:53:54.747612 kernel: pnp: PnP ACPI init May 13 12:53:54.747678 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 13 12:53:54.747727 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved May 13 12:53:54.747773 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 13 12:53:54.747821 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 13 12:53:54.747869 kernel: pnp 00:06: [dma 2] May 13 12:53:54.747919 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 13 12:53:54.747967 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 13 12:53:54.748011 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 13 12:53:54.748019 kernel: pnp: PnP ACPI: found 8 devices May 13 12:53:54.748026 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 12:53:54.748032 kernel: NET: Registered PF_INET protocol family May 13 12:53:54.748038 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 12:53:54.748044 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 13 12:53:54.748066 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 12:53:54.748074 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 12:53:54.748080 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 13 12:53:54.748086 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 12:53:54.748093 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 12:53:54.748098 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 12:53:54.748105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 12:53:54.748111 kernel: NET: Registered PF_XDP protocol family May 13 12:53:54.748168 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 12:53:54.750144 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 13 12:53:54.750230 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 13 12:53:54.750310 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 13 12:53:54.750387 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 13 12:53:54.750461 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 13 12:53:54.750534 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 13 12:53:54.750611 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 13 12:53:54.750686 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 13 12:53:54.750767 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 13 12:53:54.750849 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 13 12:53:54.750926 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 13 12:53:54.751005 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 13 12:53:54.751114 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 13 12:53:54.751180 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 13 12:53:54.751251 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 13 
12:53:54.751326 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 13 12:53:54.751405 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 13 12:53:54.751485 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 13 12:53:54.751546 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 13 12:53:54.751598 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 13 12:53:54.751649 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 13 12:53:54.751717 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 13 12:53:54.751770 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned May 13 12:53:54.751820 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned May 13 12:53:54.751886 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.751962 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.752023 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.752909 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.752979 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753035 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753099 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753150 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753213 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753265 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753315 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753365 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753415 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753465 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753515 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753564 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753617 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753666 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753717 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753765 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753815 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753868 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.753922 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.753981 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754062 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754116 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754167 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754225 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754277 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754327 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754377 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754429 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754479 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754528 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754577 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754626 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754682 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754732 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754782 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754834 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754884 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.754933 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.754982 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755031 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755093 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755143 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755198 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755249 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755300 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755350 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755399 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755459 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755510 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755559 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755608 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755656 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755705 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755757 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755841 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.755891 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.755940 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space May 13 12:53:54.755990 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756039 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756103 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756153 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756212 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756269 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756323 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756373 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756422 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756471 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756535 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756586 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756636 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756688 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756737 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756786 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756835 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756885 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.756934 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.756984 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.757033 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.758129 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space May 13 12:53:54.758189 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign May 13 12:53:54.758243 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 12:53:54.758306 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 13 12:53:54.758357 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 12:53:54.758406 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 12:53:54.758455 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 12:53:54.758510 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned May 13 12:53:54.758576 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 12:53:54.758627 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 12:53:54.758687 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 12:53:54.758737 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 13 12:53:54.758788 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 12:53:54.758837 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 12:53:54.758886 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 12:53:54.758935 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] May 13 12:53:54.758986 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 12:53:54.759035 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 12:53:54.759109 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 12:53:54.759158 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 12:53:54.759208 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 12:53:54.759256 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 12:53:54.759305 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 12:53:54.759354 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 12:53:54.759403 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 12:53:54.759452 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 12:53:54.759503 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 12:53:54.759552 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 12:53:54.759601 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 12:53:54.759663 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 12:53:54.759714 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 12:53:54.759771 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 12:53:54.759822 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 12:53:54.759874 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 12:53:54.759924 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 12:53:54.759976 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned May 13 12:53:54.760027 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 12:53:54.760579 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 12:53:54.760637 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 12:53:54.761016 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 13 12:53:54.761104 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 12:53:54.761160 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 12:53:54.761215 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 12:53:54.761265 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 12:53:54.761316 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 12:53:54.761366 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 12:53:54.761414 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 12:53:54.761463 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 12:53:54.761512 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 12:53:54.761561 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 12:53:54.762503 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 12:53:54.762562 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 12:53:54.762613 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 12:53:54.762664 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 12:53:54.762714 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 12:53:54.762764 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 12:53:54.762813 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 12:53:54.762869 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 12:53:54.762921 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 12:53:54.762984 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 12:53:54.763034 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 12:53:54.763105 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 12:53:54.763156 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 12:53:54.763208 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 12:53:54.763258 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 12:53:54.763307 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 12:53:54.763359 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 12:53:54.763409 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 12:53:54.763458 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 12:53:54.763507 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 12:53:54.763556 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 12:53:54.763607 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 12:53:54.763656 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 12:53:54.763718 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 12:53:54.764061 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 12:53:54.764125 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 12:53:54.764178 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 12:53:54.764229 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 12:53:54.764283 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 12:53:54.764332 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 12:53:54.764381 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 12:53:54.764430 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 12:53:54.764479 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 12:53:54.764530 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 12:53:54.764579 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 12:53:54.764627 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 12:53:54.764675 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 12:53:54.764725 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 12:53:54.764774 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 12:53:54.764822 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 12:53:54.764875 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 12:53:54.764924 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 12:53:54.764977 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 12:53:54.765189 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 12:53:54.765249 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 12:53:54.765300 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 12:53:54.765349 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
May 13 12:53:54.765397 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 12:53:54.765445 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 12:53:54.765496 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 12:53:54.765753 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 12:53:54.765807 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 12:53:54.765874 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 12:53:54.765925 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 12:53:54.765974 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 12:53:54.766023 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 12:53:54.766091 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 12:53:54.766146 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 12:53:54.766194 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 12:53:54.766242 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 12:53:54.766291 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 12:53:54.766339 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 12:53:54.766422 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 12:53:54.766472 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 12:53:54.766522 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 12:53:54.766569 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 12:53:54.766617 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 13 12:53:54.766661 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 12:53:54.766703 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 12:53:54.766745 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 13 12:53:54.766786 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 13 12:53:54.766834 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 13 12:53:54.766877 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 13 12:53:54.766921 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 12:53:54.766965 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 13 12:53:54.767008 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 12:53:54.767062 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 12:53:54.767114 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 13 12:53:54.767162 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 13 12:53:54.767211 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 13 12:53:54.767255 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 13 12:53:54.767313 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 13 12:53:54.767363 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 13 12:53:54.767408 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 13 12:53:54.767451 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 13 12:53:54.767502 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 13 12:53:54.767546 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 
13 12:53:54.767589 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 13 12:53:54.767635 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 13 12:53:54.767679 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 13 12:53:54.767728 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 13 12:53:54.767772 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 12:53:54.767822 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 13 12:53:54.767866 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 13 12:53:54.767912 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 13 12:53:54.767956 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 13 12:53:54.768003 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 13 12:53:54.768056 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 13 12:53:54.768108 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 13 12:53:54.768152 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 13 12:53:54.768195 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 13 12:53:54.768243 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 13 12:53:54.768287 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 13 12:53:54.768332 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 13 12:53:54.768379 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 13 12:53:54.768423 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 13 12:53:54.768485 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 13 12:53:54.768548 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 13 12:53:54.770417 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 12:53:54.770480 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 13 12:53:54.770531 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 12:53:54.770581 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 13 12:53:54.770626 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 13 12:53:54.770692 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 13 12:53:54.770753 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 13 12:53:54.770804 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 13 12:53:54.770851 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 12:53:54.770901 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 13 12:53:54.770946 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 13 12:53:54.770990 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 12:53:54.771040 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 13 12:53:54.771104 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 13 12:53:54.771149 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 13 12:53:54.771201 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 13 12:53:54.771246 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 13 12:53:54.771290 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 13 12:53:54.771340 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 13 12:53:54.771399 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 12:53:54.771462 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 13 12:53:54.771510 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 12:53:54.771560 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 13 12:53:54.771613 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 13 12:53:54.771664 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 13 12:53:54.771719 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 13 12:53:54.771779 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 13 12:53:54.771826 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 12:53:54.771889 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 13 12:53:54.771953 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 13 12:53:54.772004 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 13 12:53:54.772095 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 13 12:53:54.772146 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 13 12:53:54.772199 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 13 12:53:54.772257 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 13 12:53:54.772308 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 13 12:53:54.772366 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 13 12:53:54.772413 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 12:53:54.772484 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 13 12:53:54.772541 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 13 12:53:54.772591 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 13 12:53:54.772642 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 13 12:53:54.772694 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 13 12:53:54.772739 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 13 12:53:54.772787 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 13 12:53:54.772832 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 12:53:54.772908 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 12:53:54.772923 kernel: PCI: CLS 32 bytes, default 64 May 13 12:53:54.772930 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 12:53:54.772936 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 12:53:54.772942 kernel: clocksource: Switched to clocksource tsc May 13 12:53:54.772948 kernel: Initialise system trusted keyrings May 13 12:53:54.772954 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 12:53:54.772961 kernel: Key type asymmetric registered May 13 12:53:54.772966 kernel: Asymmetric key parser 'x509' registered May 13 12:53:54.772972 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 12:53:54.772980 kernel: io scheduler mq-deadline registered May 13 12:53:54.772986 kernel: io scheduler kyber registered May 13 12:53:54.772992 kernel: io scheduler bfq 
registered May 13 12:53:54.773066 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 13 12:53:54.773125 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773178 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 13 12:53:54.773228 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773282 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 13 12:53:54.773333 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773383 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 13 12:53:54.773435 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773485 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 13 12:53:54.773535 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773586 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 13 12:53:54.773635 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773688 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 13 12:53:54.773737 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773787 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 13 12:53:54.773838 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.773888 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 13 12:53:54.773939 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774008 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 13 12:53:54.774078 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774133 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 13 12:53:54.774183 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774233 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 13 12:53:54.774283 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774342 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 13 12:53:54.774394 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774446 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 13 12:53:54.774512 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
May 13 12:53:54.774570 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 13 12:53:54.774619 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774669 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 13 12:53:54.774718 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774770 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 13 12:53:54.774822 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774873 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 13 12:53:54.774923 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.774972 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 13 12:53:54.775023 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775089 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 13 12:53:54.775141 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775192 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 13 12:53:54.775261 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775313 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 13 12:53:54.775363 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775413 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 13 12:53:54.775463 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775514 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 13 12:53:54.775564 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775616 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 13 12:53:54.775670 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775720 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 13 12:53:54.775770 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775820 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 13 12:53:54.775869 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.775921 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 13 12:53:54.775985 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 
12:53:54.776038 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 13 12:53:54.776112 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.776164 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 13 12:53:54.776215 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.776266 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 13 12:53:54.776317 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.776368 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 13 12:53:54.776421 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 12:53:54.776430 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 12:53:54.776439 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 12:53:54.776445 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 12:53:54.776452 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 13 12:53:54.776458 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 12:53:54.776465 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 12:53:54.776517 kernel: rtc_cmos 00:01: registered as rtc0 May 13 12:53:54.776567 kernel: rtc_cmos 00:01: setting system clock to 2025-05-13T12:53:54 UTC (1747140834) May 13 12:53:54.776612 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 13 12:53:54.776621 kernel: intel_pstate: CPU model not supported May 13 12:53:54.776628 kernel: NET: Registered PF_INET6 protocol family May 13 12:53:54.776634 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 12:53:54.776640 kernel: Segment Routing with IPv6 May 13 12:53:54.776647 kernel: In-situ OAM (IOAM) with IPv6 May 13 12:53:54.776653 kernel: NET: Registered PF_PACKET protocol family May 13 12:53:54.776662 kernel: Key type dns_resolver registered May 13 12:53:54.776668 kernel: IPI shorthand broadcast: enabled May 13 12:53:54.776674 kernel: sched_clock: Marking stable (2559004125, 179183354)->(2751556506, -13369027) May 13 12:53:54.776680 kernel: registered taskstats version 1 May 13 12:53:54.776686 kernel: Loading compiled-in X.509 certificates May 13 12:53:54.776693 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: d81efc2839896c91a2830d4cfad7b0572af8b26a' May 13 12:53:54.776699 kernel: Demotion targets for Node 0: null May 13 12:53:54.776705 kernel: Key type .fscrypt registered May 13 12:53:54.776711 kernel: Key type fscrypt-provisioning registered May 13 12:53:54.776719 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 12:53:54.776725 kernel: ima: Allocated hash algorithm: sha1 May 13 12:53:54.776731 kernel: ima: No architecture policies found May 13 12:53:54.776737 kernel: clk: Disabling unused clocks May 13 12:53:54.776744 kernel: Warning: unable to open an initial console. 
May 13 12:53:54.776751 kernel: Freeing unused kernel image (initmem) memory: 54420K May 13 12:53:54.776757 kernel: Write protecting the kernel read-only data: 24576k May 13 12:53:54.776763 kernel: Freeing unused kernel image (rodata/data gap) memory: 292K May 13 12:53:54.776770 kernel: Run /init as init process May 13 12:53:54.776779 kernel: with arguments: May 13 12:53:54.776789 kernel: /init May 13 12:53:54.776800 kernel: with environment: May 13 12:53:54.776809 kernel: HOME=/ May 13 12:53:54.776815 kernel: TERM=linux May 13 12:53:54.776822 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 12:53:54.776829 systemd[1]: Successfully made /usr/ read-only. May 13 12:53:54.776838 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 12:53:54.776848 systemd[1]: Detected virtualization vmware. May 13 12:53:54.776858 systemd[1]: Detected architecture x86-64. May 13 12:53:54.776869 systemd[1]: Running in initrd. May 13 12:53:54.776879 systemd[1]: No hostname configured, using default hostname. May 13 12:53:54.776887 systemd[1]: Hostname set to . May 13 12:53:54.776894 systemd[1]: Initializing machine ID from random generator. May 13 12:53:54.776900 systemd[1]: Queued start job for default target initrd.target. May 13 12:53:54.776907 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:53:54.776915 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:53:54.776922 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 12:53:54.776929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 12:53:54.776935 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 12:53:54.776942 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 12:53:54.776949 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 12:53:54.776957 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 12:53:54.776964 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:53:54.776971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 12:53:54.776977 systemd[1]: Reached target paths.target - Path Units. May 13 12:53:54.776983 systemd[1]: Reached target slices.target - Slice Units. May 13 12:53:54.776990 systemd[1]: Reached target swap.target - Swaps. May 13 12:53:54.776996 systemd[1]: Reached target timers.target - Timer Units. May 13 12:53:54.777003 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 12:53:54.777010 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 12:53:54.777018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 12:53:54.777024 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 12:53:54.777031 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 13 12:53:54.777038 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 12:53:54.777044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:53:54.777068 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:53:54.777075 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 12:53:54.777082 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 12:53:54.777088 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 12:53:54.777097 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 13 12:53:54.777104 systemd[1]: Starting systemd-fsck-usr.service... May 13 12:53:54.777110 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 12:53:54.777117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 12:53:54.777123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:53:54.777130 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 12:53:54.777138 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:53:54.777144 systemd[1]: Finished systemd-fsck-usr.service. May 13 12:53:54.777151 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 12:53:54.777171 systemd-journald[243]: Collecting audit messages is disabled. May 13 12:53:54.777190 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 12:53:54.777197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 12:53:54.777204 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 12:53:54.777211 kernel: Bridge firewalling registered May 13 12:53:54.777217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 12:53:54.777224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:53:54.777232 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 12:53:54.777239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:53:54.777246 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:53:54.777253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 12:53:54.777259 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 12:53:54.777266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:53:54.777273 systemd-journald[243]: Journal started May 13 12:53:54.777291 systemd-journald[243]: Runtime Journal (/run/log/journal/aa136b7453194d15bb9fe1f46281a6f0) is 4.8M, max 38.8M, 34M free. May 13 12:53:54.717956 systemd-modules-load[244]: Inserted module 'overlay' May 13 12:53:54.737677 systemd-modules-load[244]: Inserted module 'br_netfilter' May 13 12:53:54.779066 systemd[1]: Started systemd-journald.service - Journal Service. May 13 12:53:54.780546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 13 12:53:54.784336 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1 May 13 12:53:54.789221 systemd-tmpfiles[283]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 13 12:53:54.790911 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:53:54.792243 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 12:53:54.816432 systemd-resolved[305]: Positive Trust Anchors: May 13 12:53:54.816662 systemd-resolved[305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 12:53:54.816829 systemd-resolved[305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 12:53:54.819173 systemd-resolved[305]: Defaulting to hostname 'linux'. May 13 12:53:54.819925 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 12:53:54.820067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 12:53:54.835076 kernel: SCSI subsystem initialized May 13 12:53:54.841058 kernel: Loading iSCSI transport class v2.0-870. May 13 12:53:54.849059 kernel: iscsi: registered transport (tcp) May 13 12:53:54.862061 kernel: iscsi: registered transport (qla4xxx) May 13 12:53:54.862092 kernel: QLogic iSCSI HBA Driver May 13 12:53:54.872386 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 12:53:54.885780 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:53:54.886908 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 12:53:54.908232 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 12:53:54.909144 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 12:53:54.948068 kernel: raid6: avx2x4 gen() 45502 MB/s May 13 12:53:54.965074 kernel: raid6: avx2x2 gen() 50876 MB/s May 13 12:53:54.982258 kernel: raid6: avx2x1 gen() 44424 MB/s May 13 12:53:54.982283 kernel: raid6: using algorithm avx2x2 gen() 50876 MB/s May 13 12:53:55.000260 kernel: raid6: .... xor() 32165 MB/s, rmw enabled May 13 12:53:55.000279 kernel: raid6: using avx2x2 recovery algorithm May 13 12:53:55.014064 kernel: xor: automatically using best checksumming function avx May 13 12:53:55.113176 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 12:53:55.116531 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 12:53:55.117470 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 13 12:53:55.137765 systemd-udevd[491]: Using default interface naming scheme 'v255'. May 13 12:53:55.141113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:53:55.141803 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 12:53:55.152645 dracut-pre-trigger[492]: rd.md=0: removing MD RAID activation May 13 12:53:55.166265 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 12:53:55.167038 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 12:53:55.245794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:53:55.247417 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 12:53:55.316062 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 13 12:53:55.319532 kernel: vmw_pvscsi: using 64bit dma May 13 12:53:55.319566 kernel: vmw_pvscsi: max_id: 16 May 13 12:53:55.319574 kernel: vmw_pvscsi: setting ring_pages to 8 May 13 12:53:55.324057 kernel: libata version 3.00 loaded. May 13 12:53:55.326061 kernel: ata_piix 0000:00:07.1: version 2.13 May 13 12:53:55.328138 kernel: scsi host1: ata_piix May 13 12:53:55.328256 kernel: vmw_pvscsi: enabling reqCallThreshold May 13 12:53:55.328265 kernel: vmw_pvscsi: driver-based request coalescing enabled May 13 12:53:55.328273 kernel: vmw_pvscsi: using MSI-X May 13 12:53:55.330058 kernel: scsi host2: ata_piix May 13 12:53:55.334603 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI May 13 12:53:55.334632 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 May 13 12:53:55.334640 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 May 13 12:53:55.337086 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 13 12:53:55.337201 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 13 12:53:55.342069 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 13 12:53:55.350958 (udev-worker)[539]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 13 12:53:55.351262 kernel: cryptd: max_cpu_qlen set to 1000 May 13 12:53:55.353477 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 13 12:53:55.353643 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 13 12:53:55.357367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 12:53:55.357531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:53:55.357852 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:53:55.358549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:53:55.381447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 12:53:55.503170 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 13 12:53:55.509081 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 13 12:53:55.515629 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 13 12:53:55.523056 kernel: AES CTR mode by8 optimization enabled May 13 12:53:55.523080 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 13 12:53:55.524057 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 12:53:55.524137 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 13 12:53:55.527170 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 13 12:53:55.527269 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 13 12:53:55.530069 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 May 13 12:53:55.586441 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:53:55.586480 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 12:53:55.604095 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 13 12:53:55.604268 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 12:53:55.619058 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 13 12:53:55.930154 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 13 12:53:55.937211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 12:53:55.943982 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 13 12:53:55.956084 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 13 12:53:55.956233 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 13 12:53:55.957500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 12:53:55.966247 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 12:53:55.966820 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 12:53:55.967242 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:53:55.967541 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 12:53:55.968201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 12:53:55.985382 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 12:53:56.198078 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:53:56.214069 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:53:57.208765 disk-uuid[665]: The operation has completed successfully. May 13 12:53:57.209599 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 12:53:57.281132 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 12:53:57.281199 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 12:53:57.297096 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 12:53:57.309757 sh[679]: Success May 13 12:53:57.321187 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 13 12:53:57.321213 kernel: device-mapper: uevent: version 1.0.3 May 13 12:53:57.322354 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 13 12:53:57.329100 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 13 12:53:57.360730 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 12:53:57.362102 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 12:53:57.373986 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 12:53:57.386843 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 13 12:53:57.386863 kernel: BTRFS: device fsid 3042589c-b63f-42f0-9a6f-a4369b1889f9 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (691) May 13 12:53:57.389662 kernel: BTRFS info (device dm-0): first mount of filesystem 3042589c-b63f-42f0-9a6f-a4369b1889f9 May 13 12:53:57.389682 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 12:53:57.389705 kernel: BTRFS info (device dm-0): using free-space-tree May 13 12:53:57.399304 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 12:53:57.399650 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 13 12:53:57.400243 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 13 12:53:57.402104 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 12:53:57.422061 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (714) May 13 12:53:57.425813 kernel: BTRFS info (device sda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:53:57.425834 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:53:57.425842 kernel: BTRFS info (device sda6): using free-space-tree May 13 12:53:57.438070 kernel: BTRFS info (device sda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:53:57.440165 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 12:53:57.441210 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 12:53:57.466916 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 12:53:57.468117 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 12:53:57.533276 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 12:53:57.535120 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 12:53:57.564893 systemd-networkd[870]: lo: Link UP May 13 12:53:57.564898 systemd-networkd[870]: lo: Gained carrier May 13 12:53:57.565711 systemd-networkd[870]: Enumeration completed May 13 12:53:57.565851 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 12:53:57.565994 systemd[1]: Reached target network.target - Network. May 13 12:53:57.566998 ignition[733]: Ignition 2.21.0 May 13 12:53:57.566154 systemd-networkd[870]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
May 13 12:53:57.569448 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 12:53:57.569548 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 12:53:57.567003 ignition[733]: Stage: fetch-offline May 13 12:53:57.569099 systemd-networkd[870]: ens192: Link UP May 13 12:53:57.567021 ignition[733]: no configs at "/usr/lib/ignition/base.d" May 13 12:53:57.569101 systemd-networkd[870]: ens192: Gained carrier May 13 12:53:57.567026 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:53:57.567091 ignition[733]: parsed url from cmdline: "" May 13 12:53:57.567092 ignition[733]: no config URL provided May 13 12:53:57.567096 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" May 13 12:53:57.567099 ignition[733]: no config at "/usr/lib/ignition/user.ign" May 13 12:53:57.567572 ignition[733]: config successfully fetched May 13 12:53:57.567590 ignition[733]: parsing config with SHA512: ef6e0b9d8614fa3884ed74b9d0b90d3e64e5827002ab8d8bc0eed1284bbd0a0426cf257ef415676f75851143b4528a880052885c044aa4d6eac5840caf77915f May 13 12:53:57.574302 unknown[733]: fetched base config from "system" May 13 12:53:57.574311 unknown[733]: fetched user config from "vmware" May 13 12:53:57.574533 ignition[733]: fetch-offline: fetch-offline passed May 13 12:53:57.574564 ignition[733]: Ignition finished successfully May 13 12:53:57.575533 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 12:53:57.575744 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 12:53:57.576198 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 12:53:57.591317 ignition[876]: Ignition 2.21.0 May 13 12:53:57.591326 ignition[876]: Stage: kargs May 13 12:53:57.591416 ignition[876]: no configs at "/usr/lib/ignition/base.d" May 13 12:53:57.591422 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:53:57.592458 ignition[876]: kargs: kargs passed May 13 12:53:57.592498 ignition[876]: Ignition finished successfully May 13 12:53:57.594278 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 12:53:57.595083 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 12:53:57.609625 ignition[882]: Ignition 2.21.0 May 13 12:53:57.609846 ignition[882]: Stage: disks May 13 12:53:57.610009 ignition[882]: no configs at "/usr/lib/ignition/base.d" May 13 12:53:57.610017 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:53:57.610736 ignition[882]: disks: disks passed May 13 12:53:57.610783 ignition[882]: Ignition finished successfully May 13 12:53:57.611421 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 12:53:57.611801 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 12:53:57.611935 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 12:53:57.612129 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 12:53:57.612348 systemd[1]: Reached target sysinit.target - System Initialization. May 13 12:53:57.612516 systemd[1]: Reached target basic.target - Basic System. May 13 12:53:57.613211 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 13 12:53:57.630816 systemd-fsck[890]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 13 12:53:57.631828 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 12:53:57.632708 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 12:53:57.706975 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 12:53:57.707134 kernel: EXT4-fs (sda9): mounted filesystem ebf7ca75-051f-4154-b098-5ec24084105d r/w with ordered data mode. Quota mode: none. May 13 12:53:57.707449 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 12:53:57.708419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 12:53:57.710085 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 12:53:57.710466 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 12:53:57.710664 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 12:53:57.710877 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 12:53:57.716018 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 12:53:57.717317 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 12:53:57.721066 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (898) May 13 12:53:57.723839 kernel: BTRFS info (device sda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:53:57.723855 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:53:57.723863 kernel: BTRFS info (device sda6): using free-space-tree May 13 12:53:57.729184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 12:53:57.742643 initrd-setup-root[922]: cut: /sysroot/etc/passwd: No such file or directory May 13 12:53:57.745546 initrd-setup-root[929]: cut: /sysroot/etc/group: No such file or directory May 13 12:53:57.747588 initrd-setup-root[936]: cut: /sysroot/etc/shadow: No such file or directory May 13 12:53:57.749516 initrd-setup-root[943]: cut: /sysroot/etc/gshadow: No such file or directory May 13 12:53:57.801586 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 12:53:57.802225 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 12:53:57.803128 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 12:53:57.816061 kernel: BTRFS info (device sda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:53:57.829335 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 12:53:57.830501 ignition[1011]: INFO : Ignition 2.21.0 May 13 12:53:57.830707 ignition[1011]: INFO : Stage: mount May 13 12:53:57.830881 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:53:57.831075 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:53:57.831622 ignition[1011]: INFO : mount: mount passed May 13 12:53:57.831622 ignition[1011]: INFO : Ignition finished successfully May 13 12:53:57.832421 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 12:53:57.833024 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 12:53:58.385192 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 12:53:58.386114 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 13 12:53:58.402989 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (1022) May 13 12:53:58.403023 kernel: BTRFS info (device sda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1 May 13 12:53:58.403032 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 12:53:58.404617 kernel: BTRFS info (device sda6): using free-space-tree May 13 12:53:58.407529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 12:53:58.421826 ignition[1039]: INFO : Ignition 2.21.0 May 13 12:53:58.422648 ignition[1039]: INFO : Stage: files May 13 12:53:58.422648 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:53:58.422648 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:53:58.423111 ignition[1039]: DEBUG : files: compiled without relabeling support, skipping May 13 12:53:58.424669 ignition[1039]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 12:53:58.424953 ignition[1039]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 12:53:58.426569 ignition[1039]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 12:53:58.426822 ignition[1039]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 12:53:58.427141 unknown[1039]: wrote ssh authorized keys file for user: core May 13 12:53:58.427403 ignition[1039]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 12:53:58.428786 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 12:53:58.428786 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 13 12:53:58.610594 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 12:53:58.846043 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 12:53:58.846043 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 12:53:58.846475 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 13 12:53:59.049424 systemd-networkd[870]: ens192: Gained IPv6LL May 13 12:53:59.338514 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 12:53:59.390072 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 12:53:59.390842 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 12:53:59.390842 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 12:53:59.390842 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 12:53:59.390842 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 12:53:59.390842 ignition[1039]: INFO : files: 
createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 12:53:59.390842 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 12:53:59.390842 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 12:53:59.390842 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 12:53:59.392882 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 12:53:59.393117 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 12:53:59.393117 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 12:53:59.395215 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 12:53:59.395215 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 12:53:59.395721 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 12:53:59.821369 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 13 12:54:00.063747 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 12:54:00.064026 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 12:54:00.069839 ignition[1039]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 12:54:00.069839 ignition[1039]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 13 12:54:00.081811 ignition[1039]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 12:54:00.086413 ignition[1039]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 12:54:00.086413 ignition[1039]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 13 12:54:00.086413 ignition[1039]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" May 13 12:54:00.086413 ignition[1039]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 12:54:00.087123 ignition[1039]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 12:54:00.087123 ignition[1039]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" May 13 12:54:00.087123 ignition[1039]: INFO : files: op(11): 
[started] setting preset to disabled for "coreos-metadata.service" May 13 12:54:00.552981 ignition[1039]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 12:54:00.555114 ignition[1039]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 12:54:00.555354 ignition[1039]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 13 12:54:00.555354 ignition[1039]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" May 13 12:54:00.555354 ignition[1039]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 13 12:54:00.555354 ignition[1039]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 12:54:00.556563 ignition[1039]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 12:54:00.556563 ignition[1039]: INFO : files: files passed May 13 12:54:00.556563 ignition[1039]: INFO : Ignition finished successfully May 13 12:54:00.556494 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 12:54:00.557732 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 12:54:00.560145 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 12:54:00.578313 initrd-setup-root-after-ignition[1070]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 12:54:00.578658 initrd-setup-root-after-ignition[1070]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 12:54:00.579212 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 12:54:00.580022 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 12:54:00.580416 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 12:54:00.581123 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 12:54:00.582124 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 12:54:00.582304 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 12:54:00.621538 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 12:54:00.621606 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 12:54:00.621906 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 12:54:00.622000 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 12:54:00.622229 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 12:54:00.622738 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 12:54:00.639150 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 12:54:00.639994 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 12:54:00.653510 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 12:54:00.653884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:54:00.654223 systemd[1]: Stopped target timers.target - Timer Units. 
May 13 12:54:00.654381 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 12:54:00.654478 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 12:54:00.654843 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 12:54:00.655139 systemd[1]: Stopped target basic.target - Basic System. May 13 12:54:00.655386 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 12:54:00.655633 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 12:54:00.655849 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 12:54:00.656075 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 13 12:54:00.656289 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 12:54:00.656547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 12:54:00.656739 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 12:54:00.656936 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 12:54:00.657157 systemd[1]: Stopped target swap.target - Swaps. May 13 12:54:00.657332 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 12:54:00.657400 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 12:54:00.657655 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 12:54:00.657886 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:54:00.658118 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 12:54:00.658227 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:54:00.658453 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 12:54:00.658524 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 12:54:00.658805 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 12:54:00.658901 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 12:54:00.659225 systemd[1]: Stopped target paths.target - Path Units. May 13 12:54:00.659371 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 12:54:00.659444 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:54:00.659658 systemd[1]: Stopped target slices.target - Slice Units. May 13 12:54:00.659846 systemd[1]: Stopped target sockets.target - Socket Units. May 13 12:54:00.660029 systemd[1]: iscsid.socket: Deactivated successfully. May 13 12:54:00.660115 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 12:54:00.660409 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 12:54:00.660477 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 12:54:00.660775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 12:54:00.660874 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 12:54:00.661123 systemd[1]: ignition-files.service: Deactivated successfully. May 13 12:54:00.661210 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 12:54:00.662092 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 12:54:00.662222 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 13 12:54:00.662325 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:54:00.663076 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 12:54:00.664133 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 12:54:00.664246 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:54:00.664570 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 12:54:00.664687 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 12:54:00.669018 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 12:54:00.678685 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 12:54:00.687174 ignition[1096]: INFO : Ignition 2.21.0 May 13 12:54:00.687174 ignition[1096]: INFO : Stage: umount May 13 12:54:00.687563 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 12:54:00.687563 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 12:54:00.688211 ignition[1096]: INFO : umount: umount passed May 13 12:54:00.688211 ignition[1096]: INFO : Ignition finished successfully May 13 12:54:00.689256 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 12:54:00.689475 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 12:54:00.689822 systemd[1]: Stopped target network.target - Network. May 13 12:54:00.690041 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 12:54:00.690180 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 12:54:00.690438 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 12:54:00.690563 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 12:54:00.690812 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 12:54:00.690938 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 12:54:00.691198 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 12:54:00.691349 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 12:54:00.691677 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 12:54:00.692152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 12:54:00.694336 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 12:54:00.694435 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 12:54:00.695901 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 12:54:00.696103 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 12:54:00.696167 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 12:54:00.697198 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 12:54:00.697487 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 13 12:54:00.697637 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 12:54:00.697658 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 12:54:00.698729 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 12:54:00.698833 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 12:54:00.698863 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 13 12:54:00.699019 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 13 12:54:00.699227 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 13 12:54:00.699367 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 12:54:00.699397 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 12:54:00.700557 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 12:54:00.700585 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 12:54:00.700927 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 12:54:00.700953 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:54:00.701724 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 12:54:00.702870 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 12:54:00.702910 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 12:54:00.711954 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 12:54:00.712027 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 12:54:00.714389 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 12:54:00.714472 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:54:00.714766 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 12:54:00.714795 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 12:54:00.715014 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 12:54:00.715031 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:54:00.715202 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 12:54:00.715226 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 12:54:00.715504 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 12:54:00.715527 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 12:54:00.715809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 12:54:00.715834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 12:54:00.716628 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 12:54:00.716738 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 13 12:54:00.716772 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:54:00.716948 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 12:54:00.716971 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:54:00.717135 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 12:54:00.717156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:54:00.718275 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 13 12:54:00.718307 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 12:54:00.718335 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
May 13 12:54:00.720684 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 12:54:00.726569 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 12:54:00.726638 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 12:54:00.831181 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 12:54:00.831250 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 12:54:00.831548 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 12:54:00.831671 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 12:54:00.831699 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 12:54:00.832340 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 12:54:00.847495 systemd[1]: Switching root. May 13 12:54:00.883100 systemd-journald[243]: Journal stopped May 13 12:54:02.887398 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). May 13 12:54:02.887420 kernel: SELinux: policy capability network_peer_controls=1 May 13 12:54:02.887430 kernel: SELinux: policy capability open_perms=1 May 13 12:54:02.887436 kernel: SELinux: policy capability extended_socket_class=1 May 13 12:54:02.887441 kernel: SELinux: policy capability always_check_network=0 May 13 12:54:02.887448 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 12:54:02.887455 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 12:54:02.887460 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 12:54:02.887466 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 12:54:02.887472 kernel: SELinux: policy capability userspace_initial_context=0 May 13 12:54:02.887477 kernel: audit: type=1403 audit(1747140842.112:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 12:54:02.887484 systemd[1]: Successfully loaded SELinux policy in 52.531ms. May 13 12:54:02.887493 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.314ms. May 13 12:54:02.887500 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 12:54:02.887507 systemd[1]: Detected virtualization vmware. May 13 12:54:02.887514 systemd[1]: Detected architecture x86-64. May 13 12:54:02.887522 systemd[1]: Detected first boot. May 13 12:54:02.887529 systemd[1]: Initializing machine ID from random generator. May 13 12:54:02.887535 zram_generator::config[1139]: No configuration found. May 13 12:54:02.887617 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 13 12:54:02.887628 kernel: Guest personality initialized and is active May 13 12:54:02.887635 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 13 12:54:02.887641 kernel: Initialized host personality May 13 12:54:02.887649 kernel: NET: Registered PF_VSOCK protocol family May 13 12:54:02.887656 systemd[1]: Populated /etc with preset unit settings. May 13 12:54:02.887664 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:54:02.887671 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." 
| grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 13 12:54:02.887678 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 12:54:02.887685 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 12:54:02.887691 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 12:54:02.887699 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 12:54:02.887706 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 12:54:02.887713 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 12:54:02.887720 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 12:54:02.887727 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 12:54:02.887733 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 12:54:02.887741 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 12:54:02.887749 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 12:54:02.887756 systemd[1]: Created slice user.slice - User and Session Slice. May 13 12:54:02.887762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 12:54:02.887771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 12:54:02.887778 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 12:54:02.887786 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 12:54:02.887793 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 12:54:02.887800 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 12:54:02.887808 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 12:54:02.887815 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 12:54:02.887822 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 12:54:02.887829 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 12:54:02.887836 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 12:54:02.887843 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 12:54:02.887850 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 12:54:02.887857 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 12:54:02.887865 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 12:54:02.887872 systemd[1]: Reached target slices.target - Slice Units. May 13 12:54:02.887879 systemd[1]: Reached target swap.target - Swaps. May 13 12:54:02.887886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 12:54:02.887893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 12:54:02.887902 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 12:54:02.887909 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 12:54:02.887916 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
May 13 12:54:02.887923 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 12:54:02.887930 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 12:54:02.887938 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 12:54:02.887946 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 12:54:02.887953 systemd[1]: Mounting media.mount - External Media Directory... May 13 12:54:02.887961 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:02.887968 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 12:54:02.887975 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 12:54:02.887983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 12:54:02.887990 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 12:54:02.887997 systemd[1]: Reached target machines.target - Containers. May 13 12:54:02.888004 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 12:54:02.888012 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 13 12:54:02.888020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 12:54:02.888027 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 12:54:02.888034 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:54:02.888041 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 12:54:02.888058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 12:54:02.888067 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 12:54:02.888075 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 12:54:02.888082 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 12:54:02.888091 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 12:54:02.888098 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 12:54:02.888106 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 12:54:02.888113 systemd[1]: Stopped systemd-fsck-usr.service. May 13 12:54:02.888121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:54:02.888128 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 12:54:02.888135 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 12:54:02.888142 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 12:54:02.888149 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 12:54:02.888158 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 12:54:02.888165 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
May 13 12:54:02.888172 systemd[1]: verity-setup.service: Deactivated successfully. May 13 12:54:02.888180 systemd[1]: Stopped verity-setup.service. May 13 12:54:02.888187 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:02.888194 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 12:54:02.888201 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 12:54:02.888208 systemd[1]: Mounted media.mount - External Media Directory. May 13 12:54:02.888216 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 12:54:02.888223 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 12:54:02.888230 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 12:54:02.888237 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 12:54:02.888244 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 12:54:02.888251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 12:54:02.888259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:54:02.888266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:54:02.888273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:54:02.888282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:54:02.888289 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 12:54:02.888296 kernel: fuse: init (API version 7.41) May 13 12:54:02.888303 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 12:54:02.888310 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 12:54:02.888317 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 12:54:02.888324 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 12:54:02.888331 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 12:54:02.888339 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 12:54:02.888347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:54:02.888356 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 12:54:02.888365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 12:54:02.888372 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 12:54:02.888380 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 12:54:02.888387 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 12:54:02.888396 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 12:54:02.888403 kernel: loop: module loaded May 13 12:54:02.888423 systemd-journald[1232]: Collecting audit messages is disabled. May 13 12:54:02.888441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 12:54:02.888449 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 13 12:54:02.888457 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 12:54:02.888474 systemd-journald[1232]: Journal started May 13 12:54:02.888492 systemd-journald[1232]: Runtime Journal (/run/log/journal/d26ccfd18c3d4250b4f5914d96acd3fc) is 4.8M, max 38.8M, 34M free. May 13 12:54:02.695655 systemd[1]: Queued start job for default target multi-user.target. May 13 12:54:02.701946 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 13 12:54:02.702204 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 12:54:02.888974 jq[1209]: true May 13 12:54:02.889127 systemd[1]: Started systemd-journald.service - Journal Service. May 13 12:54:02.889715 jq[1243]: true May 13 12:54:02.890623 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 12:54:02.891102 kernel: ACPI: bus type drm_connector registered May 13 12:54:02.891333 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 12:54:02.892210 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 12:54:02.897433 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 12:54:02.897740 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 12:54:02.908216 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 12:54:02.911136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 12:54:02.911289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 12:54:02.912210 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:54:02.913020 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 12:54:02.931279 ignition[1262]: Ignition 2.21.0 May 13 12:54:02.931588 ignition[1262]: deleting config from guestinfo properties May 13 12:54:02.960354 ignition[1262]: Successfully deleted config May 13 12:54:02.960815 systemd-journald[1232]: Time spent on flushing to /var/log/journal/d26ccfd18c3d4250b4f5914d96acd3fc is 43.682ms for 1763 entries. May 13 12:54:02.960815 systemd-journald[1232]: System Journal (/var/log/journal/d26ccfd18c3d4250b4f5914d96acd3fc) is 8M, max 584.8M, 576.8M free. May 13 12:54:03.026860 systemd-journald[1232]: Received client request to flush runtime journal. May 13 12:54:03.026893 kernel: loop0: detected capacity change from 0 to 113872 May 13 12:54:02.967355 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 12:54:02.968084 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 12:54:02.973301 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 12:54:02.977764 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 13 12:54:02.987247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:54:03.006893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 12:54:03.029088 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 12:54:03.029440 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
May 13 12:54:03.043236 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 12:54:03.053024 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 12:54:03.054266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 12:54:03.061087 kernel: loop1: detected capacity change from 0 to 218376 May 13 12:54:03.178170 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. May 13 12:54:03.178183 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. May 13 12:54:03.181826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 12:54:03.296122 kernel: loop2: detected capacity change from 0 to 146240 May 13 12:54:03.332260 kernel: loop3: detected capacity change from 0 to 2960 May 13 12:54:03.365065 kernel: loop4: detected capacity change from 0 to 113872 May 13 12:54:03.600073 kernel: loop5: detected capacity change from 0 to 218376 May 13 12:54:03.704368 kernel: loop6: detected capacity change from 0 to 146240 May 13 12:54:03.703114 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 12:54:03.704238 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 12:54:03.716609 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 12:54:03.724068 kernel: loop7: detected capacity change from 0 to 2960 May 13 12:54:03.743499 (sd-merge)[1313]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 13 12:54:03.744519 (sd-merge)[1313]: Merged extensions into '/usr'. May 13 12:54:03.751766 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... May 13 12:54:03.751857 systemd[1]: Reloading... May 13 12:54:03.797087 zram_generator::config[1337]: No configuration found. May 13 12:54:03.872724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:54:03.882184 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:54:03.929335 systemd[1]: Reloading finished in 177 ms. May 13 12:54:03.949141 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 12:54:03.954440 systemd[1]: Starting ensure-sysext.service... May 13 12:54:03.957115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 12:54:03.975200 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 13 12:54:03.975220 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 13 12:54:03.975370 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 12:54:03.975534 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 12:54:03.976040 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 12:54:03.976159 systemd[1]: Reload requested from client PID 1396 ('systemctl') (unit ensure-sysext.service)... May 13 12:54:03.976170 systemd[1]: Reloading... 
May 13 12:54:03.976299 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. May 13 12:54:03.976333 systemd-tmpfiles[1397]: ACLs are not supported, ignoring. May 13 12:54:04.021101 zram_generator::config[1428]: No configuration found. May 13 12:54:04.049353 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot. May 13 12:54:04.049359 systemd-tmpfiles[1397]: Skipping /boot May 13 12:54:04.055428 systemd-tmpfiles[1397]: Detected autofs mount point /boot during canonicalization of boot. May 13 12:54:04.055434 systemd-tmpfiles[1397]: Skipping /boot May 13 12:54:04.086001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:54:04.094659 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:54:04.142567 systemd[1]: Reloading finished in 166 ms. May 13 12:54:04.365041 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:04.367760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:54:04.369231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 12:54:04.372158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 12:54:04.372317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:54:04.372386 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:54:04.372461 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:04.374698 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:04.374789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:54:04.374846 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:54:04.374906 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:04.376374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:54:04.376547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:54:04.378383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:54:04.378494 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:54:04.381466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 12:54:04.381785 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 12:54:04.381879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 13 12:54:04.384471 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:04.390117 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:54:04.398132 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 12:54:04.400042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 12:54:04.402116 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 12:54:04.403991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 12:54:04.404191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 12:54:04.404216 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 12:54:04.406716 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 12:54:04.410432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 12:54:04.411715 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 12:54:04.412890 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 12:54:04.413271 systemd[1]: Finished ensure-sysext.service. May 13 12:54:04.417288 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 12:54:04.420263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 12:54:04.422507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 12:54:04.422896 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 12:54:04.424095 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 12:54:04.424213 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 12:54:04.424881 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 12:54:04.427882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 12:54:04.428235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 12:54:04.428350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 12:54:04.428782 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 12:54:04.440998 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 12:54:04.445101 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 12:54:04.454533 systemd-udevd[1502]: Using default interface naming scheme 'v255'. May 13 12:54:04.483844 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 12:54:04.538602 augenrules[1530]: No rules May 13 12:54:04.538981 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:54:04.539147 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 13 12:54:04.553814 systemd-resolved[1495]: Positive Trust Anchors: May 13 12:54:04.554327 systemd-resolved[1495]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 12:54:04.554568 systemd-resolved[1495]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 12:54:04.555084 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 12:54:04.556437 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 12:54:04.556577 systemd[1]: Reached target time-set.target - System Time Set. May 13 12:54:04.577336 systemd-resolved[1495]: Defaulting to hostname 'linux'. May 13 12:54:04.578323 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 12:54:04.578490 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 12:54:04.697221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 12:54:04.701146 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 12:54:04.796158 systemd-networkd[1542]: lo: Link UP May 13 12:54:04.796163 systemd-networkd[1542]: lo: Gained carrier May 13 12:54:04.797259 systemd-networkd[1542]: Enumeration completed May 13 12:54:04.797323 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 12:54:04.797489 systemd[1]: Reached target network.target - Network. May 13 12:54:04.799685 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 12:54:04.801404 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 12:54:04.833263 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 12:54:04.836289 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 12:54:04.836863 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 12:54:04.840538 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 12:54:04.842678 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 12:54:04.843242 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 12:54:04.846814 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 12:54:04.858921 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 12:54:04.859438 systemd[1]: Reached target sysinit.target - System Initialization. May 13 12:54:04.860338 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
May 13 12:54:04.860466 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 12:54:04.860577 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 13 12:54:04.860921 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 12:54:04.861127 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 12:54:04.861493 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 12:54:04.861617 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 12:54:04.861638 systemd[1]: Reached target paths.target - Path Units. May 13 12:54:04.861998 systemd[1]: Reached target timers.target - Timer Units. May 13 12:54:04.863138 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 12:54:04.866428 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 12:54:04.868904 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 12:54:04.869353 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 12:54:04.869688 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 12:54:04.872464 systemd-networkd[1542]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 13 12:54:04.873691 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 12:54:04.874415 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 12:54:04.876452 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 12:54:04.876588 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 12:54:04.876788 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 12:54:04.877797 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:54:04.877910 systemd[1]: Reached target basic.target - Basic System. May 13 12:54:04.878034 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 12:54:04.878108 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 12:54:04.879986 systemd[1]: Starting containerd.service - containerd container runtime... May 13 12:54:04.882131 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 12:54:04.884454 systemd-networkd[1542]: ens192: Link UP May 13 12:54:04.884558 systemd-networkd[1542]: ens192: Gained carrier May 13 12:54:04.885204 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 12:54:04.887196 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 12:54:04.891572 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. May 13 12:54:04.891706 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 12:54:04.892097 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 12:54:04.893560 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
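systemd-networkd reports that ens192 is configured from /etc/systemd/network/00-vmware.network, and the vmxnet3 link then comes up. The file's contents are not captured in this log; assuming a simple DHCP setup on that interface, a .network file of that shape would look roughly like the hypothetical sketch below:

    cat /etc/systemd/network/00-vmware.network
    # hypothetical contents -- the real file is not shown in this log
    [Match]
    Name=ens192

    [Network]
    DHCP=yes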
May 13 12:54:04.898819 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 12:54:04.903591 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 12:54:04.906197 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 12:54:04.906939 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing passwd entry cache May 13 12:54:04.907408 oslogin_cache_refresh[1589]: Refreshing passwd entry cache May 13 12:54:04.910238 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 12:54:04.914134 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 12:54:04.914762 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 12:54:04.916563 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting users, quitting May 13 12:54:04.916563 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 13 12:54:04.916563 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Refreshing group entry cache May 13 12:54:04.916363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 12:54:04.916243 oslogin_cache_refresh[1589]: Failure getting users, quitting May 13 12:54:04.916255 oslogin_cache_refresh[1589]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 13 12:54:04.916282 oslogin_cache_refresh[1589]: Refreshing group entry cache May 13 12:54:04.918506 jq[1587]: false May 13 12:54:04.918703 systemd[1]: Starting update-engine.service - Update Engine... May 13 12:54:04.919288 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Failure getting groups, quitting May 13 12:54:04.919288 google_oslogin_nss_cache[1589]: oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 13 12:54:04.919258 oslogin_cache_refresh[1589]: Failure getting groups, quitting May 13 12:54:04.919263 oslogin_cache_refresh[1589]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 13 12:54:04.924578 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 12:54:04.931721 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 13 12:54:04.932241 jq[1603]: true May 13 12:54:04.935935 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 12:54:04.936198 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 12:54:04.936308 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 12:54:04.936448 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 13 12:54:04.936549 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 13 12:54:04.936767 systemd[1]: motdgen.service: Deactivated successfully. May 13 12:54:04.936864 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 12:54:04.946605 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 13 12:54:04.948863 update_engine[1601]: I20250513 12:54:04.948797 1601 main.cc:92] Flatcar Update Engine starting May 13 12:54:04.949674 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 12:54:04.964557 jq[1609]: true May 13 12:54:04.972294 kernel: mousedev: PS/2 mouse device common for all mice May 13 12:54:04.972973 extend-filesystems[1588]: Found loop4 May 13 12:54:04.972973 extend-filesystems[1588]: Found loop5 May 13 12:54:04.972973 extend-filesystems[1588]: Found loop6 May 13 12:54:04.973423 extend-filesystems[1588]: Found loop7 May 13 12:54:04.973423 extend-filesystems[1588]: Found sda May 13 12:54:04.973423 extend-filesystems[1588]: Found sda1 May 13 12:54:04.973423 extend-filesystems[1588]: Found sda2 May 13 12:54:04.973423 extend-filesystems[1588]: Found sda3 May 13 12:54:04.973423 extend-filesystems[1588]: Found usr May 13 12:54:04.973423 extend-filesystems[1588]: Found sda4 May 13 12:54:04.973423 extend-filesystems[1588]: Found sda6 May 13 12:54:04.973423 extend-filesystems[1588]: Found sda7 May 13 12:54:04.973423 extend-filesystems[1588]: Found sda9 May 13 12:54:04.973423 extend-filesystems[1588]: Checking size of /dev/sda9 May 13 12:54:04.974273 (ntainerd)[1620]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 12:54:04.983105 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 13 12:54:04.985706 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 13 12:54:04.995302 tar[1608]: linux-amd64/LICENSE May 13 12:54:04.995302 tar[1608]: linux-amd64/helm May 13 12:54:05.005058 extend-filesystems[1588]: Old size kept for /dev/sda9 May 13 12:54:05.005058 extend-filesystems[1588]: Found sr0 May 13 12:54:05.007973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 13 12:54:05.009883 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 12:54:05.010038 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 12:54:05.012177 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 12:54:05.043122 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 13 12:54:05.043556 systemd-logind[1597]: New seat seat0. May 13 12:54:05.044381 systemd[1]: Started systemd-logind.service - User Login Management. May 13 12:54:05.055245 unknown[1634]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 13 12:54:05.056590 unknown[1634]: Core dump limit set to -1 May 13 12:54:05.074262 bash[1644]: Updated "/home/core/.ssh/authorized_keys" May 13 12:54:05.076612 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 12:54:05.076990 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 12:54:05.077852 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 12:54:05.103062 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 13 12:54:05.108061 kernel: ACPI: button: Power Button [PWRF] May 13 12:54:05.141676 dbus-daemon[1585]: [system] SELinux support is enabled May 13 12:54:05.141790 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 13 12:54:05.144707 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 12:54:05.144729 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 12:54:05.144863 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 12:54:05.144875 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 12:54:05.154737 dbus-daemon[1585]: [system] Successfully activated service 'org.freedesktop.systemd1' May 13 12:54:05.159293 update_engine[1601]: I20250513 12:54:05.159078 1601 update_check_scheduler.cc:74] Next update check in 9m59s May 13 12:54:05.159221 systemd[1]: Started update-engine.service - Update Engine. May 13 12:54:05.168100 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 12:54:05.229427 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! May 13 12:54:05.358029 containerd[1620]: time="2025-05-13T12:54:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 12:54:05.361464 (udev-worker)[1539]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 13 12:54:05.365443 containerd[1620]: time="2025-05-13T12:54:05.365413712Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 13 12:54:05.399600 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 12:54:05.405031 containerd[1620]: time="2025-05-13T12:54:05.404995302Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.333µs" May 13 12:54:05.405031 containerd[1620]: time="2025-05-13T12:54:05.405020467Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 12:54:05.405132 containerd[1620]: time="2025-05-13T12:54:05.405034830Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 12:54:05.405161 containerd[1620]: time="2025-05-13T12:54:05.405148470Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 12:54:05.405200 containerd[1620]: time="2025-05-13T12:54:05.405160633Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 12:54:05.405200 containerd[1620]: time="2025-05-13T12:54:05.405175594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:54:05.405227 containerd[1620]: time="2025-05-13T12:54:05.405209796Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:54:05.405227 containerd[1620]: time="2025-05-13T12:54:05.405216917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:54:05.405359 containerd[1620]: time="2025-05-13T12:54:05.405339618Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:54:05.405359 containerd[1620]: time="2025-05-13T12:54:05.405349345Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:54:05.405359 containerd[1620]: time="2025-05-13T12:54:05.405355440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:54:05.405359 containerd[1620]: time="2025-05-13T12:54:05.405359826Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 12:54:05.405421 containerd[1620]: time="2025-05-13T12:54:05.405398805Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 12:54:05.405530 containerd[1620]: time="2025-05-13T12:54:05.405516499Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:54:05.405551 containerd[1620]: time="2025-05-13T12:54:05.405534740Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:54:05.405551 containerd[1620]: time="2025-05-13T12:54:05.405541556Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 12:54:05.407683 containerd[1620]: time="2025-05-13T12:54:05.407638914Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 12:54:05.407855 containerd[1620]: time="2025-05-13T12:54:05.407841965Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 12:54:05.407932 containerd[1620]: time="2025-05-13T12:54:05.407920269Z" level=info msg="metadata content store policy set" policy=shared May 13 12:54:05.411175 locksmithd[1657]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 12:54:05.453566 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (Power Button) May 13 12:54:05.477300 containerd[1620]: time="2025-05-13T12:54:05.477267452Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 12:54:05.477355 containerd[1620]: time="2025-05-13T12:54:05.477310898Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 12:54:05.477355 containerd[1620]: time="2025-05-13T12:54:05.477322335Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 12:54:05.477355 containerd[1620]: time="2025-05-13T12:54:05.477329448Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 12:54:05.477355 containerd[1620]: time="2025-05-13T12:54:05.477338529Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 12:54:05.477355 containerd[1620]: time="2025-05-13T12:54:05.477344605Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 12:54:05.477355 containerd[1620]: time="2025-05-13T12:54:05.477352927Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 12:54:05.477444 containerd[1620]: time="2025-05-13T12:54:05.477359621Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 12:54:05.477444 containerd[1620]: time="2025-05-13T12:54:05.477365668Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 12:54:05.477444 containerd[1620]: time="2025-05-13T12:54:05.477371047Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 12:54:05.477444 containerd[1620]: time="2025-05-13T12:54:05.477375584Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 12:54:05.477444 containerd[1620]: time="2025-05-13T12:54:05.477382808Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 12:54:05.477506 containerd[1620]: time="2025-05-13T12:54:05.477460777Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 12:54:05.477506 containerd[1620]: time="2025-05-13T12:54:05.477474042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 12:54:05.477506 containerd[1620]: time="2025-05-13T12:54:05.477482542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 12:54:05.477506 containerd[1620]: time="2025-05-13T12:54:05.477488483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 12:54:05.477506 containerd[1620]: time="2025-05-13T12:54:05.477495277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 12:54:05.477506 containerd[1620]: time="2025-05-13T12:54:05.477501179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 12:54:05.477583 containerd[1620]: time="2025-05-13T12:54:05.477507088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 12:54:05.477583 containerd[1620]: time="2025-05-13T12:54:05.477512544Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 12:54:05.477583 containerd[1620]: time="2025-05-13T12:54:05.477518841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 12:54:05.477583 containerd[1620]: time="2025-05-13T12:54:05.477524549Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 12:54:05.477583 containerd[1620]: time="2025-05-13T12:54:05.477530259Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 12:54:05.477583 containerd[1620]: time="2025-05-13T12:54:05.477578510Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 12:54:05.477666 containerd[1620]: time="2025-05-13T12:54:05.477587552Z" level=info msg="Start snapshots syncer" May 13 12:54:05.477666 containerd[1620]: time="2025-05-13T12:54:05.477600679Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 12:54:05.477765 containerd[1620]: time="2025-05-13T12:54:05.477740594Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 12:54:05.477832 containerd[1620]: time="2025-05-13T12:54:05.477772573Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 12:54:05.479562 containerd[1620]: time="2025-05-13T12:54:05.479548074Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 12:54:05.479639 containerd[1620]: time="2025-05-13T12:54:05.479622507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 12:54:05.479662 containerd[1620]: time="2025-05-13T12:54:05.479644462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 12:54:05.479676 containerd[1620]: time="2025-05-13T12:54:05.479664571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 12:54:05.479676 containerd[1620]: time="2025-05-13T12:54:05.479671329Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 12:54:05.479702 containerd[1620]: time="2025-05-13T12:54:05.479678568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 12:54:05.479702 containerd[1620]: time="2025-05-13T12:54:05.479684536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 12:54:05.479702 containerd[1620]: time="2025-05-13T12:54:05.479690762Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 12:54:05.479744 containerd[1620]: time="2025-05-13T12:54:05.479705846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 12:54:05.479744 containerd[1620]: 
time="2025-05-13T12:54:05.479714602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 12:54:05.479744 containerd[1620]: time="2025-05-13T12:54:05.479720801Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 12:54:05.479784 containerd[1620]: time="2025-05-13T12:54:05.479747179Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:54:05.479784 containerd[1620]: time="2025-05-13T12:54:05.479757660Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:54:05.479784 containerd[1620]: time="2025-05-13T12:54:05.479762777Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:54:05.479784 containerd[1620]: time="2025-05-13T12:54:05.479768489Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:54:05.479784 containerd[1620]: time="2025-05-13T12:54:05.479772994Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 12:54:05.479784 containerd[1620]: time="2025-05-13T12:54:05.479780651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 12:54:05.479862 containerd[1620]: time="2025-05-13T12:54:05.479786859Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 12:54:05.479862 containerd[1620]: time="2025-05-13T12:54:05.479796121Z" level=info msg="runtime interface created" May 13 12:54:05.479862 containerd[1620]: time="2025-05-13T12:54:05.479823205Z" level=info msg="created NRI interface" May 13 12:54:05.479862 containerd[1620]: time="2025-05-13T12:54:05.479828407Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 12:54:05.479862 containerd[1620]: time="2025-05-13T12:54:05.479834954Z" level=info msg="Connect containerd service" May 13 12:54:05.479862 containerd[1620]: time="2025-05-13T12:54:05.479853476Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 12:54:05.480385 containerd[1620]: time="2025-05-13T12:54:05.480370569Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:54:05.487204 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:54:05.647546 sshd_keygen[1639]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 12:54:05.664384 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 12:54:05.666100 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 12:54:05.677414 systemd[1]: issuegen.service: Deactivated successfully. May 13 12:54:05.677603 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 12:54:05.680196 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 12:54:05.698983 tar[1608]: linux-amd64/README.md May 13 12:54:05.706612 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
May 13 12:54:05.708193 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 12:54:05.710299 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 12:54:05.710488 systemd[1]: Reached target getty.target - Login Prompts. May 13 12:54:05.711888 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828312009Z" level=info msg="Start subscribing containerd event" May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828359438Z" level=info msg="Start recovering state" May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828403587Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828413322Z" level=info msg="Start event monitor" May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828423679Z" level=info msg="Start cni network conf syncer for default" May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828427568Z" level=info msg="Start streaming server" May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828432168Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828436361Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828453868Z" level=info msg="runtime interface starting up..." May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828457009Z" level=info msg="starting plugins..." May 13 12:54:05.828761 containerd[1620]: time="2025-05-13T12:54:05.828466690Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 12:54:05.828603 systemd[1]: Started containerd.service - containerd container runtime. May 13 12:54:05.829378 containerd[1620]: time="2025-05-13T12:54:05.829304855Z" level=info msg="containerd successfully booted in 0.471482s" May 13 12:54:05.937782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:54:06.409194 systemd-networkd[1542]: ens192: Gained IPv6LL May 13 12:54:06.410128 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. May 13 12:54:06.411072 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 12:54:06.411552 systemd[1]: Reached target network-online.target - Network is Online. May 13 12:54:06.412675 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 13 12:54:06.420783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:06.422249 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 12:54:06.470497 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 12:54:06.491197 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 12:54:06.491393 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 13 12:54:06.491837 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 12:54:07.241545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:07.242268 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 12:54:07.242985 systemd[1]: Startup finished in 2.616s (kernel) + 7.495s (initrd) + 5.182s (userspace) = 15.293s. 
May 13 12:54:07.247312 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:54:07.275155 login[1762]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 12:54:07.276813 login[1763]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 12:54:07.281678 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 12:54:07.282399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 12:54:07.289436 systemd-logind[1597]: New session 2 of user core. May 13 12:54:07.294011 systemd-logind[1597]: New session 1 of user core. May 13 12:54:07.298683 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 12:54:07.301562 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 12:54:07.310571 (systemd)[1812]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 12:54:07.312441 systemd-logind[1597]: New session c1 of user core. May 13 12:54:07.411855 systemd[1812]: Queued start job for default target default.target. May 13 12:54:07.418946 systemd[1812]: Created slice app.slice - User Application Slice. May 13 12:54:07.419311 systemd[1812]: Reached target paths.target - Paths. May 13 12:54:07.419383 systemd[1812]: Reached target timers.target - Timers. May 13 12:54:07.422115 systemd[1812]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 12:54:07.427479 systemd[1812]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 12:54:07.427602 systemd[1812]: Reached target sockets.target - Sockets. May 13 12:54:07.427668 systemd[1812]: Reached target basic.target - Basic System. May 13 12:54:07.427691 systemd[1812]: Reached target default.target - Main User Target. May 13 12:54:07.427708 systemd[1812]: Startup finished in 109ms. May 13 12:54:07.427876 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 12:54:07.435774 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 12:54:07.436521 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 12:54:07.708914 kubelet[1805]: E0513 12:54:07.708841 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:54:07.709840 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:54:07.709923 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:54:07.710124 systemd[1]: kubelet.service: Consumed 601ms CPU time, 248.5M memory peak. May 13 12:54:08.297564 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. May 13 12:54:17.960335 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 12:54:17.961449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:18.303732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 12:54:18.306786 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:54:18.377187 kubelet[1856]: E0513 12:54:18.377153 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:54:18.379672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:54:18.379810 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:54:18.380163 systemd[1]: kubelet.service: Consumed 105ms CPU time, 101.9M memory peak. May 13 12:54:28.630211 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 12:54:28.631815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:54:29.068109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:54:29.070480 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:54:29.111354 kubelet[1871]: E0513 12:54:29.111314 1871 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:54:29.112777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:54:29.112858 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:54:29.113193 systemd[1]: kubelet.service: Consumed 95ms CPU time, 104.3M memory peak. May 13 12:54:35.153871 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 12:54:35.155272 systemd[1]: Started sshd@0-139.178.70.101:22-147.75.109.163:43428.service - OpenSSH per-connection server daemon (147.75.109.163:43428). May 13 12:54:35.202158 sshd[1879]: Accepted publickey for core from 147.75.109.163 port 43428 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:54:35.202822 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:35.205418 systemd-logind[1597]: New session 3 of user core. May 13 12:54:35.213333 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 12:54:35.269600 systemd[1]: Started sshd@1-139.178.70.101:22-147.75.109.163:43440.service - OpenSSH per-connection server daemon (147.75.109.163:43440). May 13 12:54:35.307303 sshd[1884]: Accepted publickey for core from 147.75.109.163 port 43440 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:54:35.307929 sshd-session[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:35.310709 systemd-logind[1597]: New session 4 of user core. May 13 12:54:35.320151 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 12:54:35.368901 sshd[1886]: Connection closed by 147.75.109.163 port 43440 May 13 12:54:35.369847 sshd-session[1884]: pam_unix(sshd:session): session closed for user core May 13 12:54:35.379693 systemd[1]: sshd@1-139.178.70.101:22-147.75.109.163:43440.service: Deactivated successfully. 
May 13 12:54:35.380747 systemd[1]: session-4.scope: Deactivated successfully. May 13 12:54:35.381388 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit. May 13 12:54:35.382812 systemd[1]: Started sshd@2-139.178.70.101:22-147.75.109.163:43452.service - OpenSSH per-connection server daemon (147.75.109.163:43452). May 13 12:54:35.384287 systemd-logind[1597]: Removed session 4. May 13 12:54:35.424522 sshd[1892]: Accepted publickey for core from 147.75.109.163 port 43452 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:54:35.425230 sshd-session[1892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:35.428554 systemd-logind[1597]: New session 5 of user core. May 13 12:54:35.440378 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 12:54:35.487938 sshd[1894]: Connection closed by 147.75.109.163 port 43452 May 13 12:54:35.488301 sshd-session[1892]: pam_unix(sshd:session): session closed for user core May 13 12:54:35.498022 systemd[1]: sshd@2-139.178.70.101:22-147.75.109.163:43452.service: Deactivated successfully. May 13 12:54:35.499525 systemd[1]: session-5.scope: Deactivated successfully. May 13 12:54:35.500725 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit. May 13 12:54:35.501671 systemd[1]: Started sshd@3-139.178.70.101:22-147.75.109.163:43462.service - OpenSSH per-connection server daemon (147.75.109.163:43462). May 13 12:54:35.502629 systemd-logind[1597]: Removed session 5. May 13 12:54:35.547803 sshd[1900]: Accepted publickey for core from 147.75.109.163 port 43462 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:54:35.548726 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:35.552201 systemd-logind[1597]: New session 6 of user core. May 13 12:54:35.559248 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 12:54:35.608380 sshd[1902]: Connection closed by 147.75.109.163 port 43462 May 13 12:54:35.608737 sshd-session[1900]: pam_unix(sshd:session): session closed for user core May 13 12:54:35.621519 systemd[1]: sshd@3-139.178.70.101:22-147.75.109.163:43462.service: Deactivated successfully. May 13 12:54:35.623264 systemd[1]: session-6.scope: Deactivated successfully. May 13 12:54:35.624605 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit. May 13 12:54:35.626271 systemd[1]: Started sshd@4-139.178.70.101:22-147.75.109.163:43470.service - OpenSSH per-connection server daemon (147.75.109.163:43470). May 13 12:54:35.627737 systemd-logind[1597]: Removed session 6. May 13 12:54:35.663504 sshd[1908]: Accepted publickey for core from 147.75.109.163 port 43470 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:54:35.664259 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:35.666844 systemd-logind[1597]: New session 7 of user core. May 13 12:54:35.676202 systemd[1]: Started session-7.scope - Session 7 of User core. 
May 13 12:54:35.735467 sudo[1911]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 12:54:35.735679 sudo[1911]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:35.749295 sudo[1911]: pam_unix(sudo:session): session closed for user root May 13 12:54:35.750107 sshd[1910]: Connection closed by 147.75.109.163 port 43470 May 13 12:54:35.750462 sshd-session[1908]: pam_unix(sshd:session): session closed for user core May 13 12:54:35.759131 systemd[1]: sshd@4-139.178.70.101:22-147.75.109.163:43470.service: Deactivated successfully. May 13 12:54:35.760000 systemd[1]: session-7.scope: Deactivated successfully. May 13 12:54:35.760770 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit. May 13 12:54:35.762026 systemd[1]: Started sshd@5-139.178.70.101:22-147.75.109.163:43480.service - OpenSSH per-connection server daemon (147.75.109.163:43480). May 13 12:54:35.763527 systemd-logind[1597]: Removed session 7. May 13 12:54:35.802553 sshd[1917]: Accepted publickey for core from 147.75.109.163 port 43480 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:54:35.803394 sshd-session[1917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:35.806700 systemd-logind[1597]: New session 8 of user core. May 13 12:54:35.817146 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 12:54:35.867208 sudo[1921]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 12:54:35.867399 sudo[1921]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:35.870533 sudo[1921]: pam_unix(sudo:session): session closed for user root May 13 12:54:35.874333 sudo[1920]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 12:54:35.874526 sudo[1920]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:35.881959 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 12:54:35.908936 augenrules[1943]: No rules May 13 12:54:35.909697 systemd[1]: audit-rules.service: Deactivated successfully. May 13 12:54:35.909855 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 12:54:35.910945 sudo[1920]: pam_unix(sudo:session): session closed for user root May 13 12:54:35.912472 sshd[1919]: Connection closed by 147.75.109.163 port 43480 May 13 12:54:35.912142 sshd-session[1917]: pam_unix(sshd:session): session closed for user core May 13 12:54:35.918280 systemd[1]: sshd@5-139.178.70.101:22-147.75.109.163:43480.service: Deactivated successfully. May 13 12:54:35.919025 systemd[1]: session-8.scope: Deactivated successfully. May 13 12:54:35.919448 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit. May 13 12:54:35.920922 systemd[1]: Started sshd@6-139.178.70.101:22-147.75.109.163:43494.service - OpenSSH per-connection server daemon (147.75.109.163:43494). May 13 12:54:35.921588 systemd-logind[1597]: Removed session 8. May 13 12:54:35.961457 sshd[1952]: Accepted publickey for core from 147.75.109.163 port 43494 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:54:35.962138 sshd-session[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:54:35.964478 systemd-logind[1597]: New session 9 of user core. May 13 12:54:35.971126 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 13 12:54:36.020138 sudo[1955]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 12:54:36.020562 sudo[1955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 12:54:36.317182 systemd[1]: Starting docker.service - Docker Application Container Engine... May 13 12:54:36.326244 (dockerd)[1972]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 12:54:36.522665 dockerd[1972]: time="2025-05-13T12:54:36.522633279Z" level=info msg="Starting up" May 13 12:54:36.523244 dockerd[1972]: time="2025-05-13T12:54:36.523228856Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 12:54:36.536618 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1200852143-merged.mount: Deactivated successfully. May 13 12:54:36.553647 dockerd[1972]: time="2025-05-13T12:54:36.553595707Z" level=info msg="Loading containers: start." May 13 12:54:36.561080 kernel: Initializing XFRM netlink socket May 13 12:54:36.678128 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. May 13 12:54:36.700484 systemd-networkd[1542]: docker0: Link UP May 13 12:54:36.701541 dockerd[1972]: time="2025-05-13T12:54:36.701494547Z" level=info msg="Loading containers: done." May 13 12:54:36.708819 dockerd[1972]: time="2025-05-13T12:54:36.708797299Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 12:54:36.708890 dockerd[1972]: time="2025-05-13T12:54:36.708842716Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 13 12:54:36.708908 dockerd[1972]: time="2025-05-13T12:54:36.708889628Z" level=info msg="Initializing buildkit" May 13 12:54:36.718316 dockerd[1972]: time="2025-05-13T12:54:36.718296313Z" level=info msg="Completed buildkit initialization" May 13 12:54:36.722942 dockerd[1972]: time="2025-05-13T12:54:36.722927705Z" level=info msg="Daemon has completed initialization" May 13 12:54:36.722980 dockerd[1972]: time="2025-05-13T12:54:36.722950602Z" level=info msg="API listen on /run/docker.sock" May 13 12:54:36.723115 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 12:55:57.276662 systemd-resolved[1495]: Clock change detected. Flushing caches. May 13 12:55:57.276683 systemd-timesyncd[1498]: Contacted time server 162.159.200.123:123 (2.flatcar.pool.ntp.org). May 13 12:55:57.276714 systemd-timesyncd[1498]: Initial clock synchronization to Tue 2025-05-13 12:55:57.276527 UTC. May 13 12:55:58.024317 containerd[1620]: time="2025-05-13T12:55:58.024243410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 12:55:58.528584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923927587.mount: Deactivated successfully. 
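Once dockerd logs "API listen on /run/docker.sock", the daemon is reachable over that UNIX socket. As an example probe only (not something run on this host), the daemon's real /_ping endpoint answers "OK", and the CLI works against the same socket:

    curl --silent --unix-socket /run/docker.sock http://localhost/_ping ; echo
    # expected output: OK
    docker version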
May 13 12:55:59.463899 containerd[1620]: time="2025-05-13T12:55:59.463873907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:59.464502 containerd[1620]: time="2025-05-13T12:55:59.464490748Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 13 12:55:59.465155 containerd[1620]: time="2025-05-13T12:55:59.464675985Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:59.466148 containerd[1620]: time="2025-05-13T12:55:59.466115891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:55:59.466962 containerd[1620]: time="2025-05-13T12:55:59.466744330Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 1.44247497s" May 13 12:55:59.466962 containerd[1620]: time="2025-05-13T12:55:59.466775504Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 12:55:59.467265 containerd[1620]: time="2025-05-13T12:55:59.467183171Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 12:55:59.705149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 12:55:59.706211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:55:59.860873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:55:59.863870 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:55:59.899503 kubelet[2236]: E0513 12:55:59.899464 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:55:59.900759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:55:59.900842 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:55:59.901192 systemd[1]: kubelet.service: Consumed 94ms CPU time, 104M memory peak. 
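The recurring kubelet crash loop has the single cause spelled out in its error: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by `kubeadm init` or `kubeadm join`; until then the unit keeps restarting on its scheduled retries. For orientation only, the file the kubelet is looking for is a KubeletConfiguration document along these lines (field values are illustrative, not taken from this system):

    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # matches SystemdCgroup=true in the containerd CRI config dumped earlier
    staticPodPath: /etc/kubernetes/manifests
    EOF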
May 13 12:56:01.542153 containerd[1620]: time="2025-05-13T12:56:01.542095366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:01.551453 containerd[1620]: time="2025-05-13T12:56:01.551417593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 13 12:56:01.562938 containerd[1620]: time="2025-05-13T12:56:01.562901427Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:01.572261 containerd[1620]: time="2025-05-13T12:56:01.572207743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:01.572952 containerd[1620]: time="2025-05-13T12:56:01.572862399Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.105647268s" May 13 12:56:01.572952 containerd[1620]: time="2025-05-13T12:56:01.572887825Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 12:56:01.573613 containerd[1620]: time="2025-05-13T12:56:01.573389490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 12:56:02.578360 containerd[1620]: time="2025-05-13T12:56:02.577794718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:02.578360 containerd[1620]: time="2025-05-13T12:56:02.578196807Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 13 12:56:02.578360 containerd[1620]: time="2025-05-13T12:56:02.578334546Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:02.579749 containerd[1620]: time="2025-05-13T12:56:02.579733496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:02.580379 containerd[1620]: time="2025-05-13T12:56:02.580362724Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.006954785s" May 13 12:56:02.580411 containerd[1620]: time="2025-05-13T12:56:02.580379587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 12:56:02.580935 
containerd[1620]: time="2025-05-13T12:56:02.580925871Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 12:56:03.633280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157349536.mount: Deactivated successfully. May 13 12:56:04.081162 containerd[1620]: time="2025-05-13T12:56:04.081060957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:04.090509 containerd[1620]: time="2025-05-13T12:56:04.090466934Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 13 12:56:04.100051 containerd[1620]: time="2025-05-13T12:56:04.100002074Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:04.111347 containerd[1620]: time="2025-05-13T12:56:04.111308941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:04.111699 containerd[1620]: time="2025-05-13T12:56:04.111596665Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.530619002s" May 13 12:56:04.111699 containerd[1620]: time="2025-05-13T12:56:04.111617019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 12:56:04.111956 containerd[1620]: time="2025-05-13T12:56:04.111945852Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 12:56:04.757211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659991202.mount: Deactivated successfully. 
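Each containerd "Pulled image ... in ..." entry above carries both the image size and the wall-clock pull time, so approximate pull throughput can be read straight out of the log. A small parsing sketch in Python, matched against the message format shown above (the sample line is shortened from the kube-scheduler pull):

    import re

    # Matches the containerd "Pulled image ..." message format seen above; the \\? makes the
    # pattern tolerate the escaped quotes that the journal prints inside msg="...".
    PULL_RE = re.compile(
        r'Pulled image \\?"(?P<image>[^"\\]+)\\?".*?'
        r'size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def pull_stats(line):
        """Return (image, size_bytes, seconds, MiB_per_s) for a 'Pulled image' line, else None."""
        m = PULL_RE.search(line)
        if not m:
            return None
        size = int(m.group("size"))
        secs = float(m.group("dur")) / (1000.0 if m.group("unit") == "ms" else 1.0)
        return m.group("image"), size, secs, size / secs / (1024 * 1024)

    # Example, shortened from the kube-scheduler entry above:
    sample = ('Pulled image \\"registry.k8s.io/kube-scheduler:v1.32.4\\" with image id \\"sha256:70a2...\\", '
              'repo tag ..., repo digest ..., size \\"20658329\\" in 1.006954785s')
    print(pull_stats(sample))  # ('registry.k8s.io/kube-scheduler:v1.32.4', 20658329, 1.006954785, ~19.6)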
May 13 12:56:05.661943 containerd[1620]: time="2025-05-13T12:56:05.661420397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:05.676565 containerd[1620]: time="2025-05-13T12:56:05.676530236Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 13 12:56:05.690181 containerd[1620]: time="2025-05-13T12:56:05.690146447Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:05.697098 containerd[1620]: time="2025-05-13T12:56:05.697065213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:05.697784 containerd[1620]: time="2025-05-13T12:56:05.697765438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.58577067s" May 13 12:56:05.697848 containerd[1620]: time="2025-05-13T12:56:05.697837474Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 12:56:05.698200 containerd[1620]: time="2025-05-13T12:56:05.698158052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 12:56:06.367464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023940300.mount: Deactivated successfully. 
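The var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units above are transient systemd mount units whose names encode the mount path: '/' separators become '-' and a literal '-' is escaped as '\x2d', the same convention systemd-escape --path uses. A short sketch that recovers the path from such a unit name (assuming only \xNN escapes, which is all these names contain):

    import re

    def unescape_mount_unit(unit):
        """Recover the mount point path from a systemd mount unit name (sketch, \\xNN escapes only)."""
        name = unit.removesuffix(".mount")          # Python 3.9+
        parts = name.split("-")                     # unescaped '-' separates path components
        decode = lambda s: re.sub(r"\\x([0-9a-fA-F]{2})",
                                  lambda m: chr(int(m.group(1), 16)), s)
        return "/" + "/".join(decode(p) for p in parts)

    print(unescape_mount_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount1157349536.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount1157349536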
May 13 12:56:06.395598 containerd[1620]: time="2025-05-13T12:56:06.395487702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:56:06.397749 containerd[1620]: time="2025-05-13T12:56:06.397723120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 13 12:56:06.401456 containerd[1620]: time="2025-05-13T12:56:06.401428935Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:56:06.403535 containerd[1620]: time="2025-05-13T12:56:06.403510193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 12:56:06.403929 containerd[1620]: time="2025-05-13T12:56:06.403824540Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 705.650242ms" May 13 12:56:06.403929 containerd[1620]: time="2025-05-13T12:56:06.403845320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 12:56:06.404224 containerd[1620]: time="2025-05-13T12:56:06.404207700Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 12:56:06.986965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613507265.mount: Deactivated successfully. 
May 13 12:56:09.536938 containerd[1620]: time="2025-05-13T12:56:09.536195068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:09.537445 containerd[1620]: time="2025-05-13T12:56:09.537430033Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 13 12:56:09.538314 containerd[1620]: time="2025-05-13T12:56:09.538302315Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:09.540304 containerd[1620]: time="2025-05-13T12:56:09.540291416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:09.541117 containerd[1620]: time="2025-05-13T12:56:09.541099103Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.136874107s" May 13 12:56:09.541273 containerd[1620]: time="2025-05-13T12:56:09.541263810Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 12:56:09.955200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 12:56:09.956317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:56:10.480158 update_engine[1601]: I20250513 12:56:10.479908 1601 update_attempter.cc:509] Updating boot flags... May 13 12:56:10.894209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:56:10.900300 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:56:10.967903 kubelet[2418]: E0513 12:56:10.967874 2418 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:56:10.975310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:56:10.975404 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:56:10.975601 systemd[1]: kubelet.service: Consumed 111ms CPU time, 103.8M memory peak. May 13 12:56:12.396871 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:56:12.396973 systemd[1]: kubelet.service: Consumed 111ms CPU time, 103.8M memory peak. May 13 12:56:12.398535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:56:12.418166 systemd[1]: Reload requested from client PID 2432 ('systemctl') (unit session-9.scope)... May 13 12:56:12.418186 systemd[1]: Reloading... May 13 12:56:12.497142 zram_generator::config[2485]: No configuration found. 
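Taken together, the "bytes read" figures in the pull entries above give a rough total for what this node downloads before its control-plane static pods can start. A quick tally (the numbers are copied from the log lines above):

    # "bytes read" values reported by containerd for each image pull above
    pulls = {
        "kube-apiserver:v1.32.4":          28_682_879,
        "kube-controller-manager:v1.32.4": 24_779_589,
        "kube-scheduler:v1.32.4":          19_169_938,
        "kube-proxy:v1.32.4":              30_917_856,
        "coredns:v1.11.3":                 18_565_241,
        "pause:3.10":                          321_138,
        "etcd:3.5.16-0":                   57_551_360,
    }

    total = sum(pulls.values())
    print(f"total: {total} bytes (~{total / 1024 / 1024:.0f} MiB)")  # ~172 MiB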
May 13 12:56:12.545530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:56:12.554007 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:56:12.622037 systemd[1]: Reloading finished in 203 ms. May 13 12:56:12.660256 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 12:56:12.660319 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 12:56:12.660509 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:56:12.661742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:56:13.040108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:56:13.047313 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:56:13.142980 kubelet[2543]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:56:13.143786 kubelet[2543]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 12:56:13.143786 kubelet[2543]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:56:13.143786 kubelet[2543]: I0513 12:56:13.143320 2543 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:56:13.560969 kubelet[2543]: I0513 12:56:13.560938 2543 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 12:56:13.561764 kubelet[2543]: I0513 12:56:13.561063 2543 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:56:13.561764 kubelet[2543]: I0513 12:56:13.561240 2543 server.go:954] "Client rotation is on, will bootstrap in background" May 13 12:56:13.627179 kubelet[2543]: E0513 12:56:13.627148 2543 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:13.631612 kubelet[2543]: I0513 12:56:13.631572 2543 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:56:13.647076 kubelet[2543]: I0513 12:56:13.647040 2543 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:56:13.654174 kubelet[2543]: I0513 12:56:13.654148 2543 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:56:13.657566 kubelet[2543]: I0513 12:56:13.657517 2543 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:56:13.657710 kubelet[2543]: I0513 12:56:13.657580 2543 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:56:13.659247 kubelet[2543]: I0513 12:56:13.659225 2543 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:56:13.659247 kubelet[2543]: I0513 12:56:13.659245 2543 container_manager_linux.go:304] "Creating device plugin manager" May 13 12:56:13.659357 kubelet[2543]: I0513 12:56:13.659345 2543 state_mem.go:36] "Initialized new in-memory state store" May 13 12:56:13.662841 kubelet[2543]: I0513 12:56:13.662821 2543 kubelet.go:446] "Attempting to sync node with API server" May 13 12:56:13.662841 kubelet[2543]: I0513 12:56:13.662840 2543 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:56:13.664011 kubelet[2543]: I0513 12:56:13.663988 2543 kubelet.go:352] "Adding apiserver pod source" May 13 12:56:13.664011 kubelet[2543]: I0513 12:56:13.664004 2543 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:56:13.670069 kubelet[2543]: W0513 12:56:13.669617 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:13.670069 kubelet[2543]: E0513 12:56:13.669669 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:13.670069 kubelet[2543]: W0513 12:56:13.670008 2543 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:13.670069 kubelet[2543]: E0513 12:56:13.670041 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:13.671498 kubelet[2543]: I0513 12:56:13.671477 2543 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:56:13.678780 kubelet[2543]: I0513 12:56:13.678758 2543 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:56:13.693412 kubelet[2543]: W0513 12:56:13.693383 2543 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 12:56:13.703815 kubelet[2543]: I0513 12:56:13.703784 2543 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 12:56:13.703982 kubelet[2543]: I0513 12:56:13.703972 2543 server.go:1287] "Started kubelet" May 13 12:56:13.728866 kubelet[2543]: I0513 12:56:13.728813 2543 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:56:13.734512 kubelet[2543]: I0513 12:56:13.734121 2543 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:56:13.734512 kubelet[2543]: I0513 12:56:13.734415 2543 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:56:13.739352 kubelet[2543]: E0513 12:56:13.736932 2543 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f177576273e20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:56:13.703937568 +0000 UTC m=+0.654312636,LastTimestamp:2025-05-13 12:56:13.703937568 +0000 UTC m=+0.654312636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:56:13.739544 kubelet[2543]: I0513 12:56:13.739528 2543 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:56:13.740046 kubelet[2543]: I0513 12:56:13.740034 2543 server.go:490] "Adding debug handlers to kubelet server" May 13 12:56:13.740664 kubelet[2543]: I0513 12:56:13.740654 2543 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:56:13.741669 kubelet[2543]: I0513 12:56:13.741384 2543 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 12:56:13.741669 kubelet[2543]: E0513 12:56:13.741605 2543 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not 
found" May 13 12:56:13.743214 kubelet[2543]: I0513 12:56:13.743194 2543 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:56:13.743506 kubelet[2543]: I0513 12:56:13.743492 2543 reconciler.go:26] "Reconciler: start to sync state" May 13 12:56:13.752754 kubelet[2543]: W0513 12:56:13.752489 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:13.752754 kubelet[2543]: E0513 12:56:13.752538 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:13.752754 kubelet[2543]: E0513 12:56:13.752587 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="200ms" May 13 12:56:13.754898 kubelet[2543]: I0513 12:56:13.754596 2543 factory.go:221] Registration of the systemd container factory successfully May 13 12:56:13.754898 kubelet[2543]: I0513 12:56:13.754673 2543 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:56:13.758126 kubelet[2543]: I0513 12:56:13.758106 2543 factory.go:221] Registration of the containerd container factory successfully May 13 12:56:13.763497 kubelet[2543]: I0513 12:56:13.763458 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:56:13.764166 kubelet[2543]: E0513 12:56:13.763759 2543 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:56:13.764739 kubelet[2543]: I0513 12:56:13.764538 2543 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:56:13.764739 kubelet[2543]: I0513 12:56:13.764556 2543 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 12:56:13.764739 kubelet[2543]: I0513 12:56:13.764575 2543 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 12:56:13.764739 kubelet[2543]: I0513 12:56:13.764580 2543 kubelet.go:2388] "Starting kubelet main sync loop" May 13 12:56:13.764739 kubelet[2543]: E0513 12:56:13.764613 2543 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:56:13.768948 kubelet[2543]: W0513 12:56:13.768909 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:13.769064 kubelet[2543]: E0513 12:56:13.769051 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:13.784521 kubelet[2543]: I0513 12:56:13.784494 2543 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 12:56:13.784594 kubelet[2543]: I0513 12:56:13.784511 2543 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 12:56:13.784594 kubelet[2543]: I0513 12:56:13.784584 2543 state_mem.go:36] "Initialized new in-memory state store" May 13 12:56:13.785731 kubelet[2543]: I0513 12:56:13.785713 2543 policy_none.go:49] "None policy: Start" May 13 12:56:13.785731 kubelet[2543]: I0513 12:56:13.785729 2543 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 12:56:13.785811 kubelet[2543]: I0513 12:56:13.785743 2543 state_mem.go:35] "Initializing new in-memory state store" May 13 12:56:13.789569 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 12:56:13.805300 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 12:56:13.808284 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 12:56:13.817921 kubelet[2543]: I0513 12:56:13.817863 2543 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:56:13.817991 kubelet[2543]: I0513 12:56:13.817985 2543 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:56:13.818492 kubelet[2543]: I0513 12:56:13.817999 2543 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:56:13.818492 kubelet[2543]: I0513 12:56:13.818332 2543 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:56:13.820059 kubelet[2543]: E0513 12:56:13.820013 2543 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 12:56:13.820218 kubelet[2543]: E0513 12:56:13.820209 2543 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 12:56:13.873722 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
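The repeated "dial tcp 139.178.70.101:6443: connect: connection refused" errors above are expected at this point in the boot: the kubelet's informers, event recorder, and lease controller are all trying to reach an API server that this same kubelet only starts (as a static pod) a few seconds later. A trivial probe that reproduces the same failure mode, with the host and port taken from the log:

    import socket

    API_HOST, API_PORT = "139.178.70.101", 6443  # endpoint from the reflector errors above

    def apiserver_reachable(host=API_HOST, port=API_PORT, timeout=2.0):
        """Return True if a TCP connection to the API server port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:          # ConnectionRefusedError, timeouts, ...
            print(f"{host}:{port} not reachable: {exc}")
            return False

    if __name__ == "__main__":
        apiserver_reachable()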
May 13 12:56:13.890936 kubelet[2543]: E0513 12:56:13.890889 2543 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:13.893908 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 12:56:13.904351 kubelet[2543]: E0513 12:56:13.904306 2543 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:13.907348 systemd[1]: Created slice kubepods-burstable-poda6b1002e2abd895850643055c69506a4.slice - libcontainer container kubepods-burstable-poda6b1002e2abd895850643055c69506a4.slice. May 13 12:56:13.908905 kubelet[2543]: E0513 12:56:13.908888 2543 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:13.918907 kubelet[2543]: I0513 12:56:13.918885 2543 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:56:13.919156 kubelet[2543]: E0513 12:56:13.919126 2543 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" May 13 12:56:13.944524 kubelet[2543]: I0513 12:56:13.944494 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:13.944714 kubelet[2543]: I0513 12:56:13.944633 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:13.944714 kubelet[2543]: I0513 12:56:13.944660 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:13.944714 kubelet[2543]: I0513 12:56:13.944674 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:13.944714 kubelet[2543]: I0513 12:56:13.944686 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 12:56:13.944714 kubelet[2543]: I0513 12:56:13.944698 2543 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6b1002e2abd895850643055c69506a4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a6b1002e2abd895850643055c69506a4\") " pod="kube-system/kube-apiserver-localhost" May 13 12:56:13.944939 kubelet[2543]: I0513 12:56:13.944744 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6b1002e2abd895850643055c69506a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a6b1002e2abd895850643055c69506a4\") " pod="kube-system/kube-apiserver-localhost" May 13 12:56:13.944939 kubelet[2543]: I0513 12:56:13.944788 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:13.944939 kubelet[2543]: I0513 12:56:13.944803 2543 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6b1002e2abd895850643055c69506a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a6b1002e2abd895850643055c69506a4\") " pod="kube-system/kube-apiserver-localhost" May 13 12:56:13.952887 kubelet[2543]: E0513 12:56:13.952856 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="400ms" May 13 12:56:14.120582 kubelet[2543]: I0513 12:56:14.120263 2543 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:56:14.120582 kubelet[2543]: E0513 12:56:14.120491 2543 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" May 13 12:56:14.192061 containerd[1620]: time="2025-05-13T12:56:14.192025247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 12:56:14.205864 containerd[1620]: time="2025-05-13T12:56:14.205814242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 12:56:14.210454 containerd[1620]: time="2025-05-13T12:56:14.210425425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a6b1002e2abd895850643055c69506a4,Namespace:kube-system,Attempt:0,}" May 13 12:56:14.353936 kubelet[2543]: E0513 12:56:14.353909 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="800ms" May 13 12:56:14.497515 kubelet[2543]: W0513 12:56:14.497426 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:14.497515 kubelet[2543]: E0513 12:56:14.497486 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:14.521823 kubelet[2543]: I0513 12:56:14.521667 2543 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:56:14.521963 kubelet[2543]: E0513 12:56:14.521951 2543 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" May 13 12:56:14.620226 kubelet[2543]: W0513 12:56:14.620176 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:14.620390 kubelet[2543]: E0513 12:56:14.620365 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:14.657956 kubelet[2543]: E0513 12:56:14.657888 2543 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.101:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f177576273e20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:56:13.703937568 +0000 UTC m=+0.654312636,LastTimestamp:2025-05-13 12:56:13.703937568 +0000 UTC m=+0.654312636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 12:56:14.658751 containerd[1620]: time="2025-05-13T12:56:14.658547112Z" level=info msg="connecting to shim d642f8aae52d6304173e68cac42abfd201adb30e83fa7efe7ce4ad85f6e4a375" address="unix:///run/containerd/s/f63f2def1d6bd133eca8b4855edf11e88875f7db8a834ad50c99c7087ffda47e" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:14.659114 containerd[1620]: time="2025-05-13T12:56:14.659089578Z" level=info msg="connecting to shim 515bf3809d1a9b2d74fcdb7662cfb4c12c7ecbaae066301fa19e209567483b71" address="unix:///run/containerd/s/938aadd1de04309b8a9601ae67685669c121c8edf7d6f20e3efe74b752d63eaf" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:14.663927 containerd[1620]: time="2025-05-13T12:56:14.663894632Z" level=info msg="connecting to shim 6d439a95eb501c092c51ec3d7fa30d5b7f4c4f117d1a981db57d7fcbf358ed8d" address="unix:///run/containerd/s/5e7a7c8578f09b4ccd89766872e977d2cdc69c8a14f86b88f78272959b60feb3" namespace=k8s.io 
protocol=ttrpc version=3 May 13 12:56:14.761323 systemd[1]: Started cri-containerd-515bf3809d1a9b2d74fcdb7662cfb4c12c7ecbaae066301fa19e209567483b71.scope - libcontainer container 515bf3809d1a9b2d74fcdb7662cfb4c12c7ecbaae066301fa19e209567483b71. May 13 12:56:14.763258 systemd[1]: Started cri-containerd-6d439a95eb501c092c51ec3d7fa30d5b7f4c4f117d1a981db57d7fcbf358ed8d.scope - libcontainer container 6d439a95eb501c092c51ec3d7fa30d5b7f4c4f117d1a981db57d7fcbf358ed8d. May 13 12:56:14.764754 systemd[1]: Started cri-containerd-d642f8aae52d6304173e68cac42abfd201adb30e83fa7efe7ce4ad85f6e4a375.scope - libcontainer container d642f8aae52d6304173e68cac42abfd201adb30e83fa7efe7ce4ad85f6e4a375. May 13 12:56:14.767147 kubelet[2543]: W0513 12:56:14.766720 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:14.767147 kubelet[2543]: E0513 12:56:14.766744 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:14.868604 containerd[1620]: time="2025-05-13T12:56:14.868577407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"515bf3809d1a9b2d74fcdb7662cfb4c12c7ecbaae066301fa19e209567483b71\"" May 13 12:56:14.871905 kubelet[2543]: W0513 12:56:14.871844 2543 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.101:6443: connect: connection refused May 13 12:56:14.871905 kubelet[2543]: E0513 12:56:14.871884 2543 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:14.876593 containerd[1620]: time="2025-05-13T12:56:14.876164634Z" level=info msg="CreateContainer within sandbox \"515bf3809d1a9b2d74fcdb7662cfb4c12c7ecbaae066301fa19e209567483b71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:56:14.878590 containerd[1620]: time="2025-05-13T12:56:14.878574124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d439a95eb501c092c51ec3d7fa30d5b7f4c4f117d1a981db57d7fcbf358ed8d\"" May 13 12:56:14.879846 containerd[1620]: time="2025-05-13T12:56:14.879832975Z" level=info msg="CreateContainer within sandbox \"6d439a95eb501c092c51ec3d7fa30d5b7f4c4f117d1a981db57d7fcbf358ed8d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:56:14.887963 containerd[1620]: time="2025-05-13T12:56:14.887930432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a6b1002e2abd895850643055c69506a4,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"d642f8aae52d6304173e68cac42abfd201adb30e83fa7efe7ce4ad85f6e4a375\"" May 13 12:56:14.889208 containerd[1620]: time="2025-05-13T12:56:14.889187691Z" level=info msg="CreateContainer within sandbox \"d642f8aae52d6304173e68cac42abfd201adb30e83fa7efe7ce4ad85f6e4a375\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:56:14.907087 containerd[1620]: time="2025-05-13T12:56:14.906833906Z" level=info msg="Container 210d7af8104eb838cce82572e01e405f37fd04d3b709078187154c0dc1f6c07e: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:14.909737 containerd[1620]: time="2025-05-13T12:56:14.908741468Z" level=info msg="Container 22b6dd07637445b967e0ef651485c7ae3559fbc72e80c9cf09e77ade9f9f044c: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:14.914842 containerd[1620]: time="2025-05-13T12:56:14.914808152Z" level=info msg="CreateContainer within sandbox \"515bf3809d1a9b2d74fcdb7662cfb4c12c7ecbaae066301fa19e209567483b71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"210d7af8104eb838cce82572e01e405f37fd04d3b709078187154c0dc1f6c07e\"" May 13 12:56:14.916147 containerd[1620]: time="2025-05-13T12:56:14.916058584Z" level=info msg="Container 10c7dd37fd6664088c9a465ebec02d38189d57c705db3dae6887a6e21102364a: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:14.916689 containerd[1620]: time="2025-05-13T12:56:14.916674433Z" level=info msg="StartContainer for \"210d7af8104eb838cce82572e01e405f37fd04d3b709078187154c0dc1f6c07e\"" May 13 12:56:14.916775 containerd[1620]: time="2025-05-13T12:56:14.916757624Z" level=info msg="CreateContainer within sandbox \"6d439a95eb501c092c51ec3d7fa30d5b7f4c4f117d1a981db57d7fcbf358ed8d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22b6dd07637445b967e0ef651485c7ae3559fbc72e80c9cf09e77ade9f9f044c\"" May 13 12:56:14.918588 containerd[1620]: time="2025-05-13T12:56:14.918567154Z" level=info msg="connecting to shim 210d7af8104eb838cce82572e01e405f37fd04d3b709078187154c0dc1f6c07e" address="unix:///run/containerd/s/938aadd1de04309b8a9601ae67685669c121c8edf7d6f20e3efe74b752d63eaf" protocol=ttrpc version=3 May 13 12:56:14.920068 containerd[1620]: time="2025-05-13T12:56:14.919363753Z" level=info msg="StartContainer for \"22b6dd07637445b967e0ef651485c7ae3559fbc72e80c9cf09e77ade9f9f044c\"" May 13 12:56:14.920068 containerd[1620]: time="2025-05-13T12:56:14.920012120Z" level=info msg="connecting to shim 22b6dd07637445b967e0ef651485c7ae3559fbc72e80c9cf09e77ade9f9f044c" address="unix:///run/containerd/s/5e7a7c8578f09b4ccd89766872e977d2cdc69c8a14f86b88f78272959b60feb3" protocol=ttrpc version=3 May 13 12:56:14.921388 containerd[1620]: time="2025-05-13T12:56:14.921370347Z" level=info msg="CreateContainer within sandbox \"d642f8aae52d6304173e68cac42abfd201adb30e83fa7efe7ce4ad85f6e4a375\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"10c7dd37fd6664088c9a465ebec02d38189d57c705db3dae6887a6e21102364a\"" May 13 12:56:14.921754 containerd[1620]: time="2025-05-13T12:56:14.921729961Z" level=info msg="StartContainer for \"10c7dd37fd6664088c9a465ebec02d38189d57c705db3dae6887a6e21102364a\"" May 13 12:56:14.922772 containerd[1620]: time="2025-05-13T12:56:14.922754156Z" level=info msg="connecting to shim 10c7dd37fd6664088c9a465ebec02d38189d57c705db3dae6887a6e21102364a" address="unix:///run/containerd/s/f63f2def1d6bd133eca8b4855edf11e88875f7db8a834ad50c99c7087ffda47e" protocol=ttrpc version=3 May 13 12:56:14.938404 systemd[1]: Started 
cri-containerd-210d7af8104eb838cce82572e01e405f37fd04d3b709078187154c0dc1f6c07e.scope - libcontainer container 210d7af8104eb838cce82572e01e405f37fd04d3b709078187154c0dc1f6c07e. May 13 12:56:14.945506 systemd[1]: Started cri-containerd-10c7dd37fd6664088c9a465ebec02d38189d57c705db3dae6887a6e21102364a.scope - libcontainer container 10c7dd37fd6664088c9a465ebec02d38189d57c705db3dae6887a6e21102364a. May 13 12:56:14.947594 systemd[1]: Started cri-containerd-22b6dd07637445b967e0ef651485c7ae3559fbc72e80c9cf09e77ade9f9f044c.scope - libcontainer container 22b6dd07637445b967e0ef651485c7ae3559fbc72e80c9cf09e77ade9f9f044c. May 13 12:56:15.023564 containerd[1620]: time="2025-05-13T12:56:15.023403126Z" level=info msg="StartContainer for \"22b6dd07637445b967e0ef651485c7ae3559fbc72e80c9cf09e77ade9f9f044c\" returns successfully" May 13 12:56:15.024984 containerd[1620]: time="2025-05-13T12:56:15.024087922Z" level=info msg="StartContainer for \"210d7af8104eb838cce82572e01e405f37fd04d3b709078187154c0dc1f6c07e\" returns successfully" May 13 12:56:15.024984 containerd[1620]: time="2025-05-13T12:56:15.024550167Z" level=info msg="StartContainer for \"10c7dd37fd6664088c9a465ebec02d38189d57c705db3dae6887a6e21102364a\" returns successfully" May 13 12:56:15.155038 kubelet[2543]: E0513 12:56:15.155011 2543 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.101:6443: connect: connection refused" interval="1.6s" May 13 12:56:15.323768 kubelet[2543]: I0513 12:56:15.323678 2543 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:56:15.324181 kubelet[2543]: E0513 12:56:15.324160 2543 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.101:6443/api/v1/nodes\": dial tcp 139.178.70.101:6443: connect: connection refused" node="localhost" May 13 12:56:15.760389 kubelet[2543]: E0513 12:56:15.760289 2543 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.101:6443: connect: connection refused" logger="UnhandledError" May 13 12:56:15.790359 kubelet[2543]: E0513 12:56:15.790342 2543 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:15.790785 kubelet[2543]: E0513 12:56:15.790736 2543 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:15.791728 kubelet[2543]: E0513 12:56:15.791717 2543 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:16.784267 kubelet[2543]: E0513 12:56:16.784240 2543 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 12:56:16.793195 kubelet[2543]: E0513 12:56:16.793175 2543 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:16.793407 kubelet[2543]: E0513 12:56:16.793395 2543 
kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 12:56:16.925931 kubelet[2543]: I0513 12:56:16.925796 2543 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:56:16.940878 kubelet[2543]: I0513 12:56:16.940848 2543 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 12:56:16.941541 kubelet[2543]: E0513 12:56:16.941510 2543 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 12:56:16.944186 kubelet[2543]: E0513 12:56:16.944165 2543 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:56:17.044386 kubelet[2543]: E0513 12:56:17.044319 2543 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:56:17.144519 kubelet[2543]: E0513 12:56:17.144488 2543 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:56:17.245324 kubelet[2543]: E0513 12:56:17.245293 2543 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:56:17.346082 kubelet[2543]: E0513 12:56:17.345995 2543 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:56:17.446619 kubelet[2543]: E0513 12:56:17.446578 2543 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:56:17.542966 kubelet[2543]: I0513 12:56:17.542761 2543 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 12:56:17.546705 kubelet[2543]: E0513 12:56:17.546677 2543 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 12:56:17.546705 kubelet[2543]: I0513 12:56:17.546702 2543 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:56:17.547843 kubelet[2543]: E0513 12:56:17.547823 2543 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 12:56:17.547843 kubelet[2543]: I0513 12:56:17.547841 2543 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:56:17.548715 kubelet[2543]: E0513 12:56:17.548696 2543 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 12:56:17.672279 kubelet[2543]: I0513 12:56:17.672186 2543 apiserver.go:52] "Watching apiserver" May 13 12:56:17.743568 kubelet[2543]: I0513 12:56:17.743527 2543 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:56:18.244682 kubelet[2543]: I0513 12:56:18.244575 2543 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:56:18.560719 kubelet[2543]: I0513 12:56:18.560603 2543 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" May 13 12:56:18.772885 systemd[1]: Reload requested from client PID 2806 ('systemctl') (unit session-9.scope)... May 13 12:56:18.772897 systemd[1]: Reloading... May 13 12:56:18.833169 zram_generator::config[2853]: No configuration found. May 13 12:56:18.902947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:56:18.913161 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 13 12:56:18.991511 systemd[1]: Reloading finished in 218 ms. May 13 12:56:19.025126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:56:19.039323 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:56:19.039468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:56:19.039500 systemd[1]: kubelet.service: Consumed 709ms CPU time, 124.6M memory peak. May 13 12:56:19.041007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:56:19.450834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:56:19.457563 (kubelet)[2917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:56:19.506149 kubelet[2917]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:56:19.506149 kubelet[2917]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 12:56:19.506149 kubelet[2917]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:56:19.506391 kubelet[2917]: I0513 12:56:19.506188 2917 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:56:19.511059 kubelet[2917]: I0513 12:56:19.511033 2917 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 12:56:19.511059 kubelet[2917]: I0513 12:56:19.511050 2917 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:56:19.511313 kubelet[2917]: I0513 12:56:19.511298 2917 server.go:954] "Client rotation is on, will bootstrap in background" May 13 12:56:19.518787 kubelet[2917]: I0513 12:56:19.518676 2917 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 12:56:19.520477 kubelet[2917]: I0513 12:56:19.520395 2917 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:56:19.531711 kubelet[2917]: I0513 12:56:19.531693 2917 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:56:19.534178 kubelet[2917]: I0513 12:56:19.534162 2917 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:56:19.534475 kubelet[2917]: I0513 12:56:19.534401 2917 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:56:19.534561 kubelet[2917]: I0513 12:56:19.534423 2917 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:56:19.534625 kubelet[2917]: I0513 12:56:19.534563 2917 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:56:19.534625 kubelet[2917]: I0513 12:56:19.534570 2917 container_manager_linux.go:304] "Creating device plugin manager" May 13 12:56:19.534625 kubelet[2917]: I0513 12:56:19.534593 2917 state_mem.go:36] "Initialized new in-memory state store" May 13 12:56:19.545088 kubelet[2917]: I0513 12:56:19.544328 2917 kubelet.go:446] "Attempting to sync node with API server" May 13 12:56:19.545088 kubelet[2917]: I0513 12:56:19.544378 2917 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:56:19.545088 kubelet[2917]: I0513 12:56:19.544418 2917 kubelet.go:352] "Adding apiserver pod source" May 13 12:56:19.545088 kubelet[2917]: I0513 12:56:19.544432 2917 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:56:19.548834 kubelet[2917]: I0513 12:56:19.548685 2917 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:56:19.548991 kubelet[2917]: I0513 12:56:19.548972 2917 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:56:19.552224 kubelet[2917]: I0513 12:56:19.551708 2917 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 12:56:19.552224 kubelet[2917]: I0513 12:56:19.551726 2917 server.go:1287] "Started kubelet" May 13 12:56:19.559145 kubelet[2917]: I0513 12:56:19.559037 2917 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:56:19.560177 kubelet[2917]: I0513 12:56:19.560155 2917 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 May 13 12:56:19.562640 kubelet[2917]: I0513 12:56:19.561106 2917 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:56:19.562681 kubelet[2917]: I0513 12:56:19.562663 2917 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 12:56:19.562855 kubelet[2917]: E0513 12:56:19.562836 2917 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:56:19.568239 kubelet[2917]: I0513 12:56:19.568220 2917 server.go:490] "Adding debug handlers to kubelet server" May 13 12:56:19.569031 kubelet[2917]: I0513 12:56:19.569023 2917 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:56:19.569610 kubelet[2917]: I0513 12:56:19.569603 2917 reconciler.go:26] "Reconciler: start to sync state" May 13 12:56:19.569832 kubelet[2917]: I0513 12:56:19.569808 2917 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:56:19.569954 kubelet[2917]: I0513 12:56:19.569947 2917 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:56:19.570823 kubelet[2917]: I0513 12:56:19.570816 2917 factory.go:221] Registration of the systemd container factory successfully May 13 12:56:19.570926 kubelet[2917]: I0513 12:56:19.570917 2917 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:56:19.572177 kubelet[2917]: I0513 12:56:19.572166 2917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:56:19.572869 kubelet[2917]: I0513 12:56:19.572857 2917 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:56:19.572903 kubelet[2917]: I0513 12:56:19.572878 2917 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 12:56:19.572903 kubelet[2917]: I0513 12:56:19.572890 2917 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 12:56:19.572939 kubelet[2917]: I0513 12:56:19.572893 2917 kubelet.go:2388] "Starting kubelet main sync loop" May 13 12:56:19.573980 kubelet[2917]: E0513 12:56:19.573955 2917 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:56:19.578603 kubelet[2917]: I0513 12:56:19.578588 2917 factory.go:221] Registration of the containerd container factory successfully May 13 12:56:19.579476 kubelet[2917]: E0513 12:56:19.579462 2917 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:56:19.612617 kubelet[2917]: I0513 12:56:19.612600 2917 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 12:56:19.612617 kubelet[2917]: I0513 12:56:19.612611 2917 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 12:56:19.612617 kubelet[2917]: I0513 12:56:19.612623 2917 state_mem.go:36] "Initialized new in-memory state store" May 13 12:56:19.612737 kubelet[2917]: I0513 12:56:19.612733 2917 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 12:56:19.612771 kubelet[2917]: I0513 12:56:19.612740 2917 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 12:56:19.612771 kubelet[2917]: I0513 12:56:19.612769 2917 policy_none.go:49] "None policy: Start" May 13 12:56:19.612814 kubelet[2917]: I0513 12:56:19.612776 2917 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 12:56:19.612814 kubelet[2917]: I0513 12:56:19.612782 2917 state_mem.go:35] "Initializing new in-memory state store" May 13 12:56:19.612866 kubelet[2917]: I0513 12:56:19.612854 2917 state_mem.go:75] "Updated machine memory state" May 13 12:56:19.615279 kubelet[2917]: I0513 12:56:19.615266 2917 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:56:19.616212 kubelet[2917]: I0513 12:56:19.616003 2917 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:56:19.616331 kubelet[2917]: I0513 12:56:19.616313 2917 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:56:19.617229 kubelet[2917]: I0513 12:56:19.617216 2917 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:56:19.617678 kubelet[2917]: E0513 12:56:19.617664 2917 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 12:56:19.674932 kubelet[2917]: I0513 12:56:19.674905 2917 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:56:19.676201 kubelet[2917]: I0513 12:56:19.676185 2917 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 12:56:19.676392 kubelet[2917]: I0513 12:56:19.676384 2917 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 12:56:19.679297 kubelet[2917]: E0513 12:56:19.679253 2917 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 12:56:19.679911 kubelet[2917]: E0513 12:56:19.679875 2917 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 12:56:19.718039 kubelet[2917]: I0513 12:56:19.718018 2917 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 12:56:19.721947 kubelet[2917]: I0513 12:56:19.721916 2917 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 12:56:19.722036 kubelet[2917]: I0513 12:56:19.721965 2917 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 12:56:19.767523 sudo[2951]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 12:56:19.767689 sudo[2951]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 12:56:19.771248 kubelet[2917]: I0513 12:56:19.771217 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:19.771341 kubelet[2917]: I0513 12:56:19.771235 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:19.771401 kubelet[2917]: I0513 12:56:19.771379 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:19.771478 kubelet[2917]: I0513 12:56:19.771441 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 12:56:19.771478 kubelet[2917]: I0513 12:56:19.771453 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6b1002e2abd895850643055c69506a4-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"a6b1002e2abd895850643055c69506a4\") " pod="kube-system/kube-apiserver-localhost" May 13 12:56:19.771478 kubelet[2917]: I0513 12:56:19.771463 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:19.771627 kubelet[2917]: I0513 12:56:19.771563 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:56:19.771627 kubelet[2917]: I0513 12:56:19.771578 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6b1002e2abd895850643055c69506a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a6b1002e2abd895850643055c69506a4\") " pod="kube-system/kube-apiserver-localhost" May 13 12:56:19.771627 kubelet[2917]: I0513 12:56:19.771593 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6b1002e2abd895850643055c69506a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a6b1002e2abd895850643055c69506a4\") " pod="kube-system/kube-apiserver-localhost" May 13 12:56:20.147469 sudo[2951]: pam_unix(sudo:session): session closed for user root May 13 12:56:20.556683 kubelet[2917]: I0513 12:56:20.556660 2917 apiserver.go:52] "Watching apiserver" May 13 12:56:20.570269 kubelet[2917]: I0513 12:56:20.570244 2917 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:56:20.599708 kubelet[2917]: I0513 12:56:20.599690 2917 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 12:56:20.602579 kubelet[2917]: E0513 12:56:20.602474 2917 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 12:56:20.615524 kubelet[2917]: I0513 12:56:20.615486 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6154767209999998 podStartE2EDuration="1.615476721s" podCreationTimestamp="2025-05-13 12:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:20.615356632 +0000 UTC m=+1.153853700" watchObservedRunningTime="2025-05-13 12:56:20.615476721 +0000 UTC m=+1.153973790" May 13 12:56:20.615721 kubelet[2917]: I0513 12:56:20.615541 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.615537384 podStartE2EDuration="2.615537384s" podCreationTimestamp="2025-05-13 12:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:20.6118532 +0000 UTC m=+1.150350274" watchObservedRunningTime="2025-05-13 12:56:20.615537384 +0000 UTC 
m=+1.154034457" May 13 12:56:20.619962 kubelet[2917]: I0513 12:56:20.619871 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.619860483 podStartE2EDuration="2.619860483s" podCreationTimestamp="2025-05-13 12:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:20.619499151 +0000 UTC m=+1.157996225" watchObservedRunningTime="2025-05-13 12:56:20.619860483 +0000 UTC m=+1.158357557" May 13 12:56:21.271122 sudo[1955]: pam_unix(sudo:session): session closed for user root May 13 12:56:21.272154 sshd[1954]: Connection closed by 147.75.109.163 port 43494 May 13 12:56:21.272710 sshd-session[1952]: pam_unix(sshd:session): session closed for user core May 13 12:56:21.275105 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. May 13 12:56:21.275123 systemd[1]: sshd@6-139.178.70.101:22-147.75.109.163:43494.service: Deactivated successfully. May 13 12:56:21.276568 systemd[1]: session-9.scope: Deactivated successfully. May 13 12:56:21.276792 systemd[1]: session-9.scope: Consumed 3.319s CPU time, 207.4M memory peak. May 13 12:56:21.278621 systemd-logind[1597]: Removed session 9. May 13 12:56:25.030185 kubelet[2917]: I0513 12:56:25.030054 2917 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 12:56:25.031235 containerd[1620]: time="2025-05-13T12:56:25.031212529Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 12:56:25.032227 kubelet[2917]: I0513 12:56:25.031516 2917 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 12:56:25.389555 systemd[1]: Created slice kubepods-besteffort-podbdd4070b_f960_4642_8772_b9c3275c4330.slice - libcontainer container kubepods-besteffort-podbdd4070b_f960_4642_8772_b9c3275c4330.slice. May 13 12:56:25.399182 systemd[1]: Created slice kubepods-burstable-pod6f72d658_0891_4033_80cc_2f487967107b.slice - libcontainer container kubepods-burstable-pod6f72d658_0891_4033_80cc_2f487967107b.slice. 
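The pod_startup_latency_tracker entries above can be cross-checked by hand: in each of them the reported podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and because these static pods pulled no images (firstStartedPulling stays at the zero time 0001-01-01), podStartSLOduration is identical to it. A minimal Python sketch, with the kube-controller-manager-localhost timestamps copied from the log (the sketch itself is not part of the journal), reproduces the 2.619860483s figure:

    from datetime import datetime, timezone

    # Timestamps copied from the kube-controller-manager-localhost entry above.
    created = datetime(2025, 5, 13, 12, 56, 18, tzinfo=timezone.utc)            # podCreationTimestamp
    observed = datetime(2025, 5, 13, 12, 56, 20, 619860, tzinfo=timezone.utc)   # watchObservedRunningTime, truncated to microseconds

    e2e = (observed - created).total_seconds()
    print(f"podStartE2EDuration ~= {e2e:.6f}s")  # ~= 2.619860s, matching the logged 2.619860483s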
May 13 12:56:25.407496 kubelet[2917]: I0513 12:56:25.407399 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-run\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407496 kubelet[2917]: I0513 12:56:25.407419 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f72d658-0891-4033-80cc-2f487967107b-cilium-config-path\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407496 kubelet[2917]: I0513 12:56:25.407431 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-kernel\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407496 kubelet[2917]: I0513 12:56:25.407440 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-lib-modules\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407496 kubelet[2917]: I0513 12:56:25.407448 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bdd4070b-f960-4642-8772-b9c3275c4330-xtables-lock\") pod \"kube-proxy-29tz4\" (UID: \"bdd4070b-f960-4642-8772-b9c3275c4330\") " pod="kube-system/kube-proxy-29tz4" May 13 12:56:25.407496 kubelet[2917]: I0513 12:56:25.407467 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cni-path\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407701 kubelet[2917]: I0513 12:56:25.407478 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-xtables-lock\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407701 kubelet[2917]: I0513 12:56:25.407492 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-net\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407701 kubelet[2917]: I0513 12:56:25.407501 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89jbc\" (UniqueName: \"kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-kube-api-access-89jbc\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407701 kubelet[2917]: I0513 12:56:25.407510 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bdd4070b-f960-4642-8772-b9c3275c4330-kube-proxy\") pod \"kube-proxy-29tz4\" (UID: \"bdd4070b-f960-4642-8772-b9c3275c4330\") " pod="kube-system/kube-proxy-29tz4" May 13 12:56:25.407701 kubelet[2917]: I0513 12:56:25.407518 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-etc-cni-netd\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407701 kubelet[2917]: I0513 12:56:25.407526 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-bpf-maps\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407813 kubelet[2917]: I0513 12:56:25.407533 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-hostproc\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407813 kubelet[2917]: I0513 12:56:25.407547 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdd4070b-f960-4642-8772-b9c3275c4330-lib-modules\") pod \"kube-proxy-29tz4\" (UID: \"bdd4070b-f960-4642-8772-b9c3275c4330\") " pod="kube-system/kube-proxy-29tz4" May 13 12:56:25.407813 kubelet[2917]: I0513 12:56:25.407559 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-cgroup\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407813 kubelet[2917]: I0513 12:56:25.407568 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f72d658-0891-4033-80cc-2f487967107b-clustermesh-secrets\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407813 kubelet[2917]: I0513 12:56:25.407577 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-hubble-tls\") pod \"cilium-fn62g\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " pod="kube-system/cilium-fn62g" May 13 12:56:25.407813 kubelet[2917]: I0513 12:56:25.407585 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfnm5\" (UniqueName: \"kubernetes.io/projected/bdd4070b-f960-4642-8772-b9c3275c4330-kube-api-access-nfnm5\") pod \"kube-proxy-29tz4\" (UID: \"bdd4070b-f960-4642-8772-b9c3275c4330\") " pod="kube-system/kube-proxy-29tz4" May 13 12:56:25.697612 containerd[1620]: time="2025-05-13T12:56:25.697538852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29tz4,Uid:bdd4070b-f960-4642-8772-b9c3275c4330,Namespace:kube-system,Attempt:0,}" May 13 12:56:25.703290 containerd[1620]: time="2025-05-13T12:56:25.702919398Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-fn62g,Uid:6f72d658-0891-4033-80cc-2f487967107b,Namespace:kube-system,Attempt:0,}" May 13 12:56:25.737910 containerd[1620]: time="2025-05-13T12:56:25.737846263Z" level=info msg="connecting to shim 6ba8c26f9686074d290ef5dba4d94d1289e958dc09c5aca62be265fbfca216b9" address="unix:///run/containerd/s/c7717465a35013b045776bbe725a76f9503074174f5102b00527cb7213e95afd" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:25.738748 containerd[1620]: time="2025-05-13T12:56:25.738287401Z" level=info msg="connecting to shim e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20" address="unix:///run/containerd/s/2ad8657b38e85907db3f07708c785cf87524d4f016afbdedba794c31a561fcb3" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:25.761350 systemd[1]: Started cri-containerd-6ba8c26f9686074d290ef5dba4d94d1289e958dc09c5aca62be265fbfca216b9.scope - libcontainer container 6ba8c26f9686074d290ef5dba4d94d1289e958dc09c5aca62be265fbfca216b9. May 13 12:56:25.765160 systemd[1]: Started cri-containerd-e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20.scope - libcontainer container e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20. May 13 12:56:25.792848 containerd[1620]: time="2025-05-13T12:56:25.792821313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29tz4,Uid:bdd4070b-f960-4642-8772-b9c3275c4330,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ba8c26f9686074d290ef5dba4d94d1289e958dc09c5aca62be265fbfca216b9\"" May 13 12:56:25.794875 containerd[1620]: time="2025-05-13T12:56:25.794854732Z" level=info msg="CreateContainer within sandbox \"6ba8c26f9686074d290ef5dba4d94d1289e958dc09c5aca62be265fbfca216b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 12:56:25.797618 containerd[1620]: time="2025-05-13T12:56:25.797594563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fn62g,Uid:6f72d658-0891-4033-80cc-2f487967107b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\"" May 13 12:56:25.798789 containerd[1620]: time="2025-05-13T12:56:25.798769537Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 12:56:25.837888 containerd[1620]: time="2025-05-13T12:56:25.837858590Z" level=info msg="Container 7c251efa68a9195316148033e9626b10e2ac5eae6c297cb4cfc22286ebaccaf6: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:25.841279 containerd[1620]: time="2025-05-13T12:56:25.841256516Z" level=info msg="CreateContainer within sandbox \"6ba8c26f9686074d290ef5dba4d94d1289e958dc09c5aca62be265fbfca216b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c251efa68a9195316148033e9626b10e2ac5eae6c297cb4cfc22286ebaccaf6\"" May 13 12:56:25.841848 containerd[1620]: time="2025-05-13T12:56:25.841774420Z" level=info msg="StartContainer for \"7c251efa68a9195316148033e9626b10e2ac5eae6c297cb4cfc22286ebaccaf6\"" May 13 12:56:25.843248 containerd[1620]: time="2025-05-13T12:56:25.843222993Z" level=info msg="connecting to shim 7c251efa68a9195316148033e9626b10e2ac5eae6c297cb4cfc22286ebaccaf6" address="unix:///run/containerd/s/c7717465a35013b045776bbe725a76f9503074174f5102b00527cb7213e95afd" protocol=ttrpc version=3 May 13 12:56:25.861289 systemd[1]: Started cri-containerd-7c251efa68a9195316148033e9626b10e2ac5eae6c297cb4cfc22286ebaccaf6.scope - libcontainer container 
7c251efa68a9195316148033e9626b10e2ac5eae6c297cb4cfc22286ebaccaf6. May 13 12:56:25.902744 containerd[1620]: time="2025-05-13T12:56:25.902720272Z" level=info msg="StartContainer for \"7c251efa68a9195316148033e9626b10e2ac5eae6c297cb4cfc22286ebaccaf6\" returns successfully" May 13 12:56:26.093810 systemd[1]: Created slice kubepods-besteffort-podde9f5769_23ad_4270_8b25_d6e236917638.slice - libcontainer container kubepods-besteffort-podde9f5769_23ad_4270_8b25_d6e236917638.slice. May 13 12:56:26.111182 kubelet[2917]: I0513 12:56:26.111152 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de9f5769-23ad-4270-8b25-d6e236917638-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vlwdh\" (UID: \"de9f5769-23ad-4270-8b25-d6e236917638\") " pod="kube-system/cilium-operator-6c4d7847fc-vlwdh" May 13 12:56:26.111182 kubelet[2917]: I0513 12:56:26.111188 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbz6\" (UniqueName: \"kubernetes.io/projected/de9f5769-23ad-4270-8b25-d6e236917638-kube-api-access-sdbz6\") pod \"cilium-operator-6c4d7847fc-vlwdh\" (UID: \"de9f5769-23ad-4270-8b25-d6e236917638\") " pod="kube-system/cilium-operator-6c4d7847fc-vlwdh" May 13 12:56:26.398534 containerd[1620]: time="2025-05-13T12:56:26.398453511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vlwdh,Uid:de9f5769-23ad-4270-8b25-d6e236917638,Namespace:kube-system,Attempt:0,}" May 13 12:56:26.496801 containerd[1620]: time="2025-05-13T12:56:26.496726349Z" level=info msg="connecting to shim 8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e" address="unix:///run/containerd/s/0251a230fca0435b2e9344e91e7fdac52cdcd1f652c815e1be800e7a045730dd" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:26.540032 systemd[1]: Started cri-containerd-8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e.scope - libcontainer container 8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e. May 13 12:56:26.576606 containerd[1620]: time="2025-05-13T12:56:26.576584407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vlwdh,Uid:de9f5769-23ad-4270-8b25-d6e236917638,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\"" May 13 12:56:26.650506 kubelet[2917]: I0513 12:56:26.650307 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29tz4" podStartSLOduration=1.650291222 podStartE2EDuration="1.650291222s" podCreationTimestamp="2025-05-13 12:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:26.633542163 +0000 UTC m=+7.172039241" watchObservedRunningTime="2025-05-13 12:56:26.650291222 +0000 UTC m=+7.188788301" May 13 12:56:29.980429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435716572.mount: Deactivated successfully. 
May 13 12:56:32.407778 containerd[1620]: time="2025-05-13T12:56:32.407735229Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:32.412163 containerd[1620]: time="2025-05-13T12:56:32.412139872Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 13 12:56:32.426158 containerd[1620]: time="2025-05-13T12:56:32.426113968Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:32.427504 containerd[1620]: time="2025-05-13T12:56:32.427415862Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.628624478s" May 13 12:56:32.427504 containerd[1620]: time="2025-05-13T12:56:32.427438254Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 12:56:32.430370 containerd[1620]: time="2025-05-13T12:56:32.428726681Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 12:56:32.430370 containerd[1620]: time="2025-05-13T12:56:32.429059091Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:56:32.453922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589667973.mount: Deactivated successfully. May 13 12:56:32.457109 containerd[1620]: time="2025-05-13T12:56:32.457080689Z" level=info msg="Container d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:32.457099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198356803.mount: Deactivated successfully. 
May 13 12:56:32.463506 containerd[1620]: time="2025-05-13T12:56:32.463149569Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\"" May 13 12:56:32.463741 containerd[1620]: time="2025-05-13T12:56:32.463686551Z" level=info msg="StartContainer for \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\"" May 13 12:56:32.464434 containerd[1620]: time="2025-05-13T12:56:32.464421314Z" level=info msg="connecting to shim d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081" address="unix:///run/containerd/s/2ad8657b38e85907db3f07708c785cf87524d4f016afbdedba794c31a561fcb3" protocol=ttrpc version=3 May 13 12:56:32.507235 systemd[1]: Started cri-containerd-d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081.scope - libcontainer container d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081. May 13 12:56:32.552756 containerd[1620]: time="2025-05-13T12:56:32.552736603Z" level=info msg="StartContainer for \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" returns successfully" May 13 12:56:32.567269 systemd[1]: cri-containerd-d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081.scope: Deactivated successfully. May 13 12:56:32.629142 containerd[1620]: time="2025-05-13T12:56:32.628972043Z" level=info msg="received exit event container_id:\"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" id:\"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" pid:3335 exited_at:{seconds:1747140992 nanos:568407368}" May 13 12:56:32.636772 containerd[1620]: time="2025-05-13T12:56:32.636744409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" id:\"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" pid:3335 exited_at:{seconds:1747140992 nanos:568407368}" May 13 12:56:33.452786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081-rootfs.mount: Deactivated successfully. 
May 13 12:56:33.646107 containerd[1620]: time="2025-05-13T12:56:33.645708788Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:56:33.686601 containerd[1620]: time="2025-05-13T12:56:33.686221978Z" level=info msg="Container 5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:33.698611 containerd[1620]: time="2025-05-13T12:56:33.698577314Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\"" May 13 12:56:33.699161 containerd[1620]: time="2025-05-13T12:56:33.698982914Z" level=info msg="StartContainer for \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\"" May 13 12:56:33.699579 containerd[1620]: time="2025-05-13T12:56:33.699565027Z" level=info msg="connecting to shim 5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b" address="unix:///run/containerd/s/2ad8657b38e85907db3f07708c785cf87524d4f016afbdedba794c31a561fcb3" protocol=ttrpc version=3 May 13 12:56:33.720283 systemd[1]: Started cri-containerd-5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b.scope - libcontainer container 5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b. May 13 12:56:33.744775 containerd[1620]: time="2025-05-13T12:56:33.744745821Z" level=info msg="StartContainer for \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" returns successfully" May 13 12:56:33.749610 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 12:56:33.749952 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 12:56:33.750445 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 12:56:33.752075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:56:33.753847 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 12:56:33.754399 systemd[1]: cri-containerd-5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b.scope: Deactivated successfully. May 13 12:56:33.758316 containerd[1620]: time="2025-05-13T12:56:33.755712890Z" level=info msg="received exit event container_id:\"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" id:\"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" pid:3381 exited_at:{seconds:1747140993 nanos:755489153}" May 13 12:56:33.758316 containerd[1620]: time="2025-05-13T12:56:33.755767288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" id:\"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" pid:3381 exited_at:{seconds:1747140993 nanos:755489153}" May 13 12:56:33.836352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:56:34.453075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b-rootfs.mount: Deactivated successfully. 
May 13 12:56:34.650085 containerd[1620]: time="2025-05-13T12:56:34.649685694Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 12:56:34.660145 containerd[1620]: time="2025-05-13T12:56:34.659770755Z" level=info msg="Container c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:34.665122 containerd[1620]: time="2025-05-13T12:56:34.664454787Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\"" May 13 12:56:34.665560 containerd[1620]: time="2025-05-13T12:56:34.665541974Z" level=info msg="StartContainer for \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\"" May 13 12:56:34.667093 containerd[1620]: time="2025-05-13T12:56:34.666995259Z" level=info msg="connecting to shim c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc" address="unix:///run/containerd/s/2ad8657b38e85907db3f07708c785cf87524d4f016afbdedba794c31a561fcb3" protocol=ttrpc version=3 May 13 12:56:34.683744 systemd[1]: Started cri-containerd-c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc.scope - libcontainer container c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc. May 13 12:56:34.710395 containerd[1620]: time="2025-05-13T12:56:34.710344954Z" level=info msg="StartContainer for \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" returns successfully" May 13 12:56:34.734927 systemd[1]: cri-containerd-c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc.scope: Deactivated successfully. May 13 12:56:34.735483 systemd[1]: cri-containerd-c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc.scope: Consumed 13ms CPU time, 5.9M memory peak, 1M read from disk. 
May 13 12:56:34.736605 containerd[1620]: time="2025-05-13T12:56:34.736579717Z" level=info msg="received exit event container_id:\"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" id:\"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" pid:3426 exited_at:{seconds:1747140994 nanos:736409945}" May 13 12:56:34.736752 containerd[1620]: time="2025-05-13T12:56:34.736735497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" id:\"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" pid:3426 exited_at:{seconds:1747140994 nanos:736409945}" May 13 12:56:35.389264 containerd[1620]: time="2025-05-13T12:56:35.389233819Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:35.389854 containerd[1620]: time="2025-05-13T12:56:35.389838646Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 13 12:56:35.390062 containerd[1620]: time="2025-05-13T12:56:35.390046570Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:56:35.390977 containerd[1620]: time="2025-05-13T12:56:35.390957638Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.962215928s" May 13 12:56:35.391006 containerd[1620]: time="2025-05-13T12:56:35.390977309Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 12:56:35.392753 containerd[1620]: time="2025-05-13T12:56:35.392699356Z" level=info msg="CreateContainer within sandbox \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 12:56:35.405387 containerd[1620]: time="2025-05-13T12:56:35.405353349Z" level=info msg="Container 4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:35.419161 containerd[1620]: time="2025-05-13T12:56:35.419116062Z" level=info msg="CreateContainer within sandbox \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\"" May 13 12:56:35.420487 containerd[1620]: time="2025-05-13T12:56:35.420455500Z" level=info msg="StartContainer for \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\"" May 13 12:56:35.421160 containerd[1620]: time="2025-05-13T12:56:35.421123340Z" level=info msg="connecting to shim 4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a" 
address="unix:///run/containerd/s/0251a230fca0435b2e9344e91e7fdac52cdcd1f652c815e1be800e7a045730dd" protocol=ttrpc version=3 May 13 12:56:35.444276 systemd[1]: Started cri-containerd-4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a.scope - libcontainer container 4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a. May 13 12:56:35.453648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2309020388.mount: Deactivated successfully. May 13 12:56:35.453706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc-rootfs.mount: Deactivated successfully. May 13 12:56:35.475530 containerd[1620]: time="2025-05-13T12:56:35.475496574Z" level=info msg="StartContainer for \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" returns successfully" May 13 12:56:35.656600 containerd[1620]: time="2025-05-13T12:56:35.656427648Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 12:56:35.674850 kubelet[2917]: I0513 12:56:35.674651 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vlwdh" podStartSLOduration=0.860737077 podStartE2EDuration="9.674638728s" podCreationTimestamp="2025-05-13 12:56:26 +0000 UTC" firstStartedPulling="2025-05-13 12:56:26.577514478 +0000 UTC m=+7.116011540" lastFinishedPulling="2025-05-13 12:56:35.391416125 +0000 UTC m=+15.929913191" observedRunningTime="2025-05-13 12:56:35.661232845 +0000 UTC m=+16.199729918" watchObservedRunningTime="2025-05-13 12:56:35.674638728 +0000 UTC m=+16.213135793" May 13 12:56:35.679007 containerd[1620]: time="2025-05-13T12:56:35.676839090Z" level=info msg="Container e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:35.695842 containerd[1620]: time="2025-05-13T12:56:35.695810356Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\"" May 13 12:56:35.696101 containerd[1620]: time="2025-05-13T12:56:35.696072848Z" level=info msg="StartContainer for \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\"" May 13 12:56:35.696907 containerd[1620]: time="2025-05-13T12:56:35.696888001Z" level=info msg="connecting to shim e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a" address="unix:///run/containerd/s/2ad8657b38e85907db3f07708c785cf87524d4f016afbdedba794c31a561fcb3" protocol=ttrpc version=3 May 13 12:56:35.718247 systemd[1]: Started cri-containerd-e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a.scope - libcontainer container e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a. May 13 12:56:35.746561 containerd[1620]: time="2025-05-13T12:56:35.746541472Z" level=info msg="StartContainer for \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" returns successfully" May 13 12:56:35.748374 systemd[1]: cri-containerd-e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a.scope: Deactivated successfully. 
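For the cilium-operator pod the tracker reports two different durations: podStartE2EDuration (9.674638728s) spans creation to running, while podStartSLOduration (0.860737077s) additionally excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling. A small sketch with values copied from the entry above recovers the SLO figure to within print rounding; for pods whose images needed no pull (kube-proxy-29tz4 and the static control-plane pods), both pull timestamps stay at the zero time and the two durations coincide:

    from datetime import datetime, timezone

    # Values copied from the cilium-operator-6c4d7847fc-vlwdh entry above.
    e2e_seconds = 9.674638728
    first_pull = datetime(2025, 5, 13, 12, 56, 26, 577514, tzinfo=timezone.utc)  # firstStartedPulling, truncated to microseconds
    last_pull = datetime(2025, 5, 13, 12, 56, 35, 391416, tzinfo=timezone.utc)   # lastFinishedPulling, truncated to microseconds

    pull_window = (last_pull - first_pull).total_seconds()
    print(f"podStartSLOduration ~= {e2e_seconds - pull_window:.6f}s")  # ~= 0.860737s vs the logged 0.860737077s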
May 13 12:56:35.749217 containerd[1620]: time="2025-05-13T12:56:35.749172819Z" level=info msg="received exit event container_id:\"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" id:\"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" pid:3516 exited_at:{seconds:1747140995 nanos:749033460}" May 13 12:56:35.749336 containerd[1620]: time="2025-05-13T12:56:35.749317441Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" id:\"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" pid:3516 exited_at:{seconds:1747140995 nanos:749033460}" May 13 12:56:35.767118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a-rootfs.mount: Deactivated successfully. May 13 12:56:36.659154 containerd[1620]: time="2025-05-13T12:56:36.658700226Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 12:56:36.683902 containerd[1620]: time="2025-05-13T12:56:36.683228645Z" level=info msg="Container 6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:36.686535 containerd[1620]: time="2025-05-13T12:56:36.686512123Z" level=info msg="CreateContainer within sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\"" May 13 12:56:36.686919 containerd[1620]: time="2025-05-13T12:56:36.686903268Z" level=info msg="StartContainer for \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\"" May 13 12:56:36.688464 containerd[1620]: time="2025-05-13T12:56:36.688419305Z" level=info msg="connecting to shim 6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d" address="unix:///run/containerd/s/2ad8657b38e85907db3f07708c785cf87524d4f016afbdedba794c31a561fcb3" protocol=ttrpc version=3 May 13 12:56:36.711292 systemd[1]: Started cri-containerd-6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d.scope - libcontainer container 6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d. May 13 12:56:36.732423 containerd[1620]: time="2025-05-13T12:56:36.732390694Z" level=info msg="StartContainer for \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" returns successfully" May 13 12:56:36.902610 containerd[1620]: time="2025-05-13T12:56:36.902572196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" id:\"a92c291984a7c258ac2c354248592c61c82768b3c01a5aa964d86c075741fb70\" pid:3583 exited_at:{seconds:1747140996 nanos:902123627}" May 13 12:56:36.994688 kubelet[2917]: I0513 12:56:36.994674 2917 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 12:56:37.081086 systemd[1]: Created slice kubepods-burstable-poda8e7ae0a_fa7f_4b6b_9504_29fb30a61bf6.slice - libcontainer container kubepods-burstable-poda8e7ae0a_fa7f_4b6b_9504_29fb30a61bf6.slice. May 13 12:56:37.087811 systemd[1]: Created slice kubepods-burstable-pode8989773_d65f_4ff4_9062_3d2052cd3d4c.slice - libcontainer container kubepods-burstable-pode8989773_d65f_4ff4_9062_3d2052cd3d4c.slice. 
May 13 12:56:37.132664 kubelet[2917]: I0513 12:56:37.132606 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6-config-volume\") pod \"coredns-668d6bf9bc-2gjwq\" (UID: \"a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6\") " pod="kube-system/coredns-668d6bf9bc-2gjwq" May 13 12:56:37.132664 kubelet[2917]: I0513 12:56:37.132632 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67cxb\" (UniqueName: \"kubernetes.io/projected/e8989773-d65f-4ff4-9062-3d2052cd3d4c-kube-api-access-67cxb\") pod \"coredns-668d6bf9bc-x5lz4\" (UID: \"e8989773-d65f-4ff4-9062-3d2052cd3d4c\") " pod="kube-system/coredns-668d6bf9bc-x5lz4" May 13 12:56:37.132822 kubelet[2917]: I0513 12:56:37.132645 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhzg\" (UniqueName: \"kubernetes.io/projected/a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6-kube-api-access-thhzg\") pod \"coredns-668d6bf9bc-2gjwq\" (UID: \"a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6\") " pod="kube-system/coredns-668d6bf9bc-2gjwq" May 13 12:56:37.132822 kubelet[2917]: I0513 12:56:37.132803 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8989773-d65f-4ff4-9062-3d2052cd3d4c-config-volume\") pod \"coredns-668d6bf9bc-x5lz4\" (UID: \"e8989773-d65f-4ff4-9062-3d2052cd3d4c\") " pod="kube-system/coredns-668d6bf9bc-x5lz4" May 13 12:56:37.397742 containerd[1620]: time="2025-05-13T12:56:37.397619851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5lz4,Uid:e8989773-d65f-4ff4-9062-3d2052cd3d4c,Namespace:kube-system,Attempt:0,}" May 13 12:56:37.397809 containerd[1620]: time="2025-05-13T12:56:37.397751964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2gjwq,Uid:a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6,Namespace:kube-system,Attempt:0,}" May 13 12:56:39.365646 systemd-networkd[1542]: cilium_host: Link UP May 13 12:56:39.365739 systemd-networkd[1542]: cilium_net: Link UP May 13 12:56:39.365836 systemd-networkd[1542]: cilium_net: Gained carrier May 13 12:56:39.365921 systemd-networkd[1542]: cilium_host: Gained carrier May 13 12:56:39.567232 systemd-networkd[1542]: cilium_vxlan: Link UP May 13 12:56:39.567313 systemd-networkd[1542]: cilium_vxlan: Gained carrier May 13 12:56:39.895281 systemd-networkd[1542]: cilium_net: Gained IPv6LL May 13 12:56:40.043174 kernel: NET: Registered PF_ALG protocol family May 13 12:56:40.086279 systemd-networkd[1542]: cilium_host: Gained IPv6LL May 13 12:56:40.444706 systemd-networkd[1542]: lxc_health: Link UP May 13 12:56:40.450824 systemd-networkd[1542]: lxc_health: Gained carrier May 13 12:56:40.947100 systemd-networkd[1542]: lxc46dfecba5ee6: Link UP May 13 12:56:40.954149 kernel: eth0: renamed from tmp1223d May 13 12:56:40.988450 systemd-networkd[1542]: lxc46dfecba5ee6: Gained carrier May 13 12:56:40.989175 systemd-networkd[1542]: lxcaca1ee5c9879: Link UP May 13 12:56:40.994183 kernel: eth0: renamed from tmp64951 May 13 12:56:40.995535 systemd-networkd[1542]: lxcaca1ee5c9879: Gained carrier May 13 12:56:41.494228 systemd-networkd[1542]: cilium_vxlan: Gained IPv6LL May 13 12:56:41.622227 systemd-networkd[1542]: lxc_health: Gained IPv6LL May 13 12:56:41.732280 kubelet[2917]: I0513 12:56:41.731944 2917 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/cilium-fn62g" podStartSLOduration=10.102191062 podStartE2EDuration="16.731930153s" podCreationTimestamp="2025-05-13 12:56:25 +0000 UTC" firstStartedPulling="2025-05-13 12:56:25.798475559 +0000 UTC m=+6.336972624" lastFinishedPulling="2025-05-13 12:56:32.428214647 +0000 UTC m=+12.966711715" observedRunningTime="2025-05-13 12:56:37.67286187 +0000 UTC m=+18.211358945" watchObservedRunningTime="2025-05-13 12:56:41.731930153 +0000 UTC m=+22.270427228" May 13 12:56:42.454227 systemd-networkd[1542]: lxc46dfecba5ee6: Gained IPv6LL May 13 12:56:42.710276 systemd-networkd[1542]: lxcaca1ee5c9879: Gained IPv6LL May 13 12:56:43.489241 containerd[1620]: time="2025-05-13T12:56:43.489196157Z" level=info msg="connecting to shim 1223d1c3cbe31eedf291f062abd6be52dd4fc1a293dfaa344bc948f5ef94fbae" address="unix:///run/containerd/s/4c8c17faaaab4b4abdfb5268b94a34d716d238b5b2b02083236181e12a35ced5" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:43.493051 containerd[1620]: time="2025-05-13T12:56:43.493026649Z" level=info msg="connecting to shim 649514e56cd69f324a27f9ada62d89deb2f0d6f0e3903d01863061d127d1c316" address="unix:///run/containerd/s/f7a0100e779ca5fcba26cc5314e93415b51dfd438224377e53f732ce747b1be9" namespace=k8s.io protocol=ttrpc version=3 May 13 12:56:43.511275 systemd[1]: Started cri-containerd-1223d1c3cbe31eedf291f062abd6be52dd4fc1a293dfaa344bc948f5ef94fbae.scope - libcontainer container 1223d1c3cbe31eedf291f062abd6be52dd4fc1a293dfaa344bc948f5ef94fbae. May 13 12:56:43.514499 systemd[1]: Started cri-containerd-649514e56cd69f324a27f9ada62d89deb2f0d6f0e3903d01863061d127d1c316.scope - libcontainer container 649514e56cd69f324a27f9ada62d89deb2f0d6f0e3903d01863061d127d1c316. May 13 12:56:43.530811 systemd-resolved[1495]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:56:43.533570 systemd-resolved[1495]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:56:43.575298 containerd[1620]: time="2025-05-13T12:56:43.575279518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2gjwq,Uid:a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"649514e56cd69f324a27f9ada62d89deb2f0d6f0e3903d01863061d127d1c316\"" May 13 12:56:43.577311 containerd[1620]: time="2025-05-13T12:56:43.576982425Z" level=info msg="CreateContainer within sandbox \"649514e56cd69f324a27f9ada62d89deb2f0d6f0e3903d01863061d127d1c316\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:56:43.588601 containerd[1620]: time="2025-05-13T12:56:43.588581403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5lz4,Uid:e8989773-d65f-4ff4-9062-3d2052cd3d4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1223d1c3cbe31eedf291f062abd6be52dd4fc1a293dfaa344bc948f5ef94fbae\"" May 13 12:56:43.595165 containerd[1620]: time="2025-05-13T12:56:43.590285044Z" level=info msg="CreateContainer within sandbox \"1223d1c3cbe31eedf291f062abd6be52dd4fc1a293dfaa344bc948f5ef94fbae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:56:43.610468 containerd[1620]: time="2025-05-13T12:56:43.610435062Z" level=info msg="Container 698dfc2b2eaeadbe71086d3bf4e797d495a07460162c2191d80ac0bec72c842b: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:43.611142 containerd[1620]: time="2025-05-13T12:56:43.611021528Z" level=info msg="Container 
94e4da639f2c89fdece06ea4fc60b04c2a4b89bb439e9e4a5cab79b5c933d617: CDI devices from CRI Config.CDIDevices: []" May 13 12:56:43.616617 containerd[1620]: time="2025-05-13T12:56:43.616583162Z" level=info msg="CreateContainer within sandbox \"1223d1c3cbe31eedf291f062abd6be52dd4fc1a293dfaa344bc948f5ef94fbae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"698dfc2b2eaeadbe71086d3bf4e797d495a07460162c2191d80ac0bec72c842b\"" May 13 12:56:43.618116 containerd[1620]: time="2025-05-13T12:56:43.618098600Z" level=info msg="StartContainer for \"698dfc2b2eaeadbe71086d3bf4e797d495a07460162c2191d80ac0bec72c842b\"" May 13 12:56:43.619220 containerd[1620]: time="2025-05-13T12:56:43.619181528Z" level=info msg="CreateContainer within sandbox \"649514e56cd69f324a27f9ada62d89deb2f0d6f0e3903d01863061d127d1c316\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94e4da639f2c89fdece06ea4fc60b04c2a4b89bb439e9e4a5cab79b5c933d617\"" May 13 12:56:43.619303 containerd[1620]: time="2025-05-13T12:56:43.619182221Z" level=info msg="connecting to shim 698dfc2b2eaeadbe71086d3bf4e797d495a07460162c2191d80ac0bec72c842b" address="unix:///run/containerd/s/4c8c17faaaab4b4abdfb5268b94a34d716d238b5b2b02083236181e12a35ced5" protocol=ttrpc version=3 May 13 12:56:43.620837 containerd[1620]: time="2025-05-13T12:56:43.620686451Z" level=info msg="StartContainer for \"94e4da639f2c89fdece06ea4fc60b04c2a4b89bb439e9e4a5cab79b5c933d617\"" May 13 12:56:43.622275 containerd[1620]: time="2025-05-13T12:56:43.622258277Z" level=info msg="connecting to shim 94e4da639f2c89fdece06ea4fc60b04c2a4b89bb439e9e4a5cab79b5c933d617" address="unix:///run/containerd/s/f7a0100e779ca5fcba26cc5314e93415b51dfd438224377e53f732ce747b1be9" protocol=ttrpc version=3 May 13 12:56:43.635253 systemd[1]: Started cri-containerd-698dfc2b2eaeadbe71086d3bf4e797d495a07460162c2191d80ac0bec72c842b.scope - libcontainer container 698dfc2b2eaeadbe71086d3bf4e797d495a07460162c2191d80ac0bec72c842b. May 13 12:56:43.642263 systemd[1]: Started cri-containerd-94e4da639f2c89fdece06ea4fc60b04c2a4b89bb439e9e4a5cab79b5c933d617.scope - libcontainer container 94e4da639f2c89fdece06ea4fc60b04c2a4b89bb439e9e4a5cab79b5c933d617. 
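One pattern worth noting in the containerd entries of this log: every "connecting to shim" line for a container reuses the unix socket address first logged when its pod sandbox was created (both coredns containers above connect to the same /run/containerd/s/... sockets that their sandboxes used, just as every cilium container joined the cilium sandbox's socket). A small sketch that groups such lines from a saved plain-text journal excerpt by socket address; the filename is a placeholder:

    import re
    from collections import defaultdict

    LOG_FILE = "journal.txt"  # placeholder: a plain-text excerpt of this journal

    pattern = re.compile(r'connecting to shim (\S+)" address="([^"]+)"')

    by_socket = defaultdict(list)
    with open(LOG_FILE, encoding="utf-8") as fh:
        for line in fh:
            for shim_id, address in pattern.findall(line):
                by_socket[address].append(shim_id)

    for address, ids in by_socket.items():
        print(address, "->", ids)  # sandbox id followed by the containers that joined it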
May 13 12:56:43.665376 containerd[1620]: time="2025-05-13T12:56:43.665342284Z" level=info msg="StartContainer for \"94e4da639f2c89fdece06ea4fc60b04c2a4b89bb439e9e4a5cab79b5c933d617\" returns successfully" May 13 12:56:43.665630 containerd[1620]: time="2025-05-13T12:56:43.665571976Z" level=info msg="StartContainer for \"698dfc2b2eaeadbe71086d3bf4e797d495a07460162c2191d80ac0bec72c842b\" returns successfully" May 13 12:56:43.687287 kubelet[2917]: I0513 12:56:43.687252 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x5lz4" podStartSLOduration=17.686394513 podStartE2EDuration="17.686394513s" podCreationTimestamp="2025-05-13 12:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:43.686046019 +0000 UTC m=+24.224543093" watchObservedRunningTime="2025-05-13 12:56:43.686394513 +0000 UTC m=+24.224891578" May 13 12:56:43.693858 kubelet[2917]: I0513 12:56:43.693398 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2gjwq" podStartSLOduration=17.693386883 podStartE2EDuration="17.693386883s" podCreationTimestamp="2025-05-13 12:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:56:43.692566916 +0000 UTC m=+24.231063991" watchObservedRunningTime="2025-05-13 12:56:43.693386883 +0000 UTC m=+24.231883953" May 13 12:56:44.471658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060017428.mount: Deactivated successfully. May 13 12:57:22.956694 systemd[1]: Started sshd@7-139.178.70.101:22-147.75.109.163:41788.service - OpenSSH per-connection server daemon (147.75.109.163:41788). May 13 12:57:23.021183 sshd[4242]: Accepted publickey for core from 147.75.109.163 port 41788 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:23.022293 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:23.029885 systemd-logind[1597]: New session 10 of user core. May 13 12:57:23.044294 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 12:57:23.819229 sshd[4244]: Connection closed by 147.75.109.163 port 41788 May 13 12:57:23.819679 sshd-session[4242]: pam_unix(sshd:session): session closed for user core May 13 12:57:23.827436 systemd[1]: sshd@7-139.178.70.101:22-147.75.109.163:41788.service: Deactivated successfully. May 13 12:57:23.828823 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:57:23.830220 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit. May 13 12:57:23.830983 systemd-logind[1597]: Removed session 10. May 13 12:57:28.830375 systemd[1]: Started sshd@8-139.178.70.101:22-147.75.109.163:53348.service - OpenSSH per-connection server daemon (147.75.109.163:53348). May 13 12:57:28.884209 sshd[4259]: Accepted publickey for core from 147.75.109.163 port 53348 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:28.884993 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:28.887669 systemd-logind[1597]: New session 11 of user core. May 13 12:57:28.891217 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 13 12:57:29.025220 sshd[4261]: Connection closed by 147.75.109.163 port 53348 May 13 12:57:29.025579 sshd-session[4259]: pam_unix(sshd:session): session closed for user core May 13 12:57:29.027367 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit. May 13 12:57:29.027516 systemd[1]: sshd@8-139.178.70.101:22-147.75.109.163:53348.service: Deactivated successfully. May 13 12:57:29.028662 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:57:29.029931 systemd-logind[1597]: Removed session 11. May 13 12:57:34.035550 systemd[1]: Started sshd@9-139.178.70.101:22-147.75.109.163:53360.service - OpenSSH per-connection server daemon (147.75.109.163:53360). May 13 12:57:34.223263 sshd[4275]: Accepted publickey for core from 147.75.109.163 port 53360 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:34.224179 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:34.227776 systemd-logind[1597]: New session 12 of user core. May 13 12:57:34.236249 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 12:57:34.346062 sshd[4277]: Connection closed by 147.75.109.163 port 53360 May 13 12:57:34.345683 sshd-session[4275]: pam_unix(sshd:session): session closed for user core May 13 12:57:34.355894 systemd[1]: sshd@9-139.178.70.101:22-147.75.109.163:53360.service: Deactivated successfully. May 13 12:57:34.357082 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:57:34.358161 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit. May 13 12:57:34.359977 systemd[1]: Started sshd@10-139.178.70.101:22-147.75.109.163:53374.service - OpenSSH per-connection server daemon (147.75.109.163:53374). May 13 12:57:34.361048 systemd-logind[1597]: Removed session 12. May 13 12:57:34.400842 sshd[4290]: Accepted publickey for core from 147.75.109.163 port 53374 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:34.401582 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:34.404205 systemd-logind[1597]: New session 13 of user core. May 13 12:57:34.414214 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 12:57:34.589951 sshd[4292]: Connection closed by 147.75.109.163 port 53374 May 13 12:57:34.590459 sshd-session[4290]: pam_unix(sshd:session): session closed for user core May 13 12:57:34.603307 systemd[1]: sshd@10-139.178.70.101:22-147.75.109.163:53374.service: Deactivated successfully. May 13 12:57:34.606096 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:57:34.608187 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. May 13 12:57:34.611588 systemd[1]: Started sshd@11-139.178.70.101:22-147.75.109.163:53382.service - OpenSSH per-connection server daemon (147.75.109.163:53382). May 13 12:57:34.614028 systemd-logind[1597]: Removed session 13. May 13 12:57:34.674668 sshd[4302]: Accepted publickey for core from 147.75.109.163 port 53382 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:34.675924 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:34.682456 systemd-logind[1597]: New session 14 of user core. May 13 12:57:34.689362 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 13 12:57:34.792826 sshd[4304]: Connection closed by 147.75.109.163 port 53382 May 13 12:57:34.793179 sshd-session[4302]: pam_unix(sshd:session): session closed for user core May 13 12:57:34.795182 systemd[1]: sshd@11-139.178.70.101:22-147.75.109.163:53382.service: Deactivated successfully. May 13 12:57:34.796243 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:57:34.796752 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. May 13 12:57:34.797552 systemd-logind[1597]: Removed session 14. May 13 12:57:39.802525 systemd[1]: Started sshd@12-139.178.70.101:22-147.75.109.163:38048.service - OpenSSH per-connection server daemon (147.75.109.163:38048). May 13 12:57:39.849704 sshd[4315]: Accepted publickey for core from 147.75.109.163 port 38048 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:39.850613 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:39.853314 systemd-logind[1597]: New session 15 of user core. May 13 12:57:39.857208 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 12:57:39.940871 sshd[4317]: Connection closed by 147.75.109.163 port 38048 May 13 12:57:39.940523 sshd-session[4315]: pam_unix(sshd:session): session closed for user core May 13 12:57:39.942293 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. May 13 12:57:39.942871 systemd[1]: sshd@12-139.178.70.101:22-147.75.109.163:38048.service: Deactivated successfully. May 13 12:57:39.944075 systemd[1]: session-15.scope: Deactivated successfully. May 13 12:57:39.945205 systemd-logind[1597]: Removed session 15. May 13 12:57:44.951502 systemd[1]: Started sshd@13-139.178.70.101:22-147.75.109.163:38058.service - OpenSSH per-connection server daemon (147.75.109.163:38058). May 13 12:57:44.997331 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 38058 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:44.998117 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:45.000532 systemd-logind[1597]: New session 16 of user core. May 13 12:57:45.005204 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 12:57:45.088564 sshd[4331]: Connection closed by 147.75.109.163 port 38058 May 13 12:57:45.088958 sshd-session[4329]: pam_unix(sshd:session): session closed for user core May 13 12:57:45.098807 systemd[1]: sshd@13-139.178.70.101:22-147.75.109.163:38058.service: Deactivated successfully. May 13 12:57:45.099822 systemd[1]: session-16.scope: Deactivated successfully. May 13 12:57:45.100379 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. May 13 12:57:45.101428 systemd-logind[1597]: Removed session 16. May 13 12:57:45.102452 systemd[1]: Started sshd@14-139.178.70.101:22-147.75.109.163:38066.service - OpenSSH per-connection server daemon (147.75.109.163:38066). May 13 12:57:45.145229 sshd[4342]: Accepted publickey for core from 147.75.109.163 port 38066 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:45.145896 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:45.148904 systemd-logind[1597]: New session 17 of user core. May 13 12:57:45.157211 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 13 12:57:45.564010 sshd[4344]: Connection closed by 147.75.109.163 port 38066 May 13 12:57:45.565114 sshd-session[4342]: pam_unix(sshd:session): session closed for user core May 13 12:57:45.571979 systemd[1]: Started sshd@15-139.178.70.101:22-147.75.109.163:38080.service - OpenSSH per-connection server daemon (147.75.109.163:38080). May 13 12:57:45.574059 systemd[1]: sshd@14-139.178.70.101:22-147.75.109.163:38066.service: Deactivated successfully. May 13 12:57:45.576424 systemd[1]: session-17.scope: Deactivated successfully. May 13 12:57:45.578697 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. May 13 12:57:45.579646 systemd-logind[1597]: Removed session 17. May 13 12:57:45.624632 sshd[4351]: Accepted publickey for core from 147.75.109.163 port 38080 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:45.624916 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:45.629167 systemd-logind[1597]: New session 18 of user core. May 13 12:57:45.634240 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 12:57:46.452222 sshd[4356]: Connection closed by 147.75.109.163 port 38080 May 13 12:57:46.453333 sshd-session[4351]: pam_unix(sshd:session): session closed for user core May 13 12:57:46.461376 systemd[1]: sshd@15-139.178.70.101:22-147.75.109.163:38080.service: Deactivated successfully. May 13 12:57:46.462728 systemd[1]: session-18.scope: Deactivated successfully. May 13 12:57:46.464166 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. May 13 12:57:46.466026 systemd[1]: Started sshd@16-139.178.70.101:22-147.75.109.163:38086.service - OpenSSH per-connection server daemon (147.75.109.163:38086). May 13 12:57:46.468204 systemd-logind[1597]: Removed session 18. May 13 12:57:46.509292 sshd[4373]: Accepted publickey for core from 147.75.109.163 port 38086 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:46.510060 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:46.513089 systemd-logind[1597]: New session 19 of user core. May 13 12:57:46.517223 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 12:57:46.692541 sshd[4375]: Connection closed by 147.75.109.163 port 38086 May 13 12:57:46.693605 sshd-session[4373]: pam_unix(sshd:session): session closed for user core May 13 12:57:46.699607 systemd[1]: sshd@16-139.178.70.101:22-147.75.109.163:38086.service: Deactivated successfully. May 13 12:57:46.701553 systemd[1]: session-19.scope: Deactivated successfully. May 13 12:57:46.702805 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. May 13 12:57:46.706213 systemd[1]: Started sshd@17-139.178.70.101:22-147.75.109.163:38096.service - OpenSSH per-connection server daemon (147.75.109.163:38096). May 13 12:57:46.708107 systemd-logind[1597]: Removed session 19. May 13 12:57:46.745218 sshd[4384]: Accepted publickey for core from 147.75.109.163 port 38096 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:46.746000 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:46.749165 systemd-logind[1597]: New session 20 of user core. May 13 12:57:46.761201 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 13 12:57:46.855805 sshd[4386]: Connection closed by 147.75.109.163 port 38096 May 13 12:57:46.856158 sshd-session[4384]: pam_unix(sshd:session): session closed for user core May 13 12:57:46.858372 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. May 13 12:57:46.858510 systemd[1]: sshd@17-139.178.70.101:22-147.75.109.163:38096.service: Deactivated successfully. May 13 12:57:46.859579 systemd[1]: session-20.scope: Deactivated successfully. May 13 12:57:46.860661 systemd-logind[1597]: Removed session 20. May 13 12:57:51.866511 systemd[1]: Started sshd@18-139.178.70.101:22-147.75.109.163:36706.service - OpenSSH per-connection server daemon (147.75.109.163:36706). May 13 12:57:51.911685 sshd[4399]: Accepted publickey for core from 147.75.109.163 port 36706 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:51.912574 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:51.915647 systemd-logind[1597]: New session 21 of user core. May 13 12:57:51.920331 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 12:57:52.008405 sshd[4401]: Connection closed by 147.75.109.163 port 36706 May 13 12:57:52.008783 sshd-session[4399]: pam_unix(sshd:session): session closed for user core May 13 12:57:52.011346 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. May 13 12:57:52.011535 systemd[1]: sshd@18-139.178.70.101:22-147.75.109.163:36706.service: Deactivated successfully. May 13 12:57:52.012869 systemd[1]: session-21.scope: Deactivated successfully. May 13 12:57:52.014033 systemd-logind[1597]: Removed session 21. May 13 12:57:57.020034 systemd[1]: Started sshd@19-139.178.70.101:22-147.75.109.163:36710.service - OpenSSH per-connection server daemon (147.75.109.163:36710). May 13 12:57:57.061937 sshd[4412]: Accepted publickey for core from 147.75.109.163 port 36710 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:57:57.062690 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:57.065244 systemd-logind[1597]: New session 22 of user core. May 13 12:57:57.074273 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 12:57:57.164940 sshd[4414]: Connection closed by 147.75.109.163 port 36710 May 13 12:57:57.165288 sshd-session[4412]: pam_unix(sshd:session): session closed for user core May 13 12:57:57.167654 systemd[1]: sshd@19-139.178.70.101:22-147.75.109.163:36710.service: Deactivated successfully. May 13 12:57:57.168609 systemd[1]: session-22.scope: Deactivated successfully. May 13 12:57:57.169068 systemd-logind[1597]: Session 22 logged out. Waiting for processes to exit. May 13 12:57:57.169852 systemd-logind[1597]: Removed session 22. May 13 12:58:02.175406 systemd[1]: Started sshd@20-139.178.70.101:22-147.75.109.163:60050.service - OpenSSH per-connection server daemon (147.75.109.163:60050). May 13 12:58:02.228151 sshd[4427]: Accepted publickey for core from 147.75.109.163 port 60050 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:58:02.229269 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:58:02.231834 systemd-logind[1597]: New session 23 of user core. May 13 12:58:02.239356 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 13 12:58:02.349820 sshd[4429]: Connection closed by 147.75.109.163 port 60050 May 13 12:58:02.349293 sshd-session[4427]: pam_unix(sshd:session): session closed for user core May 13 12:58:02.351548 systemd[1]: sshd@20-139.178.70.101:22-147.75.109.163:60050.service: Deactivated successfully. May 13 12:58:02.353294 systemd[1]: session-23.scope: Deactivated successfully. May 13 12:58:02.354538 systemd-logind[1597]: Session 23 logged out. Waiting for processes to exit. May 13 12:58:02.355777 systemd-logind[1597]: Removed session 23. May 13 12:58:07.360555 systemd[1]: Started sshd@21-139.178.70.101:22-147.75.109.163:60058.service - OpenSSH per-connection server daemon (147.75.109.163:60058). May 13 12:58:07.402474 sshd[4440]: Accepted publickey for core from 147.75.109.163 port 60058 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:58:07.403158 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:58:07.405707 systemd-logind[1597]: New session 24 of user core. May 13 12:58:07.409204 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 12:58:07.507172 sshd[4442]: Connection closed by 147.75.109.163 port 60058 May 13 12:58:07.509069 sshd-session[4440]: pam_unix(sshd:session): session closed for user core May 13 12:58:07.513488 systemd[1]: sshd@21-139.178.70.101:22-147.75.109.163:60058.service: Deactivated successfully. May 13 12:58:07.514488 systemd[1]: session-24.scope: Deactivated successfully. May 13 12:58:07.515003 systemd-logind[1597]: Session 24 logged out. Waiting for processes to exit. May 13 12:58:07.517209 systemd[1]: Started sshd@22-139.178.70.101:22-147.75.109.163:60072.service - OpenSSH per-connection server daemon (147.75.109.163:60072). May 13 12:58:07.517984 systemd-logind[1597]: Removed session 24. May 13 12:58:07.557244 sshd[4453]: Accepted publickey for core from 147.75.109.163 port 60072 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:58:07.558379 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:58:07.562461 systemd-logind[1597]: New session 25 of user core. May 13 12:58:07.566252 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 12:58:08.898242 containerd[1620]: time="2025-05-13T12:58:08.898205034Z" level=info msg="StopContainer for \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" with timeout 30 (s)" May 13 12:58:08.911224 containerd[1620]: time="2025-05-13T12:58:08.911198961Z" level=info msg="Stop container \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" with signal terminated" May 13 12:58:08.939721 systemd[1]: cri-containerd-4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a.scope: Deactivated successfully. 
May 13 12:58:08.941303 containerd[1620]: time="2025-05-13T12:58:08.941236929Z" level=info msg="received exit event container_id:\"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" id:\"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" pid:3481 exited_at:{seconds:1747141088 nanos:940861813}" May 13 12:58:08.941458 containerd[1620]: time="2025-05-13T12:58:08.941440697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" id:\"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" pid:3481 exited_at:{seconds:1747141088 nanos:940861813}" May 13 12:58:08.947193 containerd[1620]: time="2025-05-13T12:58:08.946937927Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:58:08.949996 containerd[1620]: time="2025-05-13T12:58:08.949976379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" id:\"b969d98f01f59d426e71e44fb164298ab79f8588ea133a2097535acd0ef6fa67\" pid:4476 exited_at:{seconds:1747141088 nanos:949379783}" May 13 12:58:08.950838 containerd[1620]: time="2025-05-13T12:58:08.950772483Z" level=info msg="StopContainer for \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" with timeout 2 (s)" May 13 12:58:08.951086 containerd[1620]: time="2025-05-13T12:58:08.951071263Z" level=info msg="Stop container \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" with signal terminated" May 13 12:58:08.956046 systemd-networkd[1542]: lxc_health: Link DOWN May 13 12:58:08.956051 systemd-networkd[1542]: lxc_health: Lost carrier May 13 12:58:08.959571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a-rootfs.mount: Deactivated successfully. May 13 12:58:08.966343 containerd[1620]: time="2025-05-13T12:58:08.966317418Z" level=info msg="StopContainer for \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" returns successfully" May 13 12:58:08.966820 containerd[1620]: time="2025-05-13T12:58:08.966799325Z" level=info msg="StopPodSandbox for \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\"" May 13 12:58:08.972568 containerd[1620]: time="2025-05-13T12:58:08.972540248Z" level=info msg="Container to stop \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:58:08.976242 systemd[1]: cri-containerd-6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d.scope: Deactivated successfully. May 13 12:58:08.976435 systemd[1]: cri-containerd-6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d.scope: Consumed 4.288s CPU time, 218.2M memory peak, 100.5M read from disk, 13.3M written to disk. May 13 12:58:08.977378 systemd[1]: cri-containerd-8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e.scope: Deactivated successfully. 
May 13 12:58:08.978028 containerd[1620]: time="2025-05-13T12:58:08.977282555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" id:\"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" pid:3553 exited_at:{seconds:1747141088 nanos:977071418}" May 13 12:58:08.978028 containerd[1620]: time="2025-05-13T12:58:08.977491452Z" level=info msg="received exit event container_id:\"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" id:\"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" pid:3553 exited_at:{seconds:1747141088 nanos:977071418}" May 13 12:58:08.981630 containerd[1620]: time="2025-05-13T12:58:08.981591445Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" id:\"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" pid:3148 exit_status:137 exited_at:{seconds:1747141088 nanos:981446838}" May 13 12:58:08.993760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d-rootfs.mount: Deactivated successfully. May 13 12:58:09.003420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e-rootfs.mount: Deactivated successfully. May 13 12:58:09.004452 containerd[1620]: time="2025-05-13T12:58:09.004432767Z" level=info msg="shim disconnected" id=8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e namespace=k8s.io May 13 12:58:09.004452 containerd[1620]: time="2025-05-13T12:58:09.004449627Z" level=warning msg="cleaning up after shim disconnected" id=8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e namespace=k8s.io May 13 12:58:09.010267 containerd[1620]: time="2025-05-13T12:58:09.004453939Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 12:58:09.010377 containerd[1620]: time="2025-05-13T12:58:09.006866097Z" level=info msg="StopContainer for \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" returns successfully" May 13 12:58:09.010807 containerd[1620]: time="2025-05-13T12:58:09.010659595Z" level=info msg="StopPodSandbox for \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\"" May 13 12:58:09.010807 containerd[1620]: time="2025-05-13T12:58:09.010698476Z" level=info msg="Container to stop \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:58:09.010807 containerd[1620]: time="2025-05-13T12:58:09.010709044Z" level=info msg="Container to stop \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:58:09.010807 containerd[1620]: time="2025-05-13T12:58:09.010715338Z" level=info msg="Container to stop \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:58:09.010807 containerd[1620]: time="2025-05-13T12:58:09.010719786Z" level=info msg="Container to stop \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:58:09.010807 containerd[1620]: time="2025-05-13T12:58:09.010723980Z" level=info msg="Container to stop 
\"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 12:58:09.015366 systemd[1]: cri-containerd-e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20.scope: Deactivated successfully. May 13 12:58:09.033074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20-rootfs.mount: Deactivated successfully. May 13 12:58:09.036840 containerd[1620]: time="2025-05-13T12:58:09.036815094Z" level=info msg="shim disconnected" id=e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20 namespace=k8s.io May 13 12:58:09.037004 containerd[1620]: time="2025-05-13T12:58:09.036959561Z" level=warning msg="cleaning up after shim disconnected" id=e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20 namespace=k8s.io May 13 12:58:09.037004 containerd[1620]: time="2025-05-13T12:58:09.036968505Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 12:58:09.040147 containerd[1620]: time="2025-05-13T12:58:09.039910114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" id:\"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" pid:3065 exit_status:137 exited_at:{seconds:1747141089 nanos:19745450}" May 13 12:58:09.042865 containerd[1620]: time="2025-05-13T12:58:09.042781205Z" level=info msg="received exit event sandbox_id:\"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" exit_status:137 exited_at:{seconds:1747141088 nanos:981446838}" May 13 12:58:09.042922 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e-shm.mount: Deactivated successfully. 
May 13 12:58:09.048344 containerd[1620]: time="2025-05-13T12:58:09.048321639Z" level=info msg="TearDown network for sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" successfully" May 13 12:58:09.048492 containerd[1620]: time="2025-05-13T12:58:09.048426209Z" level=info msg="StopPodSandbox for \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" returns successfully" May 13 12:58:09.048589 containerd[1620]: time="2025-05-13T12:58:09.048578445Z" level=info msg="received exit event sandbox_id:\"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" exit_status:137 exited_at:{seconds:1747141089 nanos:19745450}" May 13 12:58:09.048785 containerd[1620]: time="2025-05-13T12:58:09.048763024Z" level=info msg="TearDown network for sandbox \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" successfully" May 13 12:58:09.048785 containerd[1620]: time="2025-05-13T12:58:09.048777248Z" level=info msg="StopPodSandbox for \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" returns successfully" May 13 12:58:09.211221 kubelet[2917]: I0513 12:58:09.211191 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-lib-modules\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211505 kubelet[2917]: I0513 12:58:09.211226 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-etc-cni-netd\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211505 kubelet[2917]: I0513 12:58:09.211243 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cni-path\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211505 kubelet[2917]: I0513 12:58:09.211255 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-xtables-lock\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211505 kubelet[2917]: I0513 12:58:09.211273 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89jbc\" (UniqueName: \"kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-kube-api-access-89jbc\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211505 kubelet[2917]: I0513 12:58:09.211288 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-kernel\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211505 kubelet[2917]: I0513 12:58:09.211303 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f72d658-0891-4033-80cc-2f487967107b-cilium-config-path\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 
12:58:09.211652 kubelet[2917]: I0513 12:58:09.211314 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-run\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211652 kubelet[2917]: I0513 12:58:09.211326 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f72d658-0891-4033-80cc-2f487967107b-clustermesh-secrets\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211652 kubelet[2917]: I0513 12:58:09.211336 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-hostproc\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211652 kubelet[2917]: I0513 12:58:09.211347 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-cgroup\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211652 kubelet[2917]: I0513 12:58:09.211358 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-hubble-tls\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211652 kubelet[2917]: I0513 12:58:09.211370 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de9f5769-23ad-4270-8b25-d6e236917638-cilium-config-path\") pod \"de9f5769-23ad-4270-8b25-d6e236917638\" (UID: \"de9f5769-23ad-4270-8b25-d6e236917638\") " May 13 12:58:09.211790 kubelet[2917]: I0513 12:58:09.211382 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdbz6\" (UniqueName: \"kubernetes.io/projected/de9f5769-23ad-4270-8b25-d6e236917638-kube-api-access-sdbz6\") pod \"de9f5769-23ad-4270-8b25-d6e236917638\" (UID: \"de9f5769-23ad-4270-8b25-d6e236917638\") " May 13 12:58:09.211790 kubelet[2917]: I0513 12:58:09.211393 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-bpf-maps\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211790 kubelet[2917]: I0513 12:58:09.211405 2917 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-net\") pod \"6f72d658-0891-4033-80cc-2f487967107b\" (UID: \"6f72d658-0891-4033-80cc-2f487967107b\") " May 13 12:58:09.211790 kubelet[2917]: I0513 12:58:09.211466 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.211790 kubelet[2917]: I0513 12:58:09.211498 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.211899 kubelet[2917]: I0513 12:58:09.211512 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.211899 kubelet[2917]: I0513 12:58:09.211522 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.211899 kubelet[2917]: I0513 12:58:09.211532 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.211967 kubelet[2917]: I0513 12:58:09.211947 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.211991 kubelet[2917]: I0513 12:58:09.211967 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.214008 kubelet[2917]: I0513 12:58:09.213992 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.214197 kubelet[2917]: I0513 12:58:09.214167 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.215896 kubelet[2917]: I0513 12:58:09.215792 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f72d658-0891-4033-80cc-2f487967107b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 12:58:09.215896 kubelet[2917]: I0513 12:58:09.215847 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-kube-api-access-89jbc" (OuterVolumeSpecName: "kube-api-access-89jbc") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "kube-api-access-89jbc". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 12:58:09.216148 kubelet[2917]: I0513 12:58:09.216032 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 12:58:09.216346 kubelet[2917]: I0513 12:58:09.216327 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 12:58:09.217395 kubelet[2917]: I0513 12:58:09.217382 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de9f5769-23ad-4270-8b25-d6e236917638-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de9f5769-23ad-4270-8b25-d6e236917638" (UID: "de9f5769-23ad-4270-8b25-d6e236917638"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 12:58:09.219374 kubelet[2917]: I0513 12:58:09.219355 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f72d658-0891-4033-80cc-2f487967107b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f72d658-0891-4033-80cc-2f487967107b" (UID: "6f72d658-0891-4033-80cc-2f487967107b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 12:58:09.219462 kubelet[2917]: I0513 12:58:09.219444 2917 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9f5769-23ad-4270-8b25-d6e236917638-kube-api-access-sdbz6" (OuterVolumeSpecName: "kube-api-access-sdbz6") pod "de9f5769-23ad-4270-8b25-d6e236917638" (UID: "de9f5769-23ad-4270-8b25-d6e236917638"). InnerVolumeSpecName "kube-api-access-sdbz6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 12:58:09.312046 kubelet[2917]: I0513 12:58:09.312015 2917 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312046 kubelet[2917]: I0513 12:58:09.312037 2917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312046 kubelet[2917]: I0513 12:58:09.312046 2917 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312046 kubelet[2917]: I0513 12:58:09.312052 2917 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312234 kubelet[2917]: I0513 12:58:09.312058 2917 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312234 kubelet[2917]: I0513 12:58:09.312064 2917 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312234 kubelet[2917]: I0513 12:58:09.312070 2917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-89jbc\" (UniqueName: \"kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-kube-api-access-89jbc\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312234 kubelet[2917]: I0513 12:58:09.312077 2917 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312234 kubelet[2917]: I0513 12:58:09.312084 2917 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f72d658-0891-4033-80cc-2f487967107b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312234 kubelet[2917]: I0513 12:58:09.312113 2917 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312234 kubelet[2917]: I0513 12:58:09.312122 2917 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f72d658-0891-4033-80cc-2f487967107b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312389 kubelet[2917]: I0513 12:58:09.312282 2917 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312389 kubelet[2917]: I0513 12:58:09.312292 2917 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f72d658-0891-4033-80cc-2f487967107b-cilium-cgroup\") on node \"localhost\" 
DevicePath \"\"" May 13 12:58:09.312389 kubelet[2917]: I0513 12:58:09.312299 2917 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f72d658-0891-4033-80cc-2f487967107b-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312389 kubelet[2917]: I0513 12:58:09.312306 2917 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de9f5769-23ad-4270-8b25-d6e236917638-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.312389 kubelet[2917]: I0513 12:58:09.312312 2917 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sdbz6\" (UniqueName: \"kubernetes.io/projected/de9f5769-23ad-4270-8b25-d6e236917638-kube-api-access-sdbz6\") on node \"localhost\" DevicePath \"\"" May 13 12:58:09.577635 systemd[1]: Removed slice kubepods-burstable-pod6f72d658_0891_4033_80cc_2f487967107b.slice - libcontainer container kubepods-burstable-pod6f72d658_0891_4033_80cc_2f487967107b.slice. May 13 12:58:09.577702 systemd[1]: kubepods-burstable-pod6f72d658_0891_4033_80cc_2f487967107b.slice: Consumed 4.341s CPU time, 219.1M memory peak, 101.6M read from disk, 13.3M written to disk. May 13 12:58:09.578949 systemd[1]: Removed slice kubepods-besteffort-podde9f5769_23ad_4270_8b25_d6e236917638.slice - libcontainer container kubepods-besteffort-podde9f5769_23ad_4270_8b25_d6e236917638.slice. May 13 12:58:09.655142 kubelet[2917]: E0513 12:58:09.655100 2917 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 12:58:09.813811 kubelet[2917]: I0513 12:58:09.813787 2917 scope.go:117] "RemoveContainer" containerID="4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a" May 13 12:58:09.818032 containerd[1620]: time="2025-05-13T12:58:09.818003293Z" level=info msg="RemoveContainer for \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\"" May 13 12:58:09.835251 containerd[1620]: time="2025-05-13T12:58:09.834552548Z" level=info msg="RemoveContainer for \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" returns successfully" May 13 12:58:09.835251 containerd[1620]: time="2025-05-13T12:58:09.835177726Z" level=error msg="ContainerStatus for \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\": not found" May 13 12:58:09.835338 kubelet[2917]: I0513 12:58:09.835053 2917 scope.go:117] "RemoveContainer" containerID="4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a" May 13 12:58:09.835981 kubelet[2917]: E0513 12:58:09.835958 2917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\": not found" containerID="4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a" May 13 12:58:09.836228 kubelet[2917]: I0513 12:58:09.836034 2917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a"} err="failed to get container status \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"4e9416e40b5dc3bfff200f8e7431cdb775361286063697b8a502122f9167a66a\": not found" May 13 12:58:09.837069 kubelet[2917]: I0513 12:58:09.837012 2917 scope.go:117] "RemoveContainer" containerID="6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d" May 13 12:58:09.838367 containerd[1620]: time="2025-05-13T12:58:09.838273347Z" level=info msg="RemoveContainer for \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\"" May 13 12:58:09.841535 containerd[1620]: time="2025-05-13T12:58:09.841516414Z" level=info msg="RemoveContainer for \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" returns successfully" May 13 12:58:09.842409 kubelet[2917]: I0513 12:58:09.842016 2917 scope.go:117] "RemoveContainer" containerID="e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a" May 13 12:58:09.846027 containerd[1620]: time="2025-05-13T12:58:09.846011172Z" level=info msg="RemoveContainer for \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\"" May 13 12:58:09.847893 containerd[1620]: time="2025-05-13T12:58:09.847877906Z" level=info msg="RemoveContainer for \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" returns successfully" May 13 12:58:09.848020 kubelet[2917]: I0513 12:58:09.848003 2917 scope.go:117] "RemoveContainer" containerID="c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc" May 13 12:58:09.849472 containerd[1620]: time="2025-05-13T12:58:09.849457312Z" level=info msg="RemoveContainer for \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\"" May 13 12:58:09.850907 containerd[1620]: time="2025-05-13T12:58:09.850894193Z" level=info msg="RemoveContainer for \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" returns successfully" May 13 12:58:09.850981 kubelet[2917]: I0513 12:58:09.850971 2917 scope.go:117] "RemoveContainer" containerID="5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b" May 13 12:58:09.851922 containerd[1620]: time="2025-05-13T12:58:09.851856856Z" level=info msg="RemoveContainer for \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\"" May 13 12:58:09.853211 containerd[1620]: time="2025-05-13T12:58:09.853199689Z" level=info msg="RemoveContainer for \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" returns successfully" May 13 12:58:09.853331 kubelet[2917]: I0513 12:58:09.853308 2917 scope.go:117] "RemoveContainer" containerID="d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081" May 13 12:58:09.854038 containerd[1620]: time="2025-05-13T12:58:09.854025769Z" level=info msg="RemoveContainer for \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\"" May 13 12:58:09.855206 containerd[1620]: time="2025-05-13T12:58:09.855193050Z" level=info msg="RemoveContainer for \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" returns successfully" May 13 12:58:09.855272 kubelet[2917]: I0513 12:58:09.855262 2917 scope.go:117] "RemoveContainer" containerID="6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d" May 13 12:58:09.855414 containerd[1620]: time="2025-05-13T12:58:09.855393867Z" level=error msg="ContainerStatus for \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\": not found" May 13 12:58:09.855530 kubelet[2917]: E0513 12:58:09.855490 2917 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\": not found" containerID="6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d" May 13 12:58:09.855530 kubelet[2917]: I0513 12:58:09.855505 2917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d"} err="failed to get container status \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6aff68ab8d9f8a4a0b6545e735aed25ff565c63b9c0baa4b3132b3160dbf9e1d\": not found" May 13 12:58:09.855530 kubelet[2917]: I0513 12:58:09.855516 2917 scope.go:117] "RemoveContainer" containerID="e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a" May 13 12:58:09.855715 containerd[1620]: time="2025-05-13T12:58:09.855702737Z" level=error msg="ContainerStatus for \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\": not found" May 13 12:58:09.855845 kubelet[2917]: E0513 12:58:09.855836 2917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\": not found" containerID="e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a" May 13 12:58:09.855933 kubelet[2917]: I0513 12:58:09.855881 2917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a"} err="failed to get container status \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\": rpc error: code = NotFound desc = an error occurred when try to find container \"e745f9497f910befd708ac65fd54b44ad879a25d3747e6c02bd04281f0f35b2a\": not found" May 13 12:58:09.855933 kubelet[2917]: I0513 12:58:09.855892 2917 scope.go:117] "RemoveContainer" containerID="c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc" May 13 12:58:09.856057 containerd[1620]: time="2025-05-13T12:58:09.855976166Z" level=error msg="ContainerStatus for \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\": not found" May 13 12:58:09.856082 kubelet[2917]: E0513 12:58:09.856026 2917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\": not found" containerID="c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc" May 13 12:58:09.856082 kubelet[2917]: I0513 12:58:09.856035 2917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc"} err="failed to get container status \"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"c3246bcafe04a3fa0d6ecfb78482581b20c96fcc60a5fb885f2522ebc69ef4dc\": not found" May 13 12:58:09.856082 kubelet[2917]: I0513 12:58:09.856042 2917 scope.go:117] "RemoveContainer" containerID="5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b" May 13 12:58:09.856266 containerd[1620]: time="2025-05-13T12:58:09.856248348Z" level=error msg="ContainerStatus for \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\": not found" May 13 12:58:09.856324 kubelet[2917]: E0513 12:58:09.856306 2917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\": not found" containerID="5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b" May 13 12:58:09.856324 kubelet[2917]: I0513 12:58:09.856319 2917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b"} err="failed to get container status \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d0096468bcba3936475bb4a391dad46efd88ee72ed073ba79078ec6d410218b\": not found" May 13 12:58:09.856387 kubelet[2917]: I0513 12:58:09.856327 2917 scope.go:117] "RemoveContainer" containerID="d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081" May 13 12:58:09.856404 containerd[1620]: time="2025-05-13T12:58:09.856391069Z" level=error msg="ContainerStatus for \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\": not found" May 13 12:58:09.856490 kubelet[2917]: E0513 12:58:09.856450 2917 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\": not found" containerID="d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081" May 13 12:58:09.856490 kubelet[2917]: I0513 12:58:09.856459 2917 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081"} err="failed to get container status \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5b293926819f7d3c926416844e40939b1b8653138f2ac9b7c0b10d3785aa081\": not found" May 13 12:58:09.959852 systemd[1]: var-lib-kubelet-pods-de9f5769\x2d23ad\x2d4270\x2d8b25\x2dd6e236917638-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsdbz6.mount: Deactivated successfully. May 13 12:58:09.959931 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20-shm.mount: Deactivated successfully. May 13 12:58:09.959995 systemd[1]: var-lib-kubelet-pods-6f72d658\x2d0891\x2d4033\x2d80cc\x2d2f487967107b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d89jbc.mount: Deactivated successfully. 
May 13 12:58:09.960045 systemd[1]: var-lib-kubelet-pods-6f72d658\x2d0891\x2d4033\x2d80cc\x2d2f487967107b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 12:58:09.960093 systemd[1]: var-lib-kubelet-pods-6f72d658\x2d0891\x2d4033\x2d80cc\x2d2f487967107b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 12:58:10.864152 sshd[4455]: Connection closed by 147.75.109.163 port 60072 May 13 12:58:10.864521 sshd-session[4453]: pam_unix(sshd:session): session closed for user core May 13 12:58:10.872212 systemd[1]: sshd@22-139.178.70.101:22-147.75.109.163:60072.service: Deactivated successfully. May 13 12:58:10.873869 systemd[1]: session-25.scope: Deactivated successfully. May 13 12:58:10.875164 systemd-logind[1597]: Session 25 logged out. Waiting for processes to exit. May 13 12:58:10.876958 systemd[1]: Started sshd@23-139.178.70.101:22-147.75.109.163:60378.service - OpenSSH per-connection server daemon (147.75.109.163:60378). May 13 12:58:10.877784 systemd-logind[1597]: Removed session 25. May 13 12:58:10.915565 sshd[4604]: Accepted publickey for core from 147.75.109.163 port 60378 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:58:10.916490 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:58:10.920485 systemd-logind[1597]: New session 26 of user core. May 13 12:58:10.931255 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 12:58:11.287099 sshd[4606]: Connection closed by 147.75.109.163 port 60378 May 13 12:58:11.288356 sshd-session[4604]: pam_unix(sshd:session): session closed for user core May 13 12:58:11.296385 systemd[1]: sshd@23-139.178.70.101:22-147.75.109.163:60378.service: Deactivated successfully. May 13 12:58:11.298368 systemd[1]: session-26.scope: Deactivated successfully. May 13 12:58:11.299201 systemd-logind[1597]: Session 26 logged out. Waiting for processes to exit. May 13 12:58:11.304653 systemd[1]: Started sshd@24-139.178.70.101:22-147.75.109.163:60392.service - OpenSSH per-connection server daemon (147.75.109.163:60392). May 13 12:58:11.308868 systemd-logind[1597]: Removed session 26. May 13 12:58:11.324367 kubelet[2917]: I0513 12:58:11.324217 2917 memory_manager.go:355] "RemoveStaleState removing state" podUID="6f72d658-0891-4033-80cc-2f487967107b" containerName="cilium-agent" May 13 12:58:11.324367 kubelet[2917]: I0513 12:58:11.324236 2917 memory_manager.go:355] "RemoveStaleState removing state" podUID="de9f5769-23ad-4270-8b25-d6e236917638" containerName="cilium-operator" May 13 12:58:11.332676 systemd[1]: Created slice kubepods-burstable-pod0f4c107e_0257_4013_86b4_28c72ec99c7e.slice - libcontainer container kubepods-burstable-pod0f4c107e_0257_4013_86b4_28c72ec99c7e.slice. May 13 12:58:11.352065 sshd[4617]: Accepted publickey for core from 147.75.109.163 port 60392 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:58:11.352741 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:58:11.358741 systemd-logind[1597]: New session 27 of user core. May 13 12:58:11.363342 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 13 12:58:11.413070 sshd[4619]: Connection closed by 147.75.109.163 port 60392 May 13 12:58:11.414011 sshd-session[4617]: pam_unix(sshd:session): session closed for user core May 13 12:58:11.422925 kubelet[2917]: I0513 12:58:11.422902 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-etc-cni-netd\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423002 kubelet[2917]: I0513 12:58:11.422929 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-host-proc-sys-net\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423002 kubelet[2917]: I0513 12:58:11.422950 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-cilium-cgroup\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423002 kubelet[2917]: I0513 12:58:11.422962 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-cilium-run\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423002 kubelet[2917]: I0513 12:58:11.422973 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-hostproc\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423002 kubelet[2917]: I0513 12:58:11.422983 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-cni-path\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423122 kubelet[2917]: I0513 12:58:11.423004 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0f4c107e-0257-4013-86b4-28c72ec99c7e-cilium-ipsec-secrets\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423122 kubelet[2917]: I0513 12:58:11.423018 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-bpf-maps\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423122 kubelet[2917]: I0513 12:58:11.423028 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-lib-modules\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423122 kubelet[2917]: 
I0513 12:58:11.423043 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-host-proc-sys-kernel\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423122 kubelet[2917]: I0513 12:58:11.423061 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nds5\" (UniqueName: \"kubernetes.io/projected/0f4c107e-0257-4013-86b4-28c72ec99c7e-kube-api-access-4nds5\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423281 kubelet[2917]: I0513 12:58:11.423075 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f4c107e-0257-4013-86b4-28c72ec99c7e-cilium-config-path\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423281 kubelet[2917]: I0513 12:58:11.423087 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f4c107e-0257-4013-86b4-28c72ec99c7e-xtables-lock\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423281 kubelet[2917]: I0513 12:58:11.423106 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f4c107e-0257-4013-86b4-28c72ec99c7e-clustermesh-secrets\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.423281 kubelet[2917]: I0513 12:58:11.423123 2917 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f4c107e-0257-4013-86b4-28c72ec99c7e-hubble-tls\") pod \"cilium-676gs\" (UID: \"0f4c107e-0257-4013-86b4-28c72ec99c7e\") " pod="kube-system/cilium-676gs" May 13 12:58:11.424942 systemd[1]: sshd@24-139.178.70.101:22-147.75.109.163:60392.service: Deactivated successfully. May 13 12:58:11.426882 systemd[1]: session-27.scope: Deactivated successfully. May 13 12:58:11.427474 systemd-logind[1597]: Session 27 logged out. Waiting for processes to exit. May 13 12:58:11.429266 systemd[1]: Started sshd@25-139.178.70.101:22-147.75.109.163:60406.service - OpenSSH per-connection server daemon (147.75.109.163:60406). May 13 12:58:11.430618 systemd-logind[1597]: Removed session 27. May 13 12:58:11.472543 sshd[4626]: Accepted publickey for core from 147.75.109.163 port 60406 ssh2: RSA SHA256:bxyL21ypRg/l6L1U5vXH7bz9HOopqyjcFfRC9D+f+uA May 13 12:58:11.473424 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:58:11.476753 systemd-logind[1597]: New session 28 of user core. May 13 12:58:11.485392 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 13 12:58:11.575486 kubelet[2917]: I0513 12:58:11.575353 2917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f72d658-0891-4033-80cc-2f487967107b" path="/var/lib/kubelet/pods/6f72d658-0891-4033-80cc-2f487967107b/volumes" May 13 12:58:11.576031 kubelet[2917]: I0513 12:58:11.575986 2917 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9f5769-23ad-4270-8b25-d6e236917638" path="/var/lib/kubelet/pods/de9f5769-23ad-4270-8b25-d6e236917638/volumes" May 13 12:58:11.637932 containerd[1620]: time="2025-05-13T12:58:11.637872911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-676gs,Uid:0f4c107e-0257-4013-86b4-28c72ec99c7e,Namespace:kube-system,Attempt:0,}" May 13 12:58:11.648582 containerd[1620]: time="2025-05-13T12:58:11.648524300Z" level=info msg="connecting to shim 762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f" address="unix:///run/containerd/s/317c2e834c02d25ee3546463ca238703494a703d1790ba02f26d8fc20d646c4b" namespace=k8s.io protocol=ttrpc version=3 May 13 12:58:11.667268 systemd[1]: Started cri-containerd-762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f.scope - libcontainer container 762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f. May 13 12:58:11.683116 containerd[1620]: time="2025-05-13T12:58:11.683089402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-676gs,Uid:0f4c107e-0257-4013-86b4-28c72ec99c7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\"" May 13 12:58:11.685470 containerd[1620]: time="2025-05-13T12:58:11.685448569Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:58:11.688468 containerd[1620]: time="2025-05-13T12:58:11.688444299Z" level=info msg="Container f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d: CDI devices from CRI Config.CDIDevices: []" May 13 12:58:11.691317 containerd[1620]: time="2025-05-13T12:58:11.691295798Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d\"" May 13 12:58:11.691696 containerd[1620]: time="2025-05-13T12:58:11.691675675Z" level=info msg="StartContainer for \"f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d\"" May 13 12:58:11.692247 containerd[1620]: time="2025-05-13T12:58:11.692217589Z" level=info msg="connecting to shim f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d" address="unix:///run/containerd/s/317c2e834c02d25ee3546463ca238703494a703d1790ba02f26d8fc20d646c4b" protocol=ttrpc version=3 May 13 12:58:11.714292 systemd[1]: Started cri-containerd-f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d.scope - libcontainer container f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d. May 13 12:58:11.734043 containerd[1620]: time="2025-05-13T12:58:11.734013952Z" level=info msg="StartContainer for \"f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d\" returns successfully" May 13 12:58:11.748334 systemd[1]: cri-containerd-f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d.scope: Deactivated successfully. 
May 13 12:58:11.748532 systemd[1]: cri-containerd-f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d.scope: Consumed 13ms CPU time, 9.7M memory peak, 3.3M read from disk. May 13 12:58:11.749603 containerd[1620]: time="2025-05-13T12:58:11.749580709Z" level=info msg="received exit event container_id:\"f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d\" id:\"f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d\" pid:4695 exited_at:{seconds:1747141091 nanos:749309896}" May 13 12:58:11.749743 containerd[1620]: time="2025-05-13T12:58:11.749728130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d\" id:\"f7d9ddc7cdef7e8779ad44567a6d5658c7aa3c64c431e522b2c117778305939d\" pid:4695 exited_at:{seconds:1747141091 nanos:749309896}" May 13 12:58:11.811363 kubelet[2917]: I0513 12:58:11.811284 2917 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T12:58:11Z","lastTransitionTime":"2025-05-13T12:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 12:58:11.838714 containerd[1620]: time="2025-05-13T12:58:11.837884608Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:58:11.843651 containerd[1620]: time="2025-05-13T12:58:11.843613511Z" level=info msg="Container 2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb: CDI devices from CRI Config.CDIDevices: []" May 13 12:58:11.847050 containerd[1620]: time="2025-05-13T12:58:11.846998745Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb\"" May 13 12:58:11.848465 containerd[1620]: time="2025-05-13T12:58:11.848417144Z" level=info msg="StartContainer for \"2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb\"" May 13 12:58:11.851604 containerd[1620]: time="2025-05-13T12:58:11.851551569Z" level=info msg="connecting to shim 2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb" address="unix:///run/containerd/s/317c2e834c02d25ee3546463ca238703494a703d1790ba02f26d8fc20d646c4b" protocol=ttrpc version=3 May 13 12:58:11.868209 systemd[1]: Started cri-containerd-2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb.scope - libcontainer container 2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb. May 13 12:58:11.885320 containerd[1620]: time="2025-05-13T12:58:11.885295341Z" level=info msg="StartContainer for \"2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb\" returns successfully" May 13 12:58:11.894740 systemd[1]: cri-containerd-2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb.scope: Deactivated successfully. May 13 12:58:11.894909 systemd[1]: cri-containerd-2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb.scope: Consumed 10ms CPU time, 7.6M memory peak, 2.2M read from disk. 
May 13 12:58:11.895296 containerd[1620]: time="2025-05-13T12:58:11.895252731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb\" id:\"2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb\" pid:4742 exited_at:{seconds:1747141091 nanos:894796552}" May 13 12:58:11.895296 containerd[1620]: time="2025-05-13T12:58:11.895254014Z" level=info msg="received exit event container_id:\"2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb\" id:\"2849d4c73b8a333abbc598f6dc9af3d1a61e7eabc02bfe6e58d307cc1ac3d7cb\" pid:4742 exited_at:{seconds:1747141091 nanos:894796552}" May 13 12:58:12.573491 kubelet[2917]: E0513 12:58:12.573453 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2gjwq" podUID="a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6" May 13 12:58:12.841494 containerd[1620]: time="2025-05-13T12:58:12.841434381Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 12:58:12.869560 containerd[1620]: time="2025-05-13T12:58:12.868393089Z" level=info msg="Container 2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4: CDI devices from CRI Config.CDIDevices: []" May 13 12:58:12.868435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94099257.mount: Deactivated successfully. May 13 12:58:12.883390 containerd[1620]: time="2025-05-13T12:58:12.883356037Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4\"" May 13 12:58:12.883893 containerd[1620]: time="2025-05-13T12:58:12.883877938Z" level=info msg="StartContainer for \"2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4\"" May 13 12:58:12.884788 containerd[1620]: time="2025-05-13T12:58:12.884771291Z" level=info msg="connecting to shim 2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4" address="unix:///run/containerd/s/317c2e834c02d25ee3546463ca238703494a703d1790ba02f26d8fc20d646c4b" protocol=ttrpc version=3 May 13 12:58:12.901345 systemd[1]: Started cri-containerd-2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4.scope - libcontainer container 2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4. May 13 12:58:12.923001 containerd[1620]: time="2025-05-13T12:58:12.922949002Z" level=info msg="StartContainer for \"2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4\" returns successfully" May 13 12:58:12.927856 systemd[1]: cri-containerd-2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4.scope: Deactivated successfully. May 13 12:58:12.928019 systemd[1]: cri-containerd-2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4.scope: Consumed 13ms CPU time, 5.9M memory peak, 1.1M read from disk. 
May 13 12:58:12.928560 containerd[1620]: time="2025-05-13T12:58:12.928537862Z" level=info msg="received exit event container_id:\"2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4\" id:\"2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4\" pid:4785 exited_at:{seconds:1747141092 nanos:928309062}" May 13 12:58:12.928729 containerd[1620]: time="2025-05-13T12:58:12.928714843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4\" id:\"2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4\" pid:4785 exited_at:{seconds:1747141092 nanos:928309062}" May 13 12:58:13.528255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2493c949d3a12b6c17f13b5a79fa374a7aac0e787cc200f40130348f4493f2b4-rootfs.mount: Deactivated successfully. May 13 12:58:13.845608 containerd[1620]: time="2025-05-13T12:58:13.845499637Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 12:58:13.851790 containerd[1620]: time="2025-05-13T12:58:13.851676466Z" level=info msg="Container 09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b: CDI devices from CRI Config.CDIDevices: []" May 13 12:58:13.859157 containerd[1620]: time="2025-05-13T12:58:13.859097085Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b\"" May 13 12:58:13.859662 containerd[1620]: time="2025-05-13T12:58:13.859646591Z" level=info msg="StartContainer for \"09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b\"" May 13 12:58:13.861030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169797527.mount: Deactivated successfully. May 13 12:58:13.861315 containerd[1620]: time="2025-05-13T12:58:13.861296344Z" level=info msg="connecting to shim 09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b" address="unix:///run/containerd/s/317c2e834c02d25ee3546463ca238703494a703d1790ba02f26d8fc20d646c4b" protocol=ttrpc version=3 May 13 12:58:13.885288 systemd[1]: Started cri-containerd-09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b.scope - libcontainer container 09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b. May 13 12:58:13.905430 systemd[1]: cri-containerd-09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b.scope: Deactivated successfully. 
May 13 12:58:13.906485 containerd[1620]: time="2025-05-13T12:58:13.906053163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b\" id:\"09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b\" pid:4824 exited_at:{seconds:1747141093 nanos:905663526}" May 13 12:58:13.906593 containerd[1620]: time="2025-05-13T12:58:13.906573861Z" level=info msg="received exit event container_id:\"09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b\" id:\"09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b\" pid:4824 exited_at:{seconds:1747141093 nanos:905663526}" May 13 12:58:13.912873 containerd[1620]: time="2025-05-13T12:58:13.912849650Z" level=info msg="StartContainer for \"09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b\" returns successfully" May 13 12:58:13.935727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09d6f2ac1291a59c37507b020d91dff40f80fbb9a021c9e8fe98ad47c74a4c8b-rootfs.mount: Deactivated successfully. May 13 12:58:14.573676 kubelet[2917]: E0513 12:58:14.573451 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2gjwq" podUID="a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6" May 13 12:58:14.656679 kubelet[2917]: E0513 12:58:14.656644 2917 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 12:58:14.850205 containerd[1620]: time="2025-05-13T12:58:14.849371704Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 12:58:14.857445 containerd[1620]: time="2025-05-13T12:58:14.857423718Z" level=info msg="Container 6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b: CDI devices from CRI Config.CDIDevices: []" May 13 12:58:14.860991 containerd[1620]: time="2025-05-13T12:58:14.860973021Z" level=info msg="CreateContainer within sandbox \"762d4cc6729f811a32912ee4dd54bd7f106158ccc23048ab599ba8211642481f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\"" May 13 12:58:14.862172 containerd[1620]: time="2025-05-13T12:58:14.862119440Z" level=info msg="StartContainer for \"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\"" May 13 12:58:14.862845 containerd[1620]: time="2025-05-13T12:58:14.862831232Z" level=info msg="connecting to shim 6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b" address="unix:///run/containerd/s/317c2e834c02d25ee3546463ca238703494a703d1790ba02f26d8fc20d646c4b" protocol=ttrpc version=3 May 13 12:58:14.885351 systemd[1]: Started cri-containerd-6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b.scope - libcontainer container 6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b. 
May 13 12:58:14.903811 containerd[1620]: time="2025-05-13T12:58:14.903783847Z" level=info msg="StartContainer for \"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\" returns successfully" May 13 12:58:14.990001 containerd[1620]: time="2025-05-13T12:58:14.989969750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\" id:\"a8b515d1e78bd3cbfbf2c4ae1e7d0e7a2e334d3e340dd4e63be593ca084bb070\" pid:4893 exited_at:{seconds:1747141094 nanos:989572642}" May 13 12:58:15.848155 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) May 13 12:58:15.869919 kubelet[2917]: I0513 12:58:15.869716 2917 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-676gs" podStartSLOduration=4.869684451 podStartE2EDuration="4.869684451s" podCreationTimestamp="2025-05-13 12:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:58:15.86967599 +0000 UTC m=+116.408173074" watchObservedRunningTime="2025-05-13 12:58:15.869684451 +0000 UTC m=+116.408181517" May 13 12:58:16.574176 kubelet[2917]: E0513 12:58:16.574110 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2gjwq" podUID="a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6" May 13 12:58:17.900577 containerd[1620]: time="2025-05-13T12:58:17.900545305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\" id:\"8f2a122b805107fc62df20c4da036cf019aa7175e950aec638e5e06ee483b5fd\" pid:5055 exit_status:1 exited_at:{seconds:1747141097 nanos:900297115}" May 13 12:58:18.573863 kubelet[2917]: E0513 12:58:18.573711 2917 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-2gjwq" podUID="a8e7ae0a-fa7f-4b6b-9504-29fb30a61bf6" May 13 12:58:18.674423 systemd-networkd[1542]: lxc_health: Link UP May 13 12:58:18.689050 systemd-networkd[1542]: lxc_health: Gained carrier May 13 12:58:19.615105 containerd[1620]: time="2025-05-13T12:58:19.615054995Z" level=info msg="StopPodSandbox for \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\"" May 13 12:58:19.615664 containerd[1620]: time="2025-05-13T12:58:19.615440460Z" level=info msg="TearDown network for sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" successfully" May 13 12:58:19.615664 containerd[1620]: time="2025-05-13T12:58:19.615464398Z" level=info msg="StopPodSandbox for \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" returns successfully" May 13 12:58:19.616114 containerd[1620]: time="2025-05-13T12:58:19.615839404Z" level=info msg="RemovePodSandbox for \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\"" May 13 12:58:19.616114 containerd[1620]: time="2025-05-13T12:58:19.615912622Z" level=info msg="Forcibly stopping sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\"" May 13 12:58:19.616114 containerd[1620]: time="2025-05-13T12:58:19.615965219Z" level=info msg="TearDown network for 
sandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" successfully" May 13 12:58:19.621038 containerd[1620]: time="2025-05-13T12:58:19.620982939Z" level=info msg="Ensure that sandbox e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20 in task-service has been cleanup successfully" May 13 12:58:19.622124 containerd[1620]: time="2025-05-13T12:58:19.622074077Z" level=info msg="RemovePodSandbox \"e106217690421697ed6602fa1873d59f1791ebe2ffa4d73ce170761d36be6b20\" returns successfully" May 13 12:58:19.622714 containerd[1620]: time="2025-05-13T12:58:19.622519121Z" level=info msg="StopPodSandbox for \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\"" May 13 12:58:19.622714 containerd[1620]: time="2025-05-13T12:58:19.622595356Z" level=info msg="TearDown network for sandbox \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" successfully" May 13 12:58:19.622714 containerd[1620]: time="2025-05-13T12:58:19.622604490Z" level=info msg="StopPodSandbox for \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" returns successfully" May 13 12:58:19.623066 containerd[1620]: time="2025-05-13T12:58:19.622891383Z" level=info msg="RemovePodSandbox for \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\"" May 13 12:58:19.623066 containerd[1620]: time="2025-05-13T12:58:19.622960739Z" level=info msg="Forcibly stopping sandbox \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\"" May 13 12:58:19.623066 containerd[1620]: time="2025-05-13T12:58:19.623017429Z" level=info msg="TearDown network for sandbox \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" successfully" May 13 12:58:19.623917 containerd[1620]: time="2025-05-13T12:58:19.623903462Z" level=info msg="Ensure that sandbox 8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e in task-service has been cleanup successfully" May 13 12:58:19.625051 containerd[1620]: time="2025-05-13T12:58:19.624999828Z" level=info msg="RemovePodSandbox \"8fadcda861cf8f6b3a76f161d834a047a869c6b26576915223cc413b69efe52e\" returns successfully" May 13 12:58:20.032210 containerd[1620]: time="2025-05-13T12:58:20.032166232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\" id:\"0fe1f90ec6b6beda62e0d85fba258f34190f922cddce39398cf3201c627de2ea\" pid:5435 exited_at:{seconds:1747141100 nanos:31771590}" May 13 12:58:20.182290 systemd-networkd[1542]: lxc_health: Gained IPv6LL May 13 12:58:22.148502 containerd[1620]: time="2025-05-13T12:58:22.148443717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\" id:\"f248d4652c3b26ce5118a7f981c05059134e0259ef0f69aa550a395d588da3f5\" pid:5468 exited_at:{seconds:1747141102 nanos:148125450}" May 13 12:58:24.225012 containerd[1620]: time="2025-05-13T12:58:24.224840488Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6163c1372c96ce9c967ed945b91c0b3f4e5c86c457f79bd68140f9680545cd6b\" id:\"a30006e3b07fc9d0c3786c4beadff04de7a1a0ae0a66d85af9acf8cde0a9d2fe\" pid:5490 exited_at:{seconds:1747141104 nanos:224326071}" May 13 12:58:24.235180 sshd[4628]: Connection closed by 147.75.109.163 port 60406 May 13 12:58:24.235819 sshd-session[4626]: pam_unix(sshd:session): session closed for user core May 13 12:58:24.239244 systemd-logind[1597]: Session 28 logged out. Waiting for processes to exit. 
May 13 12:58:24.239720 systemd[1]: sshd@25-139.178.70.101:22-147.75.109.163:60406.service: Deactivated successfully. May 13 12:58:24.241681 systemd[1]: session-28.scope: Deactivated successfully. May 13 12:58:24.243908 systemd-logind[1597]: Removed session 28.