May 14 05:06:32.724874 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed May 14 03:42:56 -00 2025 May 14 05:06:32.724889 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3 May 14 05:06:32.724895 kernel: Disabled fast string operations May 14 05:06:32.724898 kernel: BIOS-provided physical RAM map: May 14 05:06:32.724902 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable May 14 05:06:32.724906 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved May 14 05:06:32.724912 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved May 14 05:06:32.724916 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable May 14 05:06:32.724920 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data May 14 05:06:32.724924 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS May 14 05:06:32.724928 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable May 14 05:06:32.724932 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved May 14 05:06:32.724936 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved May 14 05:06:32.724940 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved May 14 05:06:32.724946 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved May 14 05:06:32.724951 kernel: NX (Execute Disable) protection: active May 14 05:06:32.724955 kernel: APIC: Static calls initialized May 14 05:06:32.724960 kernel: SMBIOS 2.7 present. May 14 05:06:32.724964 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 May 14 05:06:32.724969 kernel: DMI: Memory slots populated: 1/128 May 14 05:06:32.724974 kernel: vmware: hypercall mode: 0x00 May 14 05:06:32.724979 kernel: Hypervisor detected: VMware May 14 05:06:32.724984 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz May 14 05:06:32.724988 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz May 14 05:06:32.724993 kernel: vmware: using clock offset of 3270961383 ns May 14 05:06:32.724997 kernel: tsc: Detected 3408.000 MHz processor May 14 05:06:32.725002 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 14 05:06:32.725007 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 14 05:06:32.725012 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 May 14 05:06:32.725016 kernel: total RAM covered: 3072M May 14 05:06:32.725022 kernel: Found optimal setting for mtrr clean up May 14 05:06:32.725027 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G May 14 05:06:32.725032 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs May 14 05:06:32.725037 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 14 05:06:32.725042 kernel: Using GB pages for direct mapping May 14 05:06:32.725046 kernel: ACPI: Early table checksum verification disabled May 14 05:06:32.725051 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) May 14 05:06:32.725056 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) May 14 05:06:32.725061 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) May 14 05:06:32.725067 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) May 14 05:06:32.725073 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 14 05:06:32.725078 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 May 14 05:06:32.725083 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) May 14 05:06:32.725088 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) May 14 05:06:32.725093 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) May 14 05:06:32.725098 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) May 14 05:06:32.725103 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) May 14 05:06:32.725108 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) May 14 05:06:32.725113 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] May 14 05:06:32.725118 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] May 14 05:06:32.725123 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 14 05:06:32.725128 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] May 14 05:06:32.725132 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] May 14 05:06:32.725137 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] May 14 05:06:32.725143 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] May 14 05:06:32.725148 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] May 14 05:06:32.725153 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] May 14 05:06:32.725157 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] May 14 05:06:32.725162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 14 05:06:32.725167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 14 05:06:32.725172 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug May 14 05:06:32.725177 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00001000-0x7fffffff] May 14 05:06:32.725182 kernel: NODE_DATA(0) allocated [mem 0x7fff8dc0-0x7fffffff] May 14 05:06:32.725188 kernel: Zone ranges: May 14 05:06:32.725309 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 14 05:06:32.725316 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] May 14 05:06:32.725321 kernel: Normal empty May 14 05:06:32.725326 kernel: Device empty May 14 05:06:32.725331 kernel: Movable zone start for each node May 14 05:06:32.725336 kernel: Early memory node ranges May 14 05:06:32.725340 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] May 14 05:06:32.725345 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] May 14 05:06:32.725350 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] May 14 05:06:32.725357 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] May 14 05:06:32.725362 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 05:06:32.725367 kernel: On node 0, zone DMA: 98 pages in unavailable ranges May 14 05:06:32.725372 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges May 14 05:06:32.725377 kernel: ACPI: PM-Timer IO Port: 0x1008 May 14 05:06:32.725382 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) May 14 05:06:32.725387 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) May 14 05:06:32.725392 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) May 14 05:06:32.725396 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) May 14 05:06:32.725402 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) May 14 05:06:32.725407 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) May 14 05:06:32.725412 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge 
lint[0x1]) May 14 05:06:32.725416 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) May 14 05:06:32.725421 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) May 14 05:06:32.725426 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) May 14 05:06:32.725431 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) May 14 05:06:32.725435 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) May 14 05:06:32.725440 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) May 14 05:06:32.725445 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) May 14 05:06:32.725450 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) May 14 05:06:32.725455 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) May 14 05:06:32.725460 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) May 14 05:06:32.725465 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) May 14 05:06:32.725470 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) May 14 05:06:32.725474 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) May 14 05:06:32.725479 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) May 14 05:06:32.725484 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) May 14 05:06:32.725489 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) May 14 05:06:32.725494 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) May 14 05:06:32.725499 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) May 14 05:06:32.725504 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) May 14 05:06:32.725509 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) May 14 05:06:32.725514 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) May 14 05:06:32.725519 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) May 14 05:06:32.725523 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) May 14 05:06:32.725528 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) May 14 05:06:32.725533 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) May 14 05:06:32.725538 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) May 14 05:06:32.725543 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) May 14 05:06:32.725548 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) May 14 05:06:32.725553 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) May 14 05:06:32.725558 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) May 14 05:06:32.725563 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) May 14 05:06:32.725567 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) May 14 05:06:32.725572 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) May 14 05:06:32.725581 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) May 14 05:06:32.725586 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) May 14 05:06:32.725591 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) May 14 05:06:32.725596 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) May 14 05:06:32.725602 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) May 14 05:06:32.725607 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) May 14 05:06:32.725612 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) May 14 05:06:32.725617 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) May 14 05:06:32.725623 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) May 14 05:06:32.725628 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x31] high edge lint[0x1]) May 14 05:06:32.725633 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) May 14 05:06:32.725639 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) May 14 05:06:32.725644 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) May 14 05:06:32.725649 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) May 14 05:06:32.725654 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) May 14 05:06:32.725659 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) May 14 05:06:32.725664 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) May 14 05:06:32.725669 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) May 14 05:06:32.725674 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) May 14 05:06:32.725679 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) May 14 05:06:32.725684 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) May 14 05:06:32.725690 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) May 14 05:06:32.725696 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) May 14 05:06:32.725700 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) May 14 05:06:32.725706 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) May 14 05:06:32.725711 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) May 14 05:06:32.725716 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) May 14 05:06:32.725721 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) May 14 05:06:32.725726 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) May 14 05:06:32.725731 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) May 14 05:06:32.725736 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) May 14 05:06:32.725743 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) May 14 05:06:32.725748 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) May 14 05:06:32.725753 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) May 14 05:06:32.725758 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) May 14 05:06:32.725763 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) May 14 05:06:32.725768 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) May 14 05:06:32.725773 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) May 14 05:06:32.725779 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) May 14 05:06:32.725784 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) May 14 05:06:32.725790 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) May 14 05:06:32.725795 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) May 14 05:06:32.725800 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) May 14 05:06:32.725805 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) May 14 05:06:32.725810 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) May 14 05:06:32.725815 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) May 14 05:06:32.725820 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) May 14 05:06:32.725825 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) May 14 05:06:32.725830 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) May 14 05:06:32.725835 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) May 14 05:06:32.725841 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) May 14 05:06:32.725847 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) May 14 05:06:32.725852 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) May 14 05:06:32.725857 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) May 14 05:06:32.725862 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) May 14 05:06:32.725867 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) May 14 05:06:32.725873 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) May 14 05:06:32.725878 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) May 14 05:06:32.725883 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) May 14 05:06:32.725888 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) May 14 05:06:32.725894 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) May 14 05:06:32.725899 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) May 14 05:06:32.725904 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) May 14 05:06:32.725910 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) May 14 05:06:32.725915 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) May 14 05:06:32.725920 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) May 14 05:06:32.725925 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) May 14 05:06:32.725930 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) May 14 05:06:32.725935 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) May 14 05:06:32.725940 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) May 14 05:06:32.725946 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) May 14 05:06:32.725951 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) May 14 05:06:32.725956 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) May 14 05:06:32.725961 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) May 14 05:06:32.725967 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) May 14 05:06:32.725972 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) May 14 05:06:32.725977 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) May 14 05:06:32.725982 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) May 14 05:06:32.725987 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) May 14 05:06:32.725992 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) May 14 05:06:32.725998 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) May 14 05:06:32.726003 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) May 14 05:06:32.726008 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) May 14 05:06:32.726013 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) May 14 05:06:32.726018 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) May 14 05:06:32.726023 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) May 14 05:06:32.726029 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) May 14 05:06:32.726034 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) May 14 05:06:32.726038 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 May 14 05:06:32.726044 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) May 14 05:06:32.726050 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 14 05:06:32.726055 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 May 14 05:06:32.726060 kernel: TSC deadline timer available May 14 05:06:32.726066 kernel: CPU topo: Max. logical packages: 128 May 14 05:06:32.726071 kernel: CPU topo: Max. logical dies: 128 May 14 05:06:32.726076 kernel: CPU topo: Max. 
dies per package: 1 May 14 05:06:32.726081 kernel: CPU topo: Max. threads per core: 1 May 14 05:06:32.726086 kernel: CPU topo: Num. cores per package: 1 May 14 05:06:32.726091 kernel: CPU topo: Num. threads per package: 1 May 14 05:06:32.726097 kernel: CPU topo: Allowing 2 present CPUs plus 126 hotplug CPUs May 14 05:06:32.726106 kernel: [mem 0x80000000-0xefffffff] available for PCI devices May 14 05:06:32.726111 kernel: Booting paravirtualized kernel on VMware hypervisor May 14 05:06:32.726116 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 14 05:06:32.726122 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 May 14 05:06:32.726127 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 May 14 05:06:32.726132 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 May 14 05:06:32.726138 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 May 14 05:06:32.726143 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 May 14 05:06:32.726149 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 May 14 05:06:32.726154 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 May 14 05:06:32.726159 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 May 14 05:06:32.726164 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 May 14 05:06:32.726169 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 May 14 05:06:32.726174 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 May 14 05:06:32.726179 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 May 14 05:06:32.726184 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 May 14 05:06:32.726189 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 May 14 05:06:32.726363 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 May 14 05:06:32.726369 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 May 14 05:06:32.726375 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 May 14 05:06:32.726380 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 May 14 05:06:32.726385 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 May 14 05:06:32.726391 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3 May 14 05:06:32.726396 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 05:06:32.726401 kernel: random: crng init done May 14 05:06:32.726408 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes May 14 05:06:32.726413 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes May 14 05:06:32.726419 kernel: printk: log_buf_len min size: 262144 bytes May 14 05:06:32.726424 kernel: printk: log_buf_len: 1048576 bytes May 14 05:06:32.726429 kernel: printk: early log buf free: 245576(93%) May 14 05:06:32.726434 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 05:06:32.726439 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 14 05:06:32.726445 kernel: Fallback order for Node 0: 0 May 14 05:06:32.726450 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 524157 May 14 05:06:32.726456 kernel: Policy zone: DMA32 May 14 05:06:32.726462 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 05:06:32.726467 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 May 14 05:06:32.726472 kernel: ftrace: allocating 40065 entries in 157 pages May 14 05:06:32.726478 kernel: ftrace: allocated 157 pages with 5 groups May 14 05:06:32.726483 kernel: Dynamic Preempt: voluntary May 14 05:06:32.726488 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 05:06:32.726493 kernel: rcu: RCU event tracing is enabled. May 14 05:06:32.726499 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. May 14 05:06:32.726505 kernel: Trampoline variant of Tasks RCU enabled. May 14 05:06:32.726510 kernel: Rude variant of Tasks RCU enabled. May 14 05:06:32.726515 kernel: Tracing variant of Tasks RCU enabled. May 14 05:06:32.726520 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 05:06:32.726526 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 May 14 05:06:32.726531 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 14 05:06:32.726536 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 14 05:06:32.726541 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. May 14 05:06:32.726547 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 May 14 05:06:32.726553 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. May 14 05:06:32.726558 kernel: Console: colour VGA+ 80x25 May 14 05:06:32.726563 kernel: printk: legacy console [tty0] enabled May 14 05:06:32.726568 kernel: printk: legacy console [ttyS0] enabled May 14 05:06:32.726573 kernel: ACPI: Core revision 20240827 May 14 05:06:32.726579 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns May 14 05:06:32.726584 kernel: APIC: Switch to symmetric I/O mode setup May 14 05:06:32.726589 kernel: x2apic enabled May 14 05:06:32.726594 kernel: APIC: Switched APIC routing to: physical x2apic May 14 05:06:32.726599 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 14 05:06:32.726606 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 14 05:06:32.726611 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) May 14 05:06:32.726616 kernel: Disabled fast string operations May 14 05:06:32.726621 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 14 05:06:32.726626 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 14 05:06:32.726632 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 14 05:06:32.726637 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit May 14 05:06:32.726642 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS May 14 05:06:32.726648 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 14 05:06:32.726654 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT May 14 05:06:32.726659 kernel: RETBleed: Mitigation: Enhanced IBRS May 14 05:06:32.726664 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 14 05:06:32.726669 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 14 05:06:32.726675 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 14 05:06:32.726680 kernel: SRBDS: Unknown: Dependent on hypervisor status May 14 05:06:32.726685 kernel: GDS: Unknown: Dependent on hypervisor status May 14 05:06:32.726690 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 14 05:06:32.726696 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 14 05:06:32.726702 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 14 05:06:32.726707 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 14 05:06:32.726712 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 14 05:06:32.726717 kernel: Freeing SMP alternatives memory: 32K May 14 05:06:32.726723 kernel: pid_max: default: 131072 minimum: 1024 May 14 05:06:32.726728 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 14 05:06:32.726733 kernel: landlock: Up and running. May 14 05:06:32.726738 kernel: SELinux: Initializing. May 14 05:06:32.726744 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 14 05:06:32.726750 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 14 05:06:32.726755 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) May 14 05:06:32.727041 kernel: Performance Events: Skylake events, core PMU driver. May 14 05:06:32.727049 kernel: core: CPUID marked event: 'cpu cycles' unavailable May 14 05:06:32.727054 kernel: core: CPUID marked event: 'instructions' unavailable May 14 05:06:32.727060 kernel: core: CPUID marked event: 'bus cycles' unavailable May 14 05:06:32.727065 kernel: core: CPUID marked event: 'cache references' unavailable May 14 05:06:32.727070 kernel: core: CPUID marked event: 'cache misses' unavailable May 14 05:06:32.727077 kernel: core: CPUID marked event: 'branch instructions' unavailable May 14 05:06:32.727082 kernel: core: CPUID marked event: 'branch misses' unavailable May 14 05:06:32.727087 kernel: ... version: 1 May 14 05:06:32.727093 kernel: ... bit width: 48 May 14 05:06:32.727098 kernel: ... generic registers: 4 May 14 05:06:32.727103 kernel: ... value mask: 0000ffffffffffff May 14 05:06:32.727108 kernel: ... max period: 000000007fffffff May 14 05:06:32.727132 kernel: ... fixed-purpose events: 0 May 14 05:06:32.727138 kernel: ... 
event mask: 000000000000000f May 14 05:06:32.727144 kernel: signal: max sigframe size: 1776 May 14 05:06:32.727163 kernel: rcu: Hierarchical SRCU implementation. May 14 05:06:32.727169 kernel: rcu: Max phase no-delay instances is 400. May 14 05:06:32.727174 kernel: Timer migration: 3 hierarchy levels; 8 children per group; 3 crossnode level May 14 05:06:32.727179 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 14 05:06:32.727184 kernel: smp: Bringing up secondary CPUs ... May 14 05:06:32.727190 kernel: smpboot: x86: Booting SMP configuration: May 14 05:06:32.727201 kernel: .... node #0, CPUs: #1 May 14 05:06:32.727207 kernel: Disabled fast string operations May 14 05:06:32.727213 kernel: smp: Brought up 1 node, 2 CPUs May 14 05:06:32.727218 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 14 05:06:32.727224 kernel: Memory: 1924256K/2096628K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 160992K reserved, 0K cma-reserved) May 14 05:06:32.727229 kernel: devtmpfs: initialized May 14 05:06:32.727235 kernel: x86/mm: Memory block size: 128MB May 14 05:06:32.727240 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 14 05:06:32.727245 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 05:06:32.727251 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 14 05:06:32.727256 kernel: pinctrl core: initialized pinctrl subsystem May 14 05:06:32.727262 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 05:06:32.727268 kernel: audit: initializing netlink subsys (disabled) May 14 05:06:32.727273 kernel: audit: type=2000 audit(1747199190.064:1): state=initialized audit_enabled=0 res=1 May 14 05:06:32.727278 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 05:06:32.727283 kernel: thermal_sys: Registered thermal governor 'user_space' May 14 05:06:32.727288 kernel: cpuidle: using governor menu May 14 05:06:32.727293 kernel: Simple Boot Flag at 0x36 set to 0x80 May 14 05:06:32.727299 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 05:06:32.727304 kernel: dca service started, version 1.12.1 May 14 05:06:32.727310 kernel: PCI: ECAM [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) for domain 0000 [bus 00-7f] May 14 05:06:32.727321 kernel: PCI: Using configuration type 1 for base access May 14 05:06:32.727328 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 14 05:06:32.727333 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 05:06:32.727339 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 14 05:06:32.727344 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 05:06:32.727350 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 14 05:06:32.727355 kernel: ACPI: Added _OSI(Module Device) May 14 05:06:32.727361 kernel: ACPI: Added _OSI(Processor Device) May 14 05:06:32.727367 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 05:06:32.727373 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 05:06:32.727378 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 05:06:32.727384 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 14 05:06:32.727389 kernel: ACPI: Interpreter enabled May 14 05:06:32.727395 kernel: ACPI: PM: (supports S0 S1 S5) May 14 05:06:32.727400 kernel: ACPI: Using IOAPIC for interrupt routing May 14 05:06:32.727406 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 14 05:06:32.727412 kernel: PCI: Using E820 reservations for host bridge windows May 14 05:06:32.727418 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 14 05:06:32.727423 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 14 05:06:32.727495 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 05:06:32.727545 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 14 05:06:32.727591 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 14 05:06:32.727599 kernel: PCI host bridge to bus 0000:00 May 14 05:06:32.727649 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 14 05:06:32.727693 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 14 05:06:32.727734 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 14 05:06:32.727773 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 14 05:06:32.727813 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 14 05:06:32.727852 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 14 05:06:32.727906 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 conventional PCI endpoint May 14 05:06:32.727962 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 conventional PCI bridge May 14 05:06:32.728010 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 14 05:06:32.728063 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint May 14 05:06:32.728114 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a conventional PCI endpoint May 14 05:06:32.728163 kernel: pci 0000:00:07.1: BAR 4 [io 0x1060-0x106f] May 14 05:06:32.728230 kernel: pci 0000:00:07.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 14 05:06:32.728279 kernel: pci 0000:00:07.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 14 05:06:32.728349 kernel: pci 0000:00:07.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 14 05:06:32.728400 kernel: pci 0000:00:07.1: BAR 3 [io 0x0376]: legacy IDE quirk May 14 05:06:32.732259 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 14 05:06:32.732317 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 14 05:06:32.732370 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 14 
05:06:32.732427 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 conventional PCI endpoint May 14 05:06:32.732476 kernel: pci 0000:00:07.7: BAR 0 [io 0x1080-0x10bf] May 14 05:06:32.732524 kernel: pci 0000:00:07.7: BAR 1 [mem 0xfebfe000-0xfebfffff 64bit] May 14 05:06:32.732576 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 conventional PCI endpoint May 14 05:06:32.732625 kernel: pci 0000:00:0f.0: BAR 0 [io 0x1070-0x107f] May 14 05:06:32.732674 kernel: pci 0000:00:0f.0: BAR 1 [mem 0xe8000000-0xefffffff pref] May 14 05:06:32.732721 kernel: pci 0000:00:0f.0: BAR 2 [mem 0xfe000000-0xfe7fffff] May 14 05:06:32.732767 kernel: pci 0000:00:0f.0: ROM [mem 0x00000000-0x00007fff pref] May 14 05:06:32.732812 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 14 05:06:32.732864 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 conventional PCI bridge May 14 05:06:32.732911 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 14 05:06:32.732956 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 14 05:06:32.733004 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 14 05:06:32.733051 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 14 05:06:32.733102 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.733151 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 14 05:06:32.733242 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 14 05:06:32.733294 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 14 05:06:32.733342 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 14 05:06:32.733395 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.733442 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 14 05:06:32.733487 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 14 05:06:32.733533 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 14 05:06:32.733579 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 14 05:06:32.733625 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 14 05:06:32.733675 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.733724 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 14 05:06:32.733770 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 14 05:06:32.733817 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 14 05:06:32.733862 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 14 05:06:32.733907 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 14 05:06:32.733960 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.734010 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 14 05:06:32.734056 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 14 05:06:32.734102 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 14 05:06:32.734148 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 14 05:06:32.734227 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.734278 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 14 05:06:32.734325 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 14 05:06:32.734374 kernel: pci 0000:00:15.4: bridge 
window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 14 05:06:32.734421 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 14 05:06:32.734472 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.734518 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 14 05:06:32.734564 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 14 05:06:32.734610 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 14 05:06:32.734656 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 14 05:06:32.734708 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.734755 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 14 05:06:32.734801 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 14 05:06:32.734847 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 14 05:06:32.734893 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 14 05:06:32.734942 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.734990 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 14 05:06:32.735037 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 14 05:06:32.735083 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 14 05:06:32.735134 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 14 05:06:32.735185 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.735240 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 14 05:06:32.735286 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 14 05:06:32.735332 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 14 05:06:32.735378 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 14 05:06:32.735431 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.735478 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 14 05:06:32.735527 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 14 05:06:32.735573 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 14 05:06:32.735619 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 14 05:06:32.735665 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 14 05:06:32.735716 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.735766 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 14 05:06:32.735812 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 14 05:06:32.735876 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 14 05:06:32.735946 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 14 05:06:32.735993 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 14 05:06:32.736042 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.736111 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 14 05:06:32.736163 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 14 05:06:32.736228 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 14 05:06:32.736276 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 14 05:06:32.736327 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.736373 kernel: pci 
0000:00:16.4: PCI bridge to [bus 0f] May 14 05:06:32.736419 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 14 05:06:32.736465 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 14 05:06:32.736513 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 14 05:06:32.736563 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.736609 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 14 05:06:32.736655 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 14 05:06:32.736703 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 14 05:06:32.736756 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 14 05:06:32.736808 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.736858 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 14 05:06:32.736904 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 14 05:06:32.736950 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 14 05:06:32.736995 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 14 05:06:32.737046 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.737093 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 14 05:06:32.737139 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 14 05:06:32.737187 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 14 05:06:32.737247 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 14 05:06:32.737297 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.737344 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 14 05:06:32.737390 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 14 05:06:32.737436 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 14 05:06:32.737482 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 14 05:06:32.737527 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 14 05:06:32.737581 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.737628 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 14 05:06:32.737674 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 14 05:06:32.737722 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 14 05:06:32.737769 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 14 05:06:32.737815 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 14 05:06:32.737866 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.737913 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 14 05:06:32.737959 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 14 05:06:32.738005 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 14 05:06:32.738053 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 14 05:06:32.738099 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 14 05:06:32.738153 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.738211 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 14 05:06:32.738262 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 14 05:06:32.738308 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 14 05:06:32.738354 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 14 05:06:32.738404 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.738454 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 14 05:06:32.738500 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 14 05:06:32.738546 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 14 05:06:32.738592 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 14 05:06:32.738642 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.738690 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 14 05:06:32.738738 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 14 05:06:32.738784 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 14 05:06:32.738851 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 14 05:06:32.738917 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.738965 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 14 05:06:32.739011 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 14 05:06:32.739058 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 14 05:06:32.739132 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 14 05:06:32.739225 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.739273 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 14 05:06:32.739320 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 14 05:06:32.739367 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 14 05:06:32.739413 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 14 05:06:32.739463 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.739510 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 14 05:06:32.739559 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 14 05:06:32.739605 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 14 05:06:32.739652 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 14 05:06:32.739699 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 14 05:06:32.739750 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.739798 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 14 05:06:32.739844 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 14 05:06:32.739892 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 14 05:06:32.739938 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 14 05:06:32.739984 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 14 05:06:32.740035 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.740082 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 14 05:06:32.740128 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 14 05:06:32.740174 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 14 05:06:32.740313 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 14 05:06:32.740548 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 PCIe Root 
Port May 14 05:06:32.740601 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 14 05:06:32.740667 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 14 05:06:32.740722 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 14 05:06:32.740775 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 14 05:06:32.740827 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.740878 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 14 05:06:32.740925 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 14 05:06:32.740972 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 14 05:06:32.741018 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 14 05:06:32.741069 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.741143 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 14 05:06:32.741232 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 14 05:06:32.741284 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 14 05:06:32.741331 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 14 05:06:32.741383 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.741430 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 14 05:06:32.741477 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 14 05:06:32.741523 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 14 05:06:32.741570 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 14 05:06:32.741623 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 PCIe Root Port May 14 05:06:32.741672 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 14 05:06:32.741718 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 14 05:06:32.741765 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 14 05:06:32.741811 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 14 05:06:32.741859 kernel: pci_bus 0000:01: extended config space not accessible May 14 05:06:32.741919 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 14 05:06:32.741971 kernel: pci_bus 0000:02: extended config space not accessible May 14 05:06:32.741982 kernel: acpiphp: Slot [32] registered May 14 05:06:32.741987 kernel: acpiphp: Slot [33] registered May 14 05:06:32.741993 kernel: acpiphp: Slot [34] registered May 14 05:06:32.741999 kernel: acpiphp: Slot [35] registered May 14 05:06:32.742004 kernel: acpiphp: Slot [36] registered May 14 05:06:32.742014 kernel: acpiphp: Slot [37] registered May 14 05:06:32.742019 kernel: acpiphp: Slot [38] registered May 14 05:06:32.742025 kernel: acpiphp: Slot [39] registered May 14 05:06:32.742030 kernel: acpiphp: Slot [40] registered May 14 05:06:32.742040 kernel: acpiphp: Slot [41] registered May 14 05:06:32.742046 kernel: acpiphp: Slot [42] registered May 14 05:06:32.742051 kernel: acpiphp: Slot [43] registered May 14 05:06:32.742057 kernel: acpiphp: Slot [44] registered May 14 05:06:32.742062 kernel: acpiphp: Slot [45] registered May 14 05:06:32.742068 kernel: acpiphp: Slot [46] registered May 14 05:06:32.742074 kernel: acpiphp: Slot [47] registered May 14 05:06:32.742079 kernel: acpiphp: Slot [48] registered May 14 05:06:32.742085 kernel: acpiphp: Slot [49] registered May 14 05:06:32.742091 kernel: acpiphp: Slot [50] registered 
May 14 05:06:32.742097 kernel: acpiphp: Slot [51] registered May 14 05:06:32.742105 kernel: acpiphp: Slot [52] registered May 14 05:06:32.742116 kernel: acpiphp: Slot [53] registered May 14 05:06:32.742122 kernel: acpiphp: Slot [54] registered May 14 05:06:32.742127 kernel: acpiphp: Slot [55] registered May 14 05:06:32.742133 kernel: acpiphp: Slot [56] registered May 14 05:06:32.742143 kernel: acpiphp: Slot [57] registered May 14 05:06:32.742149 kernel: acpiphp: Slot [58] registered May 14 05:06:32.742155 kernel: acpiphp: Slot [59] registered May 14 05:06:32.742162 kernel: acpiphp: Slot [60] registered May 14 05:06:32.742168 kernel: acpiphp: Slot [61] registered May 14 05:06:32.742173 kernel: acpiphp: Slot [62] registered May 14 05:06:32.742179 kernel: acpiphp: Slot [63] registered May 14 05:06:32.742240 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 14 05:06:32.742287 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 14 05:06:32.742334 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 14 05:06:32.742380 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 14 05:06:32.742428 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 14 05:06:32.742475 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 14 05:06:32.742527 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 PCIe Endpoint May 14 05:06:32.742575 kernel: pci 0000:03:00.0: BAR 0 [io 0x4000-0x4007] May 14 05:06:32.742622 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfd5f8000-0xfd5fffff 64bit] May 14 05:06:32.742669 kernel: pci 0000:03:00.0: ROM [mem 0x00000000-0x0000ffff pref] May 14 05:06:32.742716 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 14 05:06:32.742764 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' May 14 05:06:32.742813 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 14 05:06:32.742860 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 14 05:06:32.742907 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 14 05:06:32.742953 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 14 05:06:32.743000 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 14 05:06:32.743046 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 14 05:06:32.743093 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 14 05:06:32.743142 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 14 05:06:32.743203 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 PCIe Endpoint May 14 05:06:32.743257 kernel: pci 0000:0b:00.0: BAR 0 [mem 0xfd4fc000-0xfd4fcfff] May 14 05:06:32.743305 kernel: pci 0000:0b:00.0: BAR 1 [mem 0xfd4fd000-0xfd4fdfff] May 14 05:06:32.743352 kernel: pci 0000:0b:00.0: BAR 2 [mem 0xfd4fe000-0xfd4fffff] May 14 05:06:32.743398 kernel: pci 0000:0b:00.0: BAR 3 [io 0x5000-0x500f] May 14 05:06:32.743445 kernel: pci 0000:0b:00.0: ROM [mem 0x00000000-0x0000ffff pref] May 14 05:06:32.743495 kernel: pci 0000:0b:00.0: supports D1 D2 May 14 05:06:32.743541 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 14 05:06:32.743588 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 14 05:06:32.743634 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 14 05:06:32.743680 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 14 05:06:32.743727 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 14 05:06:32.743773 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 14 05:06:32.743821 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 14 05:06:32.743868 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 14 05:06:32.743914 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 14 05:06:32.743961 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 14 05:06:32.744007 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 14 05:06:32.744053 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 14 05:06:32.744100 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 14 05:06:32.744239 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 14 05:06:32.744289 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 14 05:06:32.744336 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 14 05:06:32.744383 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 14 05:06:32.744429 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 14 05:06:32.744476 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 14 05:06:32.744522 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 14 05:06:32.744568 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 14 05:06:32.744616 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 14 05:06:32.744663 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 14 05:06:32.744709 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 14 05:06:32.744756 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 14 05:06:32.744820 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 14 05:06:32.744829 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 14 05:06:32.744835 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 14 05:06:32.744841 kernel: ACPI: PCI: Interrupt link LNKB disabled May 14 05:06:32.744849 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 14 05:06:32.744855 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 14 05:06:32.744860 kernel: iommu: Default domain type: Translated May 14 05:06:32.744866 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 14 05:06:32.744872 kernel: PCI: Using ACPI for IRQ routing May 14 05:06:32.744877 kernel: PCI: pci_cache_line_size set to 64 bytes May 14 05:06:32.744883 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 14 05:06:32.744889 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 14 05:06:32.744936 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 14 05:06:32.744984 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 14 05:06:32.745031 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 14 05:06:32.745039 kernel: vgaarb: loaded May 14 05:06:32.745045 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 14 05:06:32.745051 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 14 05:06:32.745056 kernel: clocksource: Switched to clocksource tsc-early May 14 05:06:32.745062 kernel: VFS: Disk quotas dquot_6.6.0 May 14 05:06:32.745067 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 05:06:32.745073 kernel: pnp: PnP ACPI init May 14 05:06:32.745124 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 14 05:06:32.745168 kernel: system 
00:00: [io 0x1040-0x104f] has been reserved May 14 05:06:32.745225 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 14 05:06:32.745271 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 14 05:06:32.745318 kernel: pnp 00:06: [dma 2] May 14 05:06:32.745363 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 14 05:06:32.745408 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 14 05:06:32.745450 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 14 05:06:32.745458 kernel: pnp: PnP ACPI: found 8 devices May 14 05:06:32.745464 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 14 05:06:32.745469 kernel: NET: Registered PF_INET protocol family May 14 05:06:32.745475 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 05:06:32.745481 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 14 05:06:32.745487 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 05:06:32.745494 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 14 05:06:32.745500 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 14 05:06:32.745577 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 14 05:06:32.746653 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 14 05:06:32.746663 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 14 05:06:32.746669 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 05:06:32.746675 kernel: NET: Registered PF_XDP protocol family May 14 05:06:32.746739 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 14 05:06:32.746793 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 14 05:06:32.746847 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 14 05:06:32.746896 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 14 05:06:32.746944 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 14 05:06:32.746991 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 14 05:06:32.747039 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 14 05:06:32.747086 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 14 05:06:32.747133 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 14 05:06:32.747183 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 14 05:06:32.747243 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 May 14 05:06:32.747291 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 14 05:06:32.747338 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 14 05:06:32.747384 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 14 05:06:32.747432 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 14 05:06:32.747478 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 14 
05:06:32.747525 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 14 05:06:32.747574 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 14 05:06:32.747621 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 14 05:06:32.747669 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 14 05:06:32.747715 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 14 05:06:32.747777 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 14 05:06:32.747825 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 14 05:06:32.747872 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]: assigned May 14 05:06:32.747919 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref]: assigned May 14 05:06:32.747968 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748014 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748077 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748125 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748222 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748272 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748320 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748366 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748417 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748482 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748531 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748578 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748627 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748675 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748722 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748774 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748822 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748870 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.748918 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.748966 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749015 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749063 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749117 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749169 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749238 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749289 kernel: pci 0000:00:17.5: 
bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749338 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749386 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749434 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749482 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749530 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749582 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749630 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749679 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749728 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749776 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749825 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749875 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.749923 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.749974 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.750022 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.750070 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.750119 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.750167 kernel: pci 0000:00:18.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.750541 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.750593 kernel: pci 0000:00:18.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.750642 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.750690 kernel: pci 0000:00:18.5: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.750741 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.750790 kernel: pci 0000:00:18.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.750838 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.750886 kernel: pci 0000:00:18.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.750934 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.750982 kernel: pci 0000:00:18.2: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751030 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751078 kernel: pci 0000:00:17.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751130 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751181 kernel: pci 0000:00:17.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751247 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751296 kernel: pci 0000:00:17.5: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751345 kernel: pci 0000:00:17.4: bridge window [io size 
0x1000]: can't assign; no space May 14 05:06:32.751393 kernel: pci 0000:00:17.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751442 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751490 kernel: pci 0000:00:17.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751539 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751588 kernel: pci 0000:00:16.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751637 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751689 kernel: pci 0000:00:16.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751738 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751786 kernel: pci 0000:00:16.5: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751834 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751883 kernel: pci 0000:00:16.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.751932 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.751980 kernel: pci 0000:00:16.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.752032 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.752081 kernel: pci 0000:00:15.7: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.752134 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.752182 kernel: pci 0000:00:15.6: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.752251 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.752300 kernel: pci 0000:00:15.5: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.752350 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.752399 kernel: pci 0000:00:15.4: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.752453 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: can't assign; no space May 14 05:06:32.752502 kernel: pci 0000:00:15.3: bridge window [io size 0x1000]: failed to assign May 14 05:06:32.752552 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 14 05:06:32.752601 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 14 05:06:32.752649 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 14 05:06:32.752696 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 14 05:06:32.752744 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 14 05:06:32.752797 kernel: pci 0000:03:00.0: ROM [mem 0xfd500000-0xfd50ffff pref]: assigned May 14 05:06:32.752848 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 14 05:06:32.752897 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 14 05:06:32.752945 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 14 05:06:32.752993 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 14 05:06:32.753043 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 14 05:06:32.753092 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 14 05:06:32.753140 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 14 05:06:32.753188 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit 
pref] May 14 05:06:32.753255 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 14 05:06:32.753304 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 14 05:06:32.753355 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 14 05:06:32.753404 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 14 05:06:32.753452 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 14 05:06:32.753501 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 14 05:06:32.753549 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 14 05:06:32.753598 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 14 05:06:32.753645 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 14 05:06:32.753694 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 14 05:06:32.753745 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 14 05:06:32.753794 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 14 05:06:32.753844 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 14 05:06:32.753893 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 14 05:06:32.753941 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 14 05:06:32.753989 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 14 05:06:32.754037 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 14 05:06:32.754088 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 14 05:06:32.754136 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 14 05:06:32.754188 kernel: pci 0000:0b:00.0: ROM [mem 0xfd400000-0xfd40ffff pref]: assigned May 14 05:06:32.754247 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 14 05:06:32.754296 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 14 05:06:32.754344 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 14 05:06:32.754392 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 14 05:06:32.754440 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 14 05:06:32.754491 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 14 05:06:32.754538 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 14 05:06:32.754586 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 14 05:06:32.754635 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 14 05:06:32.754683 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 14 05:06:32.754731 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 14 05:06:32.754779 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 14 05:06:32.754827 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 14 05:06:32.754875 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 14 05:06:32.754923 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 14 05:06:32.754974 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 14 05:06:32.755021 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 14 05:06:32.755069 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 14 05:06:32.755121 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 14 05:06:32.755169 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 14 05:06:32.755232 kernel: pci 
0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 14 05:06:32.755281 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 14 05:06:32.755331 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 14 05:06:32.755380 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 14 05:06:32.755428 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 14 05:06:32.755475 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 14 05:06:32.755523 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 14 05:06:32.755572 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 14 05:06:32.755621 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 14 05:06:32.755671 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 14 05:06:32.755719 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 14 05:06:32.755768 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 14 05:06:32.755816 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 14 05:06:32.755863 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 14 05:06:32.755911 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 14 05:06:32.755961 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 14 05:06:32.756009 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 14 05:06:32.756057 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 14 05:06:32.756104 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 14 05:06:32.756156 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 14 05:06:32.756215 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 14 05:06:32.756267 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 14 05:06:32.756315 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 14 05:06:32.756366 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 14 05:06:32.756414 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 14 05:06:32.756463 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 14 05:06:32.756510 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 14 05:06:32.756561 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 14 05:06:32.756610 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 14 05:06:32.756657 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 14 05:06:32.756705 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 14 05:06:32.756753 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 14 05:06:32.756801 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 14 05:06:32.756849 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 14 05:06:32.756901 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 14 05:06:32.756949 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 14 05:06:32.756996 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 14 05:06:32.757043 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 14 05:06:32.757092 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 14 05:06:32.757140 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 14 05:06:32.757188 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] 
May 14 05:06:32.757243 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 14 05:06:32.757291 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 14 05:06:32.757341 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 14 05:06:32.757389 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 14 05:06:32.757438 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 14 05:06:32.757486 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 14 05:06:32.757533 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 14 05:06:32.757582 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 14 05:06:32.757630 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 14 05:06:32.757677 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 14 05:06:32.757728 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 14 05:06:32.757776 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 14 05:06:32.757824 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 14 05:06:32.757872 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 14 05:06:32.757919 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 14 05:06:32.757969 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 14 05:06:32.758021 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 14 05:06:32.758069 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 14 05:06:32.758121 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 14 05:06:32.758169 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 14 05:06:32.758220 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 14 05:06:32.758262 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 14 05:06:32.758304 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 14 05:06:32.758346 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 14 05:06:32.758396 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 14 05:06:32.758440 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 14 05:06:32.758484 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 14 05:06:32.758527 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 14 05:06:32.758571 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 14 05:06:32.758617 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] May 14 05:06:32.758661 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 14 05:06:32.758708 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 14 05:06:32.758758 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 14 05:06:32.758802 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 14 05:06:32.758846 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 14 05:06:32.758894 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 14 05:06:32.758938 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 14 05:06:32.758981 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 14 05:06:32.759031 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 14 05:06:32.759075 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 
14 05:06:32.759123 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 14 05:06:32.759186 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 14 05:06:32.759236 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 14 05:06:32.759286 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 14 05:06:32.759330 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 14 05:06:32.759379 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 14 05:06:32.759423 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 14 05:06:32.759471 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 14 05:06:32.759514 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 14 05:06:32.759561 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 14 05:06:32.759604 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 14 05:06:32.759652 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 14 05:06:32.759696 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 14 05:06:32.759738 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 14 05:06:32.759787 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 14 05:06:32.759830 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 14 05:06:32.759874 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 14 05:06:32.759922 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 14 05:06:32.759966 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 14 05:06:32.760009 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 14 05:06:32.760055 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 14 05:06:32.760099 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 14 05:06:32.760145 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 14 05:06:32.760190 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 14 05:06:32.760259 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 14 05:06:32.760303 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 14 05:06:32.760350 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 14 05:06:32.760393 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 14 05:06:32.760440 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 14 05:06:32.760486 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 14 05:06:32.760532 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 14 05:06:32.760575 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 14 05:06:32.760617 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 14 05:06:32.760665 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 14 05:06:32.760708 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 14 05:06:32.760751 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 14 05:06:32.760799 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 14 05:06:32.760842 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 14 05:06:32.760884 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 14 05:06:32.760930 kernel: 
pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 14 05:06:32.760974 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 14 05:06:32.761020 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 14 05:06:32.761065 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 14 05:06:32.761131 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 14 05:06:32.761175 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 14 05:06:32.761246 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 14 05:06:32.761294 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 14 05:06:32.761342 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 14 05:06:32.761387 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 14 05:06:32.761437 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 14 05:06:32.763240 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 14 05:06:32.763293 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 14 05:06:32.763355 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 14 05:06:32.763401 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 14 05:06:32.763445 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 14 05:06:32.763497 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 14 05:06:32.763540 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 14 05:06:32.763587 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 14 05:06:32.763630 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 14 05:06:32.763678 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 14 05:06:32.763721 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 14 05:06:32.763770 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 14 05:06:32.763815 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 14 05:06:32.763863 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 14 05:06:32.763907 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 14 05:06:32.763954 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 14 05:06:32.763998 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 14 05:06:32.764051 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 14 05:06:32.764062 kernel: PCI: CLS 32 bytes, default 64 May 14 05:06:32.764068 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 14 05:06:32.764074 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 14 05:06:32.764080 kernel: clocksource: Switched to clocksource tsc May 14 05:06:32.764086 kernel: Initialise system trusted keyrings May 14 05:06:32.764092 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 14 05:06:32.764097 kernel: Key type asymmetric registered May 14 05:06:32.764103 kernel: Asymmetric key parser 'x509' registered May 14 05:06:32.764110 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 14 05:06:32.764116 kernel: io scheduler mq-deadline registered May 14 05:06:32.764135 kernel: io scheduler kyber registered May 14 05:06:32.764141 kernel: io scheduler bfq 
registered May 14 05:06:32.764190 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 14 05:06:32.764710 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.764763 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 14 05:06:32.764812 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.764864 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 14 05:06:32.764912 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.764960 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 14 05:06:32.765008 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.765055 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 14 05:06:32.765107 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.765156 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 14 05:06:32.765494 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.765550 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 14 05:06:32.765599 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.765649 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 14 05:06:32.765697 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.765746 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 14 05:06:32.765794 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.765845 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 14 05:06:32.765893 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.767632 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 14 05:06:32.767691 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.767742 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 14 05:06:32.767802 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.767854 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 14 05:06:32.767902 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.767954 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 14 05:06:32.768001 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ 
May 14 05:06:32.768050 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 14 05:06:32.768097 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768151 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 14 05:06:32.768218 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768271 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 14 05:06:32.768322 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768370 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 14 05:06:32.768418 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768466 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 14 05:06:32.768512 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768561 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 14 05:06:32.768608 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768656 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 14 05:06:32.768706 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768754 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 14 05:06:32.768801 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768848 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 14 05:06:32.768896 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.768944 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 14 05:06:32.768991 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769041 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 14 05:06:32.769088 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769171 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 14 05:06:32.769231 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769280 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 14 05:06:32.769327 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769376 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 14 05:06:32.769423 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 
05:06:32.769474 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 14 05:06:32.769520 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769568 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 14 05:06:32.769615 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769663 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 14 05:06:32.769710 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769757 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 14 05:06:32.769807 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 14 05:06:32.769818 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 14 05:06:32.769824 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 05:06:32.769830 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 14 05:06:32.770130 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 14 05:06:32.770138 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 14 05:06:32.770145 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 14 05:06:32.770219 kernel: rtc_cmos 00:01: registered as rtc0 May 14 05:06:32.770277 kernel: rtc_cmos 00:01: setting system clock to 2025-05-14T05:06:32 UTC (1747199192) May 14 05:06:32.770287 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 14 05:06:32.770328 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 14 05:06:32.770337 kernel: intel_pstate: CPU model not supported May 14 05:06:32.770343 kernel: NET: Registered PF_INET6 protocol family May 14 05:06:32.770350 kernel: Segment Routing with IPv6 May 14 05:06:32.770356 kernel: In-situ OAM (IOAM) with IPv6 May 14 05:06:32.770361 kernel: NET: Registered PF_PACKET protocol family May 14 05:06:32.770369 kernel: Key type dns_resolver registered May 14 05:06:32.770376 kernel: IPI shorthand broadcast: enabled May 14 05:06:32.770381 kernel: sched_clock: Marking stable (2408215475, 170948055)->(2592528274, -13364744) May 14 05:06:32.770387 kernel: registered taskstats version 1 May 14 05:06:32.770393 kernel: Loading compiled-in X.509 certificates May 14 05:06:32.770399 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: de56839f264dfa1264ece2be0efda2f53967cc2a' May 14 05:06:32.770405 kernel: Demotion targets for Node 0: null May 14 05:06:32.770411 kernel: Key type .fscrypt registered May 14 05:06:32.770417 kernel: Key type fscrypt-provisioning registered May 14 05:06:32.770424 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 05:06:32.770431 kernel: ima: Allocated hash algorithm: sha1 May 14 05:06:32.770436 kernel: ima: No architecture policies found May 14 05:06:32.770443 kernel: clk: Disabling unused clocks May 14 05:06:32.770448 kernel: Warning: unable to open an initial console. 
May 14 05:06:32.770455 kernel: Freeing unused kernel image (initmem) memory: 54416K May 14 05:06:32.770462 kernel: Write protecting the kernel read-only data: 24576k May 14 05:06:32.770468 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 14 05:06:32.770475 kernel: Run /init as init process May 14 05:06:32.770482 kernel: with arguments: May 14 05:06:32.770488 kernel: /init May 14 05:06:32.770493 kernel: with environment: May 14 05:06:32.770499 kernel: HOME=/ May 14 05:06:32.770505 kernel: TERM=linux May 14 05:06:32.770511 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 05:06:32.770517 systemd[1]: Successfully made /usr/ read-only. May 14 05:06:32.770526 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 05:06:32.770533 systemd[1]: Detected virtualization vmware. May 14 05:06:32.770540 systemd[1]: Detected architecture x86-64. May 14 05:06:32.770545 systemd[1]: Running in initrd. May 14 05:06:32.770551 systemd[1]: No hostname configured, using default hostname. May 14 05:06:32.770558 systemd[1]: Hostname set to . May 14 05:06:32.770564 systemd[1]: Initializing machine ID from random generator. May 14 05:06:32.770570 systemd[1]: Queued start job for default target initrd.target. May 14 05:06:32.770576 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 05:06:32.770583 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 05:06:32.770590 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 05:06:32.770596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 05:06:32.770602 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 05:06:32.770609 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 05:06:32.770616 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 05:06:32.770623 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 05:06:32.770630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 05:06:32.770636 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 05:06:32.770642 systemd[1]: Reached target paths.target - Path Units. May 14 05:06:32.770648 systemd[1]: Reached target slices.target - Slice Units. May 14 05:06:32.770654 systemd[1]: Reached target swap.target - Swaps. May 14 05:06:32.770660 systemd[1]: Reached target timers.target - Timer Units. May 14 05:06:32.770666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 05:06:32.770672 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 05:06:32.770680 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 05:06:32.770686 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 05:06:32.770693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 14 05:06:32.770699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 05:06:32.770705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 05:06:32.770711 systemd[1]: Reached target sockets.target - Socket Units. May 14 05:06:32.770717 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 05:06:32.770723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 05:06:32.770731 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 05:06:32.770737 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 14 05:06:32.770743 systemd[1]: Starting systemd-fsck-usr.service... May 14 05:06:32.770749 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 05:06:32.770756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 05:06:32.770762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 05:06:32.770768 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 05:06:32.770776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 05:06:32.770782 systemd[1]: Finished systemd-fsck-usr.service. May 14 05:06:32.770788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 05:06:32.770805 systemd-journald[244]: Collecting audit messages is disabled. May 14 05:06:32.770823 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 05:06:32.770829 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 05:06:32.770835 kernel: Bridge firewalling registered May 14 05:06:32.770841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 05:06:32.770848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 05:06:32.770854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 05:06:32.770862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 05:06:32.770868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 05:06:32.770874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 05:06:32.770880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 05:06:32.770887 systemd-journald[244]: Journal started May 14 05:06:32.770901 systemd-journald[244]: Runtime Journal (/run/log/journal/92d34e1fbe794a01b050b459a76bd81e) is 4.8M, max 38.8M, 34M free. May 14 05:06:32.716393 systemd-modules-load[245]: Inserted module 'overlay' May 14 05:06:32.744805 systemd-modules-load[245]: Inserted module 'br_netfilter' May 14 05:06:32.773213 systemd[1]: Started systemd-journald.service - Journal Service. May 14 05:06:32.775060 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 05:06:32.775789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 05:06:32.776289 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 14 05:06:32.783233 systemd-tmpfiles[279]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 05:06:32.785150 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 05:06:32.786109 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 05:06:32.789000 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=bd5d20a479abde3485dc2e7b97a54e804895b9926289ae86f84794bef32a40f3 May 14 05:06:32.814399 systemd-resolved[288]: Positive Trust Anchors: May 14 05:06:32.814619 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 05:06:32.814642 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 05:06:32.817567 systemd-resolved[288]: Defaulting to hostname 'linux'. May 14 05:06:32.818244 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 05:06:32.818390 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 05:06:32.836205 kernel: SCSI subsystem initialized May 14 05:06:32.842205 kernel: Loading iSCSI transport class v2.0-870. May 14 05:06:32.849213 kernel: iscsi: registered transport (tcp) May 14 05:06:32.862214 kernel: iscsi: registered transport (qla4xxx) May 14 05:06:32.862232 kernel: QLogic iSCSI HBA Driver May 14 05:06:32.872031 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 05:06:32.881592 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 05:06:32.881909 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 05:06:32.903846 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 05:06:32.905087 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 05:06:32.941207 kernel: raid6: avx2x4 gen() 49299 MB/s May 14 05:06:32.958206 kernel: raid6: avx2x2 gen() 54394 MB/s May 14 05:06:32.975314 kernel: raid6: avx2x1 gen() 45777 MB/s May 14 05:06:32.975330 kernel: raid6: using algorithm avx2x2 gen() 54394 MB/s May 14 05:06:32.993317 kernel: raid6: .... xor() 32757 MB/s, rmw enabled May 14 05:06:32.993332 kernel: raid6: using avx2x2 recovery algorithm May 14 05:06:33.006207 kernel: xor: automatically using best checksumming function avx May 14 05:06:33.102212 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 05:06:33.105830 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 05:06:33.106708 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 14 05:06:33.125705 systemd-udevd[491]: Using default interface naming scheme 'v255'. May 14 05:06:33.129191 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 05:06:33.130181 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 05:06:33.145217 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation May 14 05:06:33.157928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 05:06:33.158665 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 05:06:33.224004 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 05:06:33.225034 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 05:06:33.293465 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 14 05:06:33.293501 kernel: vmw_pvscsi: using 64bit dma May 14 05:06:33.293509 kernel: vmw_pvscsi: max_id: 16 May 14 05:06:33.293517 kernel: vmw_pvscsi: setting ring_pages to 8 May 14 05:06:33.302453 kernel: vmw_pvscsi: enabling reqCallThreshold May 14 05:06:33.302476 kernel: vmw_pvscsi: driver-based request coalescing enabled May 14 05:06:33.302485 kernel: vmw_pvscsi: using MSI-X May 14 05:06:33.303206 kernel: libata version 3.00 loaded. May 14 05:06:33.306206 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 14 05:06:33.308543 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 May 14 05:06:33.309672 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 14 05:06:33.317131 kernel: VMware vmxnet3 virtual NIC driver - version 1.9.0.0-k-NAPI May 14 05:06:33.317147 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 14 05:06:33.327408 kernel: ata_piix 0000:00:07.1: version 2.13 May 14 05:06:33.327496 kernel: scsi host1: ata_piix May 14 05:06:33.327563 kernel: scsi host2: ata_piix May 14 05:06:33.327624 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 lpm-pol 0 May 14 05:06:33.327633 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 lpm-pol 0 May 14 05:06:33.327640 kernel: cryptd: max_cpu_qlen set to 1000 May 14 05:06:33.327647 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 14 05:06:33.329635 (udev-worker)[536]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 14 05:06:33.333465 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 05:06:33.333559 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 05:06:33.333812 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 05:06:33.335325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 05:06:33.338632 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 14 05:06:33.343748 kernel: sd 0:0:0:0: [sda] Write Protect is off May 14 05:06:33.343818 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 May 14 05:06:33.343879 kernel: sd 0:0:0:0: [sda] Cache data unavailable May 14 05:06:33.343937 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through May 14 05:06:33.343995 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 05:06:33.344006 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 14 05:06:33.356622 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 05:06:33.485215 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 14 05:06:33.491209 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 May 14 05:06:33.502227 kernel: AES CTR mode by8 optimization enabled May 14 05:06:33.504954 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2 May 14 05:06:33.515239 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 14 05:06:33.528250 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 14 05:06:33.537448 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 05:06:33.537460 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 14 05:06:33.552588 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. May 14 05:06:33.557926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. May 14 05:06:33.563248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 14 05:06:33.567280 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. May 14 05:06:33.567402 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. May 14 05:06:33.568050 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 05:06:33.611209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 05:06:33.618202 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 05:06:33.761799 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 05:06:33.762431 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 05:06:33.762787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 05:06:33.763121 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 05:06:33.764052 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 05:06:33.782167 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 05:06:34.626453 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 05:06:34.626490 disk-uuid[644]: The operation has completed successfully. May 14 05:06:34.671060 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 05:06:34.671125 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 05:06:34.681604 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 05:06:34.693842 sh[674]: Success May 14 05:06:34.705263 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 05:06:34.705286 kernel: device-mapper: uevent: version 1.0.3 May 14 05:06:34.706686 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 05:06:34.713208 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" May 14 05:06:34.758581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 05:06:34.759886 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 05:06:34.770298 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 14 05:06:34.783203 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 05:06:34.783236 kernel: BTRFS: device fsid 522ba959-9153-4a92-926e-3277bc1060e7 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (686) May 14 05:06:34.784696 kernel: BTRFS info (device dm-0): first mount of filesystem 522ba959-9153-4a92-926e-3277bc1060e7 May 14 05:06:34.784714 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 05:06:34.786306 kernel: BTRFS info (device dm-0): using free-space-tree May 14 05:06:34.794103 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 05:06:34.794285 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 05:06:34.794829 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... May 14 05:06:34.796247 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 05:06:34.821134 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (709) May 14 05:06:34.821166 kernel: BTRFS info (device sda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 05:06:34.821175 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 05:06:34.823076 kernel: BTRFS info (device sda6): using free-space-tree May 14 05:06:34.832237 kernel: BTRFS info (device sda6): last unmount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 05:06:34.834785 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 05:06:34.836265 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 05:06:34.871659 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 14 05:06:34.872523 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 05:06:34.953673 ignition[728]: Ignition 2.21.0 May 14 05:06:34.953680 ignition[728]: Stage: fetch-offline May 14 05:06:34.953695 ignition[728]: no configs at "/usr/lib/ignition/base.d" May 14 05:06:34.953701 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 14 05:06:34.953744 ignition[728]: parsed url from cmdline: "" May 14 05:06:34.953745 ignition[728]: no config URL provided May 14 05:06:34.953748 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" May 14 05:06:34.953752 ignition[728]: no config at "/usr/lib/ignition/user.ign" May 14 05:06:34.954104 ignition[728]: config successfully fetched May 14 05:06:34.954120 ignition[728]: parsing config with SHA512: 97ca18288acc2ce73a09fe9f367560358ac2679515d8ea0de533963d3fb78b1696d0c8e049576d904eb5232ecbaeebc6a61f310e23712b1ebf9007500061d652 May 14 05:06:34.956384 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 05:06:34.957573 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 05:06:34.958528 unknown[728]: fetched base config from "system" May 14 05:06:34.958867 ignition[728]: fetch-offline: fetch-offline passed May 14 05:06:34.958534 unknown[728]: fetched user config from "vmware" May 14 05:06:34.958902 ignition[728]: Ignition finished successfully May 14 05:06:34.960251 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 14 05:06:34.978512 systemd-networkd[865]: lo: Link UP May 14 05:06:34.978520 systemd-networkd[865]: lo: Gained carrier May 14 05:06:34.979343 systemd-networkd[865]: Enumeration completed May 14 05:06:34.979604 systemd-networkd[865]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 14 05:06:34.979837 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 05:06:34.983342 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 14 05:06:34.983452 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 14 05:06:34.979979 systemd[1]: Reached target network.target - Network. May 14 05:06:34.980070 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 05:06:34.980682 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 05:06:34.983044 systemd-networkd[865]: ens192: Link UP May 14 05:06:34.983046 systemd-networkd[865]: ens192: Gained carrier May 14 05:06:34.998128 ignition[869]: Ignition 2.21.0 May 14 05:06:34.998137 ignition[869]: Stage: kargs May 14 05:06:34.998254 ignition[869]: no configs at "/usr/lib/ignition/base.d" May 14 05:06:34.998260 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 14 05:06:34.999075 ignition[869]: kargs: kargs passed May 14 05:06:34.999214 ignition[869]: Ignition finished successfully May 14 05:06:35.000482 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 05:06:35.001311 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 05:06:35.016755 ignition[877]: Ignition 2.21.0 May 14 05:06:35.016765 ignition[877]: Stage: disks May 14 05:06:35.016846 ignition[877]: no configs at "/usr/lib/ignition/base.d" May 14 05:06:35.016852 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 14 05:06:35.017923 ignition[877]: disks: disks passed May 14 05:06:35.017950 ignition[877]: Ignition finished successfully May 14 05:06:35.018962 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 05:06:35.019497 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 05:06:35.019745 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 05:06:35.019972 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 05:06:35.020186 systemd[1]: Reached target sysinit.target - System Initialization. May 14 05:06:35.020439 systemd[1]: Reached target basic.target - Basic System. May 14 05:06:35.021139 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 05:06:35.040770 systemd-fsck[885]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 14 05:06:35.041688 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 05:06:35.043236 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 05:06:35.111206 kernel: EXT4-fs (sda9): mounted filesystem 7fda6268-ffdc-406a-8662-dffb0e9a24fa r/w with ordered data mode. Quota mode: none. May 14 05:06:35.111377 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 05:06:35.111692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 05:06:35.112586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 05:06:35.113199 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
May 14 05:06:35.114396 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 05:06:35.114609 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 05:06:35.114819 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 05:06:35.123121 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 05:06:35.124256 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 05:06:35.128220 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (893) May 14 05:06:35.130627 kernel: BTRFS info (device sda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 05:06:35.130643 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 05:06:35.130652 kernel: BTRFS info (device sda6): using free-space-tree May 14 05:06:35.136279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 05:06:35.153642 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory May 14 05:06:35.156923 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory May 14 05:06:35.158950 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory May 14 05:06:35.160854 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory May 14 05:06:35.249926 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 05:06:35.250751 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 05:06:35.252254 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 05:06:35.259219 kernel: BTRFS info (device sda6): last unmount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 05:06:35.273514 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 05:06:35.275113 ignition[1006]: INFO : Ignition 2.21.0 May 14 05:06:35.275113 ignition[1006]: INFO : Stage: mount May 14 05:06:35.275442 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 05:06:35.275442 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 14 05:06:35.276072 ignition[1006]: INFO : mount: mount passed May 14 05:06:35.276072 ignition[1006]: INFO : Ignition finished successfully May 14 05:06:35.276804 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 05:06:35.277739 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 05:06:35.782605 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 05:06:35.783521 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 05:06:35.799207 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 (8:6) scanned by mount (1020) May 14 05:06:35.801352 kernel: BTRFS info (device sda6): first mount of filesystem 27ac52bc-c86c-4e09-9b91-c3f9e8d3f2a0 May 14 05:06:35.801367 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 05:06:35.801378 kernel: BTRFS info (device sda6): using free-space-tree May 14 05:06:35.804834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 05:06:35.821928 ignition[1037]: INFO : Ignition 2.21.0 May 14 05:06:35.821928 ignition[1037]: INFO : Stage: files May 14 05:06:35.822294 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 05:06:35.822294 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 14 05:06:35.822871 ignition[1037]: DEBUG : files: compiled without relabeling support, skipping May 14 05:06:35.835575 ignition[1037]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 05:06:35.835575 ignition[1037]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 05:06:35.858757 ignition[1037]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 05:06:35.859122 ignition[1037]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 05:06:35.859538 unknown[1037]: wrote ssh authorized keys file for user: core May 14 05:06:35.859888 ignition[1037]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 05:06:35.862448 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 05:06:35.862448 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 14 05:06:36.038589 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 05:06:36.242246 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 14 05:06:36.242565 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 05:06:36.242565 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 14 05:06:36.705297 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 14 05:06:36.785747 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 14 05:06:36.786003 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 14 05:06:36.786003 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 14 05:06:36.786003 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 05:06:36.786003 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 05:06:36.786003 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 05:06:36.786874 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 05:06:36.786874 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 05:06:36.786874 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 
05:06:36.787369 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 05:06:36.787369 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 05:06:36.787369 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 05:06:36.789512 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 05:06:36.789732 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 05:06:36.789732 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 14 05:06:37.008393 systemd-networkd[865]: ens192: Gained IPv6LL May 14 05:06:37.175359 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 14 05:06:37.553817 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 14 05:06:37.554239 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 14 05:06:37.555098 ignition[1037]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 14 05:06:37.555098 ignition[1037]: INFO : files: op(d): [started] processing unit "prepare-helm.service" May 14 05:06:37.555770 ignition[1037]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 05:06:37.556104 ignition[1037]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 05:06:37.556104 ignition[1037]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" May 14 05:06:37.556646 ignition[1037]: INFO : files: op(f): [started] processing unit "coreos-metadata.service" May 14 05:06:37.556646 ignition[1037]: INFO : files: op(f): op(10): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 05:06:37.556646 ignition[1037]: INFO : files: op(f): op(10): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 05:06:37.556646 ignition[1037]: INFO : files: op(f): [finished] processing unit "coreos-metadata.service" May 14 05:06:37.556646 ignition[1037]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 14 05:06:37.578702 ignition[1037]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 05:06:37.580689 ignition[1037]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 05:06:37.580977 ignition[1037]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 14 05:06:37.581303 
ignition[1037]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" May 14 05:06:37.581303 ignition[1037]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" May 14 05:06:37.582326 ignition[1037]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 05:06:37.582326 ignition[1037]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 05:06:37.582326 ignition[1037]: INFO : files: files passed May 14 05:06:37.582326 ignition[1037]: INFO : Ignition finished successfully May 14 05:06:37.583036 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 05:06:37.583867 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 05:06:37.585251 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 05:06:37.591250 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 05:06:37.591312 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 05:06:37.594777 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 05:06:37.594777 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 05:06:37.595742 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 05:06:37.596416 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 05:06:37.596697 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 05:06:37.597443 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 05:06:37.625304 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 05:06:37.625371 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 05:06:37.625629 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 05:06:37.625750 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 05:06:37.625955 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 05:06:37.626388 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 05:06:37.649496 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 05:06:37.650270 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 05:06:37.660840 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 05:06:37.661122 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 05:06:37.661516 systemd[1]: Stopped target timers.target - Timer Units. May 14 05:06:37.661785 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 05:06:37.661955 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 05:06:37.662360 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 05:06:37.662629 systemd[1]: Stopped target basic.target - Basic System. May 14 05:06:37.662881 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
May 14 05:06:37.663160 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 05:06:37.663443 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 05:06:37.663580 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 05:06:37.663719 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 05:06:37.663850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 05:06:37.664003 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 05:06:37.664176 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 05:06:37.664314 systemd[1]: Stopped target swap.target - Swaps. May 14 05:06:37.664417 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 05:06:37.664483 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 05:06:37.664671 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 05:06:37.664810 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 05:06:37.664932 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 05:06:37.665232 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 05:06:37.665390 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 05:06:37.665454 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 05:06:37.665813 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 05:06:37.665877 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 05:06:37.666136 systemd[1]: Stopped target paths.target - Path Units. May 14 05:06:37.666260 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 05:06:37.671296 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 05:06:37.671471 systemd[1]: Stopped target slices.target - Slice Units. May 14 05:06:37.671662 systemd[1]: Stopped target sockets.target - Socket Units. May 14 05:06:37.671839 systemd[1]: iscsid.socket: Deactivated successfully. May 14 05:06:37.671906 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 05:06:37.672127 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 05:06:37.672170 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 05:06:37.672428 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 05:06:37.672511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 05:06:37.672729 systemd[1]: ignition-files.service: Deactivated successfully. May 14 05:06:37.672806 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 05:06:37.673459 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 05:06:37.673556 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 05:06:37.673639 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 05:06:37.675292 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 05:06:37.675394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 05:06:37.675462 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 05:06:37.675644 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 14 05:06:37.675722 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 05:06:37.678544 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 05:06:37.689236 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 05:06:37.695445 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 05:06:37.702047 ignition[1093]: INFO : Ignition 2.21.0 May 14 05:06:37.702047 ignition[1093]: INFO : Stage: umount May 14 05:06:37.702047 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 05:06:37.702047 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 14 05:06:37.702047 ignition[1093]: INFO : umount: umount passed May 14 05:06:37.702047 ignition[1093]: INFO : Ignition finished successfully May 14 05:06:37.703492 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 05:06:37.703559 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 05:06:37.703787 systemd[1]: Stopped target network.target - Network. May 14 05:06:37.703895 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 05:06:37.703920 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 05:06:37.704059 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 05:06:37.704079 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 05:06:37.704219 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 05:06:37.704240 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 05:06:37.704389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 05:06:37.704408 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 05:06:37.704625 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 05:06:37.704778 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 05:06:37.709222 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 05:06:37.709292 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 05:06:37.710587 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 05:06:37.710711 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 05:06:37.710735 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 05:06:37.711468 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 05:06:37.717045 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 05:06:37.717111 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 05:06:37.717787 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 05:06:37.717865 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 05:06:37.718034 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 05:06:37.718051 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 05:06:37.718637 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 05:06:37.718735 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 05:06:37.718759 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 05:06:37.718885 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. 
May 14 05:06:37.718906 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. May 14 05:06:37.719027 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 05:06:37.719047 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 05:06:37.719232 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 05:06:37.719252 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 05:06:37.719507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 05:06:37.720054 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 05:06:37.728974 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 05:06:37.729044 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 05:06:37.730540 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 05:06:37.730732 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 05:06:37.731036 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 05:06:37.731060 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 05:06:37.731628 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 05:06:37.731647 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 05:06:37.731978 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 05:06:37.732001 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 05:06:37.732288 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 05:06:37.732311 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 05:06:37.732456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 05:06:37.732480 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 05:06:37.733599 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 05:06:37.733835 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 05:06:37.733861 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 05:06:37.734335 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 05:06:37.734358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 05:06:37.734784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 05:06:37.734810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 05:06:37.744136 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 05:06:37.744357 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 05:06:37.803247 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 05:06:37.803538 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 05:06:37.803754 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 05:06:37.803851 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 05:06:37.803877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 05:06:37.804378 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 05:06:37.818963 systemd[1]: Switching root. 
May 14 05:06:37.860292 systemd-journald[244]: Journal stopped May 14 05:06:38.962252 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). May 14 05:06:38.962276 kernel: SELinux: policy capability network_peer_controls=1 May 14 05:06:38.962285 kernel: SELinux: policy capability open_perms=1 May 14 05:06:38.962291 kernel: SELinux: policy capability extended_socket_class=1 May 14 05:06:38.962296 kernel: SELinux: policy capability always_check_network=0 May 14 05:06:38.962302 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 05:06:38.962309 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 05:06:38.962314 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 05:06:38.962320 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 05:06:38.962325 kernel: SELinux: policy capability userspace_initial_context=0 May 14 05:06:38.962331 kernel: audit: type=1403 audit(1747199198.461:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 05:06:38.962338 systemd[1]: Successfully loaded SELinux policy in 30.960ms. May 14 05:06:38.962346 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.167ms. May 14 05:06:38.962353 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 05:06:38.962360 systemd[1]: Detected virtualization vmware. May 14 05:06:38.962366 systemd[1]: Detected architecture x86-64. May 14 05:06:38.962374 systemd[1]: Detected first boot. May 14 05:06:38.962381 systemd[1]: Initializing machine ID from random generator. May 14 05:06:38.962466 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc May 14 05:06:38.962477 zram_generator::config[1136]: No configuration found. May 14 05:06:38.962485 kernel: Guest personality initialized and is active May 14 05:06:38.962491 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 05:06:38.962497 kernel: Initialized host personality May 14 05:06:38.962505 kernel: NET: Registered PF_VSOCK protocol family May 14 05:06:38.962512 systemd[1]: Populated /etc with preset unit settings. May 14 05:06:38.962520 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 14 05:06:38.962527 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" May 14 05:06:38.962534 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 05:06:38.962540 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 05:06:38.962547 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 05:06:38.962555 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 05:06:38.962562 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 05:06:38.962569 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 05:06:38.962575 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 05:06:38.962582 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
May 14 05:06:38.962588 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 05:06:38.962595 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 05:06:38.962603 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 05:06:38.962610 systemd[1]: Created slice user.slice - User and Session Slice. May 14 05:06:38.962617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 05:06:38.962625 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 05:06:38.962632 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 05:06:38.962638 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 05:06:38.962645 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 05:06:38.962652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 05:06:38.962661 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 05:06:38.962668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 05:06:38.962674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 05:06:38.962681 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 05:06:38.962688 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 05:06:38.962695 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 05:06:38.962701 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 05:06:38.962708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 05:06:38.962716 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 05:06:38.962723 systemd[1]: Reached target slices.target - Slice Units. May 14 05:06:38.962730 systemd[1]: Reached target swap.target - Swaps. May 14 05:06:38.962736 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 05:06:38.962743 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 05:06:38.962751 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 05:06:38.962758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 05:06:38.962765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 05:06:38.962772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 05:06:38.962779 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 05:06:38.962786 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 05:06:38.962792 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 05:06:38.962799 systemd[1]: Mounting media.mount - External Media Directory... May 14 05:06:38.962808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:06:38.962815 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 05:06:38.962822 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
May 14 05:06:38.962829 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 05:06:38.962836 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 05:06:38.962843 systemd[1]: Reached target machines.target - Containers. May 14 05:06:38.962849 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 05:06:38.962856 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... May 14 05:06:38.962864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 05:06:38.962871 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 05:06:38.962878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 05:06:38.962885 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 05:06:38.962892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 05:06:38.962898 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 05:06:38.962905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 05:06:38.962913 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 05:06:38.962922 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 05:06:38.962929 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 05:06:38.962936 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 05:06:38.962943 systemd[1]: Stopped systemd-fsck-usr.service. May 14 05:06:38.962950 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 05:06:38.962959 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 05:06:38.962965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 05:06:38.962972 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 05:06:38.962979 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 05:06:38.962987 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 05:06:38.962995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 05:06:38.963002 systemd[1]: verity-setup.service: Deactivated successfully. May 14 05:06:38.963009 systemd[1]: Stopped verity-setup.service. May 14 05:06:38.963016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:06:38.963023 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 05:06:38.963029 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 05:06:38.963036 systemd[1]: Mounted media.mount - External Media Directory. May 14 05:06:38.963044 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 05:06:38.963051 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
May 14 05:06:38.963058 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 05:06:38.963064 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 05:06:38.963071 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 05:06:38.963078 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 05:06:38.963085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 05:06:38.963092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 05:06:38.963098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 05:06:38.963107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 05:06:38.963114 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 05:06:38.963121 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 05:06:38.963140 systemd-journald[1219]: Collecting audit messages is disabled. May 14 05:06:38.963159 kernel: fuse: init (API version 7.41) May 14 05:06:38.963167 systemd-journald[1219]: Journal started May 14 05:06:38.963182 systemd-journald[1219]: Runtime Journal (/run/log/journal/ee9d86c74f41429584b5dfd651cf59aa) is 4.8M, max 38.8M, 34M free. May 14 05:06:38.800884 systemd[1]: Queued start job for default target multi-user.target. May 14 05:06:38.816220 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 14 05:06:38.816454 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 05:06:38.963686 jq[1206]: true May 14 05:06:38.964352 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 05:06:38.967206 systemd[1]: Started systemd-journald.service - Journal Service. May 14 05:06:38.967731 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 05:06:38.968231 kernel: loop: module loaded May 14 05:06:38.968472 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 05:06:38.968724 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 05:06:38.968821 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 05:06:38.978840 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 05:06:38.982260 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 05:06:38.985214 jq[1236]: true May 14 05:06:38.985202 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 05:06:38.985328 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 05:06:38.985346 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 05:06:38.986039 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 05:06:38.994493 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 05:06:38.994917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 05:06:38.999663 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 05:06:39.002213 kernel: ACPI: bus type drm_connector registered May 14 05:06:39.002726 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
May 14 05:06:39.002864 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 05:06:39.007097 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 05:06:39.007244 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 05:06:39.009253 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 05:06:39.013422 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 05:06:39.014890 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 05:06:39.015009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 05:06:39.015289 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 05:06:39.016369 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 05:06:39.016524 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 05:06:39.016787 systemd-journald[1219]: Time spent on flushing to /var/log/journal/ee9d86c74f41429584b5dfd651cf59aa is 58.310ms for 1758 entries. May 14 05:06:39.016787 systemd-journald[1219]: System Journal (/var/log/journal/ee9d86c74f41429584b5dfd651cf59aa) is 8M, max 584.8M, 576.8M free. May 14 05:06:39.100006 systemd-journald[1219]: Received client request to flush runtime journal. May 14 05:06:39.100042 kernel: loop0: detected capacity change from 0 to 2960 May 14 05:06:39.100057 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 05:06:39.040999 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 05:06:39.074886 ignition[1271]: Ignition 2.21.0 May 14 05:06:39.041191 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 05:06:39.081290 ignition[1271]: deleting config from guestinfo properties May 14 05:06:39.043353 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 05:06:39.063502 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 05:06:39.069988 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 05:06:39.072733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 05:06:39.102450 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 05:06:39.110305 kernel: loop1: detected capacity change from 0 to 205544 May 14 05:06:39.112914 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 05:06:39.113227 ignition[1271]: Successfully deleted config May 14 05:06:39.124006 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). May 14 05:06:39.142689 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 05:06:39.144421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 05:06:39.153246 kernel: loop2: detected capacity change from 0 to 113872 May 14 05:06:39.180423 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 05:06:39.182037 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. May 14 05:06:39.182049 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. 
May 14 05:06:39.184976 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 05:06:39.188329 kernel: loop3: detected capacity change from 0 to 146240 May 14 05:06:39.256213 kernel: loop4: detected capacity change from 0 to 2960 May 14 05:06:39.309275 kernel: loop5: detected capacity change from 0 to 205544 May 14 05:06:39.340213 kernel: loop6: detected capacity change from 0 to 113872 May 14 05:06:39.366210 kernel: loop7: detected capacity change from 0 to 146240 May 14 05:06:39.383120 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. May 14 05:06:39.383653 (sd-merge)[1311]: Merged extensions into '/usr'. May 14 05:06:39.392742 systemd[1]: Reload requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... May 14 05:06:39.392827 systemd[1]: Reloading... May 14 05:06:39.454984 zram_generator::config[1337]: No configuration found. May 14 05:06:39.541498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 05:06:39.551006 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 14 05:06:39.597538 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 05:06:39.597627 systemd[1]: Reloading finished in 204 ms. May 14 05:06:39.612884 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 05:06:39.620039 systemd[1]: Starting ensure-sysext.service... May 14 05:06:39.622632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 05:06:39.637647 systemd[1]: Reload requested from client PID 1393 ('systemctl') (unit ensure-sysext.service)... May 14 05:06:39.637657 systemd[1]: Reloading... May 14 05:06:39.648297 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 05:06:39.684269 zram_generator::config[1423]: No configuration found. May 14 05:06:39.648320 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 14 05:06:39.648477 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 05:06:39.648626 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 05:06:39.649096 systemd-tmpfiles[1394]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 05:06:39.649272 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. May 14 05:06:39.649304 systemd-tmpfiles[1394]: ACLs are not supported, ignoring. May 14 05:06:39.664127 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. May 14 05:06:39.664131 systemd-tmpfiles[1394]: Skipping /boot May 14 05:06:39.694685 systemd-tmpfiles[1394]: Detected autofs mount point /boot during canonicalization of boot. May 14 05:06:39.694692 systemd-tmpfiles[1394]: Skipping /boot May 14 05:06:39.697298 ldconfig[1261]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 14 05:06:39.745752 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 05:06:39.753290 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 14 05:06:39.796071 systemd[1]: Reloading finished in 158 ms. May 14 05:06:39.821545 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 05:06:39.821863 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 05:06:39.824511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 05:06:39.829271 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 05:06:39.830668 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 05:06:39.833251 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 05:06:39.834692 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 05:06:39.835595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 05:06:39.839569 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 05:06:39.845490 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 05:06:39.846860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:06:39.849638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 05:06:39.851037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 05:06:39.852356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 05:06:39.852499 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 05:06:39.852575 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 05:06:39.852636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:06:39.854780 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:06:39.854863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 05:06:39.854916 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 05:06:39.854969 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:06:39.856567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 14 05:06:39.860623 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 05:06:39.860806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 05:06:39.860864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 05:06:39.860952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 05:06:39.865609 systemd[1]: Finished ensure-sysext.service. May 14 05:06:39.868099 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 14 05:06:39.869317 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 05:06:39.881401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 05:06:39.884288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 05:06:39.889112 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 05:06:39.890711 systemd-udevd[1486]: Using default interface naming scheme 'v255'. May 14 05:06:39.891311 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 05:06:39.891592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 05:06:39.891691 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 05:06:39.891918 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 05:06:39.892002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 05:06:39.892471 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 05:06:39.893034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 05:06:39.893781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 05:06:39.893810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 05:06:39.909590 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 05:06:39.912456 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 05:06:39.914145 augenrules[1520]: No rules May 14 05:06:39.913565 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 05:06:39.913959 systemd[1]: audit-rules.service: Deactivated successfully. May 14 05:06:39.914062 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 05:06:39.915648 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 05:06:39.928964 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 05:06:39.930877 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 05:06:39.967457 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 05:06:39.967619 systemd[1]: Reached target time-set.target - System Time Set. 
May 14 05:06:40.000631 systemd-resolved[1484]: Positive Trust Anchors: May 14 05:06:40.000639 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 05:06:40.000663 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 05:06:40.003671 systemd-resolved[1484]: Defaulting to hostname 'linux'. May 14 05:06:40.005621 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 05:06:40.005843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 05:06:40.006228 systemd[1]: Reached target sysinit.target - System Initialization. May 14 05:06:40.006380 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 05:06:40.006508 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 05:06:40.006717 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 14 05:06:40.008299 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 05:06:40.008446 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 05:06:40.008556 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 05:06:40.008672 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 05:06:40.008688 systemd[1]: Reached target paths.target - Path Units. May 14 05:06:40.008779 systemd[1]: Reached target timers.target - Timer Units. May 14 05:06:40.009432 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 05:06:40.010426 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 05:06:40.013907 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 05:06:40.014106 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 05:06:40.014234 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 05:06:40.020517 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 05:06:40.020828 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 05:06:40.021307 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 05:06:40.021719 systemd[1]: Reached target sockets.target - Socket Units. May 14 05:06:40.021819 systemd[1]: Reached target basic.target - Basic System. May 14 05:06:40.021941 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 05:06:40.021953 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 05:06:40.023326 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
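The trust-anchor dump above is systemd-resolved listing the built-in root DNSSEC key (the DS record) and the private zones it will never attempt to validate. Whether validation is actually enforced is a resolved.conf setting; the snippet below is only an illustration of where that knob lives, and the chosen value is an assumption rather than anything this log confirms.

# /etc/systemd/resolved.conf.d/dnssec.conf  (hypothetical override)
[Resolve]
# One of: yes (strict), allow-downgrade (opportunistic), no (disabled).
DNSSEC=allow-downgrade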
May 14 05:06:40.027015 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 05:06:40.029284 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 05:06:40.031439 systemd-networkd[1536]: lo: Link UP May 14 05:06:40.031443 systemd-networkd[1536]: lo: Gained carrier May 14 05:06:40.031806 systemd-networkd[1536]: Enumeration completed May 14 05:06:40.032575 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 05:06:40.032690 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 05:06:40.034839 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 14 05:06:40.039335 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 05:06:40.043575 jq[1566]: false May 14 05:06:40.044540 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 05:06:40.046421 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 05:06:40.048431 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 05:06:40.052332 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 05:06:40.052917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 05:06:40.055067 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 05:06:40.056950 systemd[1]: Starting update-engine.service - Update Engine... May 14 05:06:40.057984 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing passwd entry cache May 14 05:06:40.058459 oslogin_cache_refresh[1568]: Refreshing passwd entry cache May 14 05:06:40.060777 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting users, quitting May 14 05:06:40.060813 oslogin_cache_refresh[1568]: Failure getting users, quitting May 14 05:06:40.060850 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 14 05:06:40.060878 oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 14 05:06:40.060927 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing group entry cache May 14 05:06:40.060953 oslogin_cache_refresh[1568]: Refreshing group entry cache May 14 05:06:40.061305 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting groups, quitting May 14 05:06:40.061336 oslogin_cache_refresh[1568]: Failure getting groups, quitting May 14 05:06:40.061366 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 05:06:40.061385 oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 14 05:06:40.064959 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 05:06:40.068311 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... May 14 05:06:40.068843 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 05:06:40.070461 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
May 14 05:06:40.070711 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 05:06:40.070817 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 05:06:40.070961 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 14 05:06:40.071063 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 14 05:06:40.077252 systemd[1]: Reached target network.target - Network. May 14 05:06:40.080103 systemd[1]: Starting containerd.service - containerd container runtime... May 14 05:06:40.081364 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 05:06:40.088221 update_engine[1575]: I20250514 05:06:40.087457 1575 main.cc:92] Flatcar Update Engine starting May 14 05:06:40.090562 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 05:06:40.090899 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 05:06:40.092246 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 05:06:40.096213 jq[1577]: true May 14 05:06:40.096771 systemd[1]: motdgen.service: Deactivated successfully. May 14 05:06:40.104256 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 05:06:40.115741 extend-filesystems[1567]: Found loop4 May 14 05:06:40.115741 extend-filesystems[1567]: Found loop5 May 14 05:06:40.115741 extend-filesystems[1567]: Found loop6 May 14 05:06:40.116850 extend-filesystems[1567]: Found loop7 May 14 05:06:40.116850 extend-filesystems[1567]: Found sda May 14 05:06:40.116850 extend-filesystems[1567]: Found sda1 May 14 05:06:40.116850 extend-filesystems[1567]: Found sda2 May 14 05:06:40.116850 extend-filesystems[1567]: Found sda3 May 14 05:06:40.116850 extend-filesystems[1567]: Found usr May 14 05:06:40.116850 extend-filesystems[1567]: Found sda4 May 14 05:06:40.116850 extend-filesystems[1567]: Found sda6 May 14 05:06:40.116850 extend-filesystems[1567]: Found sda7 May 14 05:06:40.116850 extend-filesystems[1567]: Found sda9 May 14 05:06:40.116850 extend-filesystems[1567]: Found sr0 May 14 05:06:40.116376 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 05:06:40.116963 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 05:06:40.122533 jq[1594]: true May 14 05:06:40.127992 systemd-logind[1573]: New seat seat0. May 14 05:06:40.128485 systemd[1]: Started systemd-logind.service - User Login Management. May 14 05:06:40.136951 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 05:06:40.136962 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. May 14 05:06:40.139241 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... May 14 05:06:40.147629 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 05:06:40.163830 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. May 14 05:06:40.217975 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 05:06:40.217495 dbus-daemon[1563]: [system] SELinux support is enabled May 14 05:06:40.222155 tar[1585]: linux-amd64/helm May 14 05:06:40.220753 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
May 14 05:06:40.221222 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 05:06:40.221239 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 05:06:40.221373 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 05:06:40.221384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 05:06:40.224952 bash[1627]: Updated "/home/core/.ssh/authorized_keys" May 14 05:06:40.225804 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 05:06:40.226216 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 14 05:06:40.226613 unknown[1606]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath May 14 05:06:40.229087 unknown[1606]: Core dump limit set to -1 May 14 05:06:40.233228 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 05:06:40.235517 dbus-daemon[1563]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 05:06:40.238270 update_engine[1575]: I20250514 05:06:40.237552 1575 update_check_scheduler.cc:74] Next update check in 6m19s May 14 05:06:40.243964 systemd[1]: Started update-engine.service - Update Engine. May 14 05:06:40.263205 kernel: ACPI: button: Power Button [PWRF] May 14 05:06:40.266001 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 05:06:40.313914 kernel: mousedev: PS/2 mouse device common for all mice May 14 05:06:40.331434 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 14 05:06:40.331599 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 14 05:06:40.328538 systemd-networkd[1536]: ens192: Configuring with /etc/systemd/network/00-vmware.network. May 14 05:06:40.331382 systemd-networkd[1536]: ens192: Link UP May 14 05:06:40.331757 systemd-networkd[1536]: ens192: Gained carrier May 14 05:06:40.343328 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. May 14 05:06:40.398993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. May 14 05:06:40.424326 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 05:06:40.463460 locksmithd[1632]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 05:06:40.475808 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 05:06:40.498223 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! 
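The vmxnet3 interface comes up once systemd-networkd matches it against /etc/systemd/network/00-vmware.network, as logged above. A .network file of that kind typically reduces to a match on the interface plus DHCP; the following is a sketch of the common shape, not the verbatim file shipped on this host.

# /etc/systemd/network/00-vmware.network  (illustrative sketch)
[Match]
Name=ens192

[Network]
DHCP=yes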
May 14 05:06:40.568696 containerd[1604]: time="2025-05-14T05:06:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 05:06:40.570590 containerd[1604]: time="2025-05-14T05:06:40.569366770Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 05:06:40.597720 containerd[1604]: time="2025-05-14T05:06:40.597699060Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.049µs" May 14 05:06:40.597783 containerd[1604]: time="2025-05-14T05:06:40.597773574Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 05:06:40.597824 containerd[1604]: time="2025-05-14T05:06:40.597816788Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 05:06:40.597899 sshd_keygen[1626]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598790055Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598803878Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598819215Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598854425Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598861458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598964331Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598971984Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598977700Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.598982299Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.599021689Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 05:06:40.599169 containerd[1604]: time="2025-05-14T05:06:40.599130662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 05:06:40.599375 containerd[1604]: time="2025-05-14T05:06:40.599147287Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 05:06:40.599375 containerd[1604]: time="2025-05-14T05:06:40.599153434Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 05:06:40.600159 containerd[1604]: time="2025-05-14T05:06:40.600147315Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 05:06:40.600593 containerd[1604]: time="2025-05-14T05:06:40.600581744Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 05:06:40.600828 containerd[1604]: time="2025-05-14T05:06:40.600817933Z" level=info msg="metadata content store policy set" policy=shared May 14 05:06:40.620818 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 05:06:40.623432 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 05:06:40.636334 systemd[1]: issuegen.service: Deactivated successfully. May 14 05:06:40.636679 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 05:06:40.638291 containerd[1604]: time="2025-05-14T05:06:40.636813092Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 05:06:40.638291 containerd[1604]: time="2025-05-14T05:06:40.636846927Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 05:06:40.638291 containerd[1604]: time="2025-05-14T05:06:40.636856567Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 05:06:40.638291 containerd[1604]: time="2025-05-14T05:06:40.636867753Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 05:06:40.638291 containerd[1604]: time="2025-05-14T05:06:40.636876299Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638404591Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638423184Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638431782Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638438321Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638443934Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638449623Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638463561Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638529289Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 
05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638544136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638553245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638559339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638564896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638571643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638577802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 05:06:40.639285 containerd[1604]: time="2025-05-14T05:06:40.638583392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 05:06:40.639511 containerd[1604]: time="2025-05-14T05:06:40.638591368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 05:06:40.639511 containerd[1604]: time="2025-05-14T05:06:40.638597512Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 05:06:40.639511 containerd[1604]: time="2025-05-14T05:06:40.638603874Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 05:06:40.639511 containerd[1604]: time="2025-05-14T05:06:40.638645609Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 05:06:40.639511 containerd[1604]: time="2025-05-14T05:06:40.638654773Z" level=info msg="Start snapshots syncer" May 14 05:06:40.639511 containerd[1604]: time="2025-05-14T05:06:40.638670838Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 05:06:40.639587 containerd[1604]: time="2025-05-14T05:06:40.638810706Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 05:06:40.639587 containerd[1604]: time="2025-05-14T05:06:40.638838630Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 05:06:40.639893 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640478337Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640534845Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640549957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640556546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640562387Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640569469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640575000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640583535Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640602478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640609430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640615443Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640633249Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640643078Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 05:06:40.640892 containerd[1604]: time="2025-05-14T05:06:40.640648194Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640653154Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640657616Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640662664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640668092Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640677221Z" level=info msg="runtime interface created" May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640679962Z" level=info 
msg="created NRI interface" May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640684942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640691794Z" level=info msg="Connect containerd service" May 14 05:06:40.641079 containerd[1604]: time="2025-05-14T05:06:40.640709013Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 05:06:40.645442 containerd[1604]: time="2025-05-14T05:06:40.644549008Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 05:06:40.661879 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 05:06:40.664388 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 05:06:40.666364 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 05:06:40.667320 systemd[1]: Reached target getty.target - Login Prompts. May 14 05:06:40.681428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 05:06:40.690512 (udev-worker)[1551]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. May 14 05:06:40.720173 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 05:06:40.731365 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808772521Z" level=info msg="Start subscribing containerd event" May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808818322Z" level=info msg="Start recovering state" May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808891306Z" level=info msg="Start event monitor" May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808900656Z" level=info msg="Start cni network conf syncer for default" May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808905095Z" level=info msg="Start streaming server" May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808909672Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808913728Z" level=info msg="runtime interface starting up..." May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808916926Z" level=info msg="starting plugins..." May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808924740Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808937036Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.808988475Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 05:06:40.809786 containerd[1604]: time="2025-05-14T05:06:40.809032195Z" level=info msg="containerd successfully booted in 0.240674s" May 14 05:06:40.809279 systemd[1]: Started containerd.service - containerd container runtime. May 14 05:06:40.844949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
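The "failed to load cni during init" error above is containerd's CRI plugin reporting that /etc/cni/net.d is still empty at this point in boot; pod networking stays unconfigured until something (usually the cluster's network add-on, installed later) drops a conflist there. As a hedged illustration only, a minimal bridge-plus-loopback conflist has this shape; the file name, network name, and subnet below are invented for the example and are not taken from this host.

# /etc/cni/net.d/10-example.conflist  (hypothetical example)
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" } },
    { "type": "loopback" }
  ]
}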
May 14 05:06:40.907902 tar[1585]: linux-amd64/LICENSE May 14 05:06:40.907902 tar[1585]: linux-amd64/README.md May 14 05:06:40.915138 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 05:06:42.064419 systemd-networkd[1536]: ens192: Gained IPv6LL May 14 05:06:42.065038 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. May 14 05:06:42.065872 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 05:06:42.066951 systemd[1]: Reached target network-online.target - Network is Online. May 14 05:06:42.068511 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... May 14 05:06:42.082217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:06:42.085043 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 05:06:42.119638 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 05:06:42.120053 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 05:06:42.120202 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. May 14 05:06:42.121520 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 05:06:43.505605 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. May 14 05:06:43.637001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:06:43.637941 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 05:06:43.639924 systemd[1]: Startup finished in 2.461s (kernel) + 5.869s (initrd) + 5.208s (userspace) = 13.539s. May 14 05:06:43.658489 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:06:43.793381 login[1735]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 14 05:06:43.795878 login[1736]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 14 05:06:43.806063 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 05:06:43.807181 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 05:06:43.810238 systemd-logind[1573]: New session 1 of user core. May 14 05:06:43.812888 systemd-logind[1573]: New session 2 of user core. May 14 05:06:43.827524 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 05:06:43.829111 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 05:06:43.843883 (systemd)[1794]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 05:06:43.845642 systemd-logind[1573]: New session c1 of user core. May 14 05:06:43.938042 systemd[1794]: Queued start job for default target default.target. May 14 05:06:43.947947 systemd[1794]: Created slice app.slice - User Application Slice. May 14 05:06:43.947969 systemd[1794]: Reached target paths.target - Paths. May 14 05:06:43.947995 systemd[1794]: Reached target timers.target - Timers. May 14 05:06:43.948660 systemd[1794]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 05:06:43.954928 systemd[1794]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 05:06:43.955014 systemd[1794]: Reached target sockets.target - Sockets. May 14 05:06:43.955079 systemd[1794]: Reached target basic.target - Basic System. 
May 14 05:06:43.955183 systemd[1794]: Reached target default.target - Main User Target. May 14 05:06:43.955208 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 05:06:43.955285 systemd[1794]: Startup finished in 105ms. May 14 05:06:43.956273 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 05:06:43.956800 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 05:06:44.755389 kubelet[1787]: E0514 05:06:44.755346 1787 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:06:44.757243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:06:44.757408 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:06:44.757702 systemd[1]: kubelet.service: Consumed 659ms CPU time, 236.1M memory peak. May 14 05:06:54.995009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 05:06:54.996347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:06:55.440956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:06:55.449427 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:06:55.487505 kubelet[1837]: E0514 05:06:55.487452 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:06:55.489951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:06:55.490103 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:06:55.490481 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.7M memory peak. May 14 05:07:05.494978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 05:07:05.496404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:07:05.777899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:07:05.787356 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:07:05.848872 kubelet[1853]: E0514 05:07:05.848837 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:07:05.850387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:07:05.850471 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:07:05.850908 systemd[1]: kubelet.service: Consumed 77ms CPU time, 96.2M memory peak. May 14 05:07:10.323678 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
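The kubelet failure above (and the crash-loop that follows) is caused by /var/lib/kubelet/config.yaml not existing yet; on a kubeadm-style setup that file only appears after kubeadm init or kubeadm join writes it. For orientation, a minimal KubeletConfiguration has the shape below; the DNS address and domain are illustrative values, not what kubeadm would generate for this node. The cgroupDriver value matches the SystemdCgroup=true setting visible in the containerd CRI config earlier in the log.

# /var/lib/kubelet/config.yaml  (hypothetical minimal example)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local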
May 14 05:07:10.324822 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.89.65:34046.service - OpenSSH per-connection server daemon (139.178.89.65:34046). May 14 05:07:10.395500 sshd[1860]: Accepted publickey for core from 139.178.89.65 port 34046 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:07:10.396280 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:07:10.399341 systemd-logind[1573]: New session 3 of user core. May 14 05:07:10.409514 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 05:07:10.464374 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.89.65:34060.service - OpenSSH per-connection server daemon (139.178.89.65:34060). May 14 05:07:10.499233 sshd[1865]: Accepted publickey for core from 139.178.89.65 port 34060 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:07:10.504871 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:07:10.507495 systemd-logind[1573]: New session 4 of user core. May 14 05:07:10.514271 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 05:07:10.561277 sshd[1867]: Connection closed by 139.178.89.65 port 34060 May 14 05:07:10.561882 sshd-session[1865]: pam_unix(sshd:session): session closed for user core May 14 05:07:10.571243 systemd[1]: sshd@1-139.178.70.105:22-139.178.89.65:34060.service: Deactivated successfully. May 14 05:07:10.572575 systemd[1]: session-4.scope: Deactivated successfully. May 14 05:07:10.573243 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit. May 14 05:07:10.575006 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.89.65:34066.service - OpenSSH per-connection server daemon (139.178.89.65:34066). May 14 05:07:10.576433 systemd-logind[1573]: Removed session 4. May 14 05:07:10.614898 sshd[1873]: Accepted publickey for core from 139.178.89.65 port 34066 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:07:10.615875 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:07:10.619139 systemd-logind[1573]: New session 5 of user core. May 14 05:07:10.626396 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 05:07:10.673634 sshd[1875]: Connection closed by 139.178.89.65 port 34066 May 14 05:07:10.674484 sshd-session[1873]: pam_unix(sshd:session): session closed for user core May 14 05:07:10.683505 systemd[1]: sshd@2-139.178.70.105:22-139.178.89.65:34066.service: Deactivated successfully. May 14 05:07:10.684547 systemd[1]: session-5.scope: Deactivated successfully. May 14 05:07:10.685015 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. May 14 05:07:10.686320 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.89.65:34072.service - OpenSSH per-connection server daemon (139.178.89.65:34072). May 14 05:07:10.687184 systemd-logind[1573]: Removed session 5. May 14 05:07:10.728076 sshd[1881]: Accepted publickey for core from 139.178.89.65 port 34072 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:07:10.729041 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:07:10.732465 systemd-logind[1573]: New session 6 of user core. May 14 05:07:10.738360 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 05:07:10.787106 sshd[1883]: Connection closed by 139.178.89.65 port 34072 May 14 05:07:10.787568 sshd-session[1881]: pam_unix(sshd:session): session closed for user core May 14 05:07:10.794637 systemd[1]: sshd@3-139.178.70.105:22-139.178.89.65:34072.service: Deactivated successfully. May 14 05:07:10.796350 systemd[1]: session-6.scope: Deactivated successfully. May 14 05:07:10.796899 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. May 14 05:07:10.799129 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.89.65:34084.service - OpenSSH per-connection server daemon (139.178.89.65:34084). May 14 05:07:10.799594 systemd-logind[1573]: Removed session 6. May 14 05:07:10.840317 sshd[1889]: Accepted publickey for core from 139.178.89.65 port 34084 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:07:10.841096 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:07:10.843938 systemd-logind[1573]: New session 7 of user core. May 14 05:07:10.852374 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 05:07:10.941526 sudo[1892]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 05:07:10.941686 sudo[1892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:07:10.955712 sudo[1892]: pam_unix(sudo:session): session closed for user root May 14 05:07:10.956628 sshd[1891]: Connection closed by 139.178.89.65 port 34084 May 14 05:07:10.956995 sshd-session[1889]: pam_unix(sshd:session): session closed for user core May 14 05:07:10.963648 systemd[1]: sshd@4-139.178.70.105:22-139.178.89.65:34084.service: Deactivated successfully. May 14 05:07:10.965114 systemd[1]: session-7.scope: Deactivated successfully. May 14 05:07:10.965728 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. May 14 05:07:10.968177 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.89.65:34098.service - OpenSSH per-connection server daemon (139.178.89.65:34098). May 14 05:07:10.969135 systemd-logind[1573]: Removed session 7. May 14 05:07:11.008438 sshd[1898]: Accepted publickey for core from 139.178.89.65 port 34098 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:07:11.009455 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:07:11.012844 systemd-logind[1573]: New session 8 of user core. May 14 05:07:11.023360 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 05:07:11.072207 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 05:07:11.072363 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:07:11.074864 sudo[1902]: pam_unix(sudo:session): session closed for user root May 14 05:07:11.077966 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 05:07:11.078122 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:07:11.085241 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 05:07:11.120881 augenrules[1924]: No rules May 14 05:07:11.121839 systemd[1]: audit-rules.service: Deactivated successfully. May 14 05:07:11.122000 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
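audit-rules.service is restarted here after the two sudo commands above remove the shipped rule files, and augenrules reports an empty ruleset ("No rules"). Rules are kept as fragments under /etc/audit/rules.d/ and compiled by augenrules; the single fragment below is an illustrative example of the syntax (the watched path and key name are assumptions, not rules present on this host).

# /etc/audit/rules.d/10-example.rules  (hypothetical fragment)
# Watch /etc/passwd for writes and attribute changes, tagged with key "identity".
-w /etc/passwd -p wa -k identity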
May 14 05:07:11.123062 sudo[1901]: pam_unix(sudo:session): session closed for user root May 14 05:07:11.124002 sshd[1900]: Connection closed by 139.178.89.65 port 34098 May 14 05:07:11.124376 sshd-session[1898]: pam_unix(sshd:session): session closed for user core May 14 05:07:11.130810 systemd[1]: sshd@5-139.178.70.105:22-139.178.89.65:34098.service: Deactivated successfully. May 14 05:07:11.132367 systemd[1]: session-8.scope: Deactivated successfully. May 14 05:07:11.132979 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. May 14 05:07:11.134191 systemd-logind[1573]: Removed session 8. May 14 05:07:11.135300 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.89.65:34110.service - OpenSSH per-connection server daemon (139.178.89.65:34110). May 14 05:07:11.170533 sshd[1933]: Accepted publickey for core from 139.178.89.65 port 34110 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:07:11.171822 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:07:11.175396 systemd-logind[1573]: New session 9 of user core. May 14 05:07:11.189379 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 05:07:11.237041 sudo[1936]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 05:07:11.237202 sudo[1936]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:07:11.665975 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 05:07:11.679486 (dockerd)[1954]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 05:07:12.004592 dockerd[1954]: time="2025-05-14T05:07:12.004331159Z" level=info msg="Starting up" May 14 05:07:12.005907 dockerd[1954]: time="2025-05-14T05:07:12.005785636Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 05:07:12.099462 dockerd[1954]: time="2025-05-14T05:07:12.099428319Z" level=info msg="Loading containers: start." May 14 05:07:12.153213 kernel: Initializing XFRM netlink socket May 14 05:07:12.344244 systemd-timesyncd[1501]: Network configuration changed, trying to establish connection. May 14 05:07:12.387311 systemd-networkd[1536]: docker0: Link UP May 14 05:07:12.389223 dockerd[1954]: time="2025-05-14T05:07:12.389188514Z" level=info msg="Loading containers: done." May 14 05:07:12.398686 dockerd[1954]: time="2025-05-14T05:07:12.398653717Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 05:07:12.398781 dockerd[1954]: time="2025-05-14T05:07:12.398728199Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 05:07:12.398804 dockerd[1954]: time="2025-05-14T05:07:12.398796058Z" level=info msg="Initializing buildkit" May 14 05:07:12.424442 dockerd[1954]: time="2025-05-14T05:07:12.424394328Z" level=info msg="Completed buildkit initialization" May 14 05:07:12.430120 dockerd[1954]: time="2025-05-14T05:07:12.430077221Z" level=info msg="Daemon has completed initialization" May 14 05:07:12.430928 dockerd[1954]: time="2025-05-14T05:07:12.430165532Z" level=info msg="API listen on /run/docker.sock" May 14 05:07:12.431042 systemd[1]: Started docker.service - Docker Application Container Engine. 
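The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, the daemon falls back to a slower non-native diff path when building images, but the driver itself still works. Once docker.service is up, the active driver and the API socket reported in the last line can be confirmed from the CLI, for example:

# Confirm the storage driver and the socket the daemon says it is listening on.
docker info --format '{{.Driver}}'        # expected: overlay2
ls -l /run/docker.sock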
May 14 05:08:33.663962 systemd-resolved[1484]: Clock change detected. Flushing caches. May 14 05:08:33.664424 systemd-timesyncd[1501]: Contacted time server 23.150.41.122:123 (2.flatcar.pool.ntp.org). May 14 05:08:33.664459 systemd-timesyncd[1501]: Initial clock synchronization to Wed 2025-05-14 05:08:33.663913 UTC. May 14 05:08:35.208354 containerd[1604]: time="2025-05-14T05:08:35.208319910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 05:08:35.892133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530728429.mount: Deactivated successfully. May 14 05:08:36.990782 containerd[1604]: time="2025-05-14T05:08:36.990750910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:36.991164 containerd[1604]: time="2025-05-14T05:08:36.991099462Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960987" May 14 05:08:36.992733 containerd[1604]: time="2025-05-14T05:08:36.992701439Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:36.994686 containerd[1604]: time="2025-05-14T05:08:36.994049454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:36.994738 containerd[1604]: time="2025-05-14T05:08:36.994714678Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 1.786362357s" May 14 05:08:36.994738 containerd[1604]: time="2025-05-14T05:08:36.994735071Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 14 05:08:36.996171 containerd[1604]: time="2025-05-14T05:08:36.996152653Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 05:08:37.042406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 05:08:37.045814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:08:37.191525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:08:37.194280 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:08:37.224956 kubelet[2217]: E0514 05:08:37.224925 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:08:37.226670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:08:37.226766 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:08:37.227112 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95.6M memory peak. 
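The clock-change and "Contacted time server" lines at the top of this stretch show systemd-timesyncd reaching one of the Flatcar NTP pool servers and stepping the clock, which is why systemd-resolved flushes its caches. The server list is a timesyncd.conf setting; the override below is a sketch only, and the explicit server names are assumptions derived from the pool name seen in the log rather than this host's actual configuration.

# /etc/systemd/timesyncd.conf.d/ntp.conf  (hypothetical override)
[Time]
NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org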
May 14 05:08:38.407202 containerd[1604]: time="2025-05-14T05:08:38.407018522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:38.412410 containerd[1604]: time="2025-05-14T05:08:38.412387005Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" May 14 05:08:38.415091 containerd[1604]: time="2025-05-14T05:08:38.415059786Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:38.425488 containerd[1604]: time="2025-05-14T05:08:38.425455230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:38.426208 containerd[1604]: time="2025-05-14T05:08:38.425856953Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 1.429684922s" May 14 05:08:38.426208 containerd[1604]: time="2025-05-14T05:08:38.425875480Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 14 05:08:38.426365 containerd[1604]: time="2025-05-14T05:08:38.426354410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 05:08:39.606696 containerd[1604]: time="2025-05-14T05:08:39.606643657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:39.607083 containerd[1604]: time="2025-05-14T05:08:39.607065908Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" May 14 05:08:39.607465 containerd[1604]: time="2025-05-14T05:08:39.607451544Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:39.608821 containerd[1604]: time="2025-05-14T05:08:39.608808340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:39.609351 containerd[1604]: time="2025-05-14T05:08:39.609334578Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.182936864s" May 14 05:08:39.609380 containerd[1604]: time="2025-05-14T05:08:39.609352846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 14 05:08:39.609692 
containerd[1604]: time="2025-05-14T05:08:39.609664801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 05:08:40.444336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594127851.mount: Deactivated successfully. May 14 05:08:40.889699 containerd[1604]: time="2025-05-14T05:08:40.889606188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:40.891989 containerd[1604]: time="2025-05-14T05:08:40.891969187Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" May 14 05:08:40.896853 containerd[1604]: time="2025-05-14T05:08:40.896830176Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:40.905691 containerd[1604]: time="2025-05-14T05:08:40.905653739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:40.906135 containerd[1604]: time="2025-05-14T05:08:40.905990681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.296267254s" May 14 05:08:40.906135 containerd[1604]: time="2025-05-14T05:08:40.906010657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 14 05:08:40.906264 containerd[1604]: time="2025-05-14T05:08:40.906252921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 05:08:41.975369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672769871.mount: Deactivated successfully. 
May 14 05:08:42.947123 containerd[1604]: time="2025-05-14T05:08:42.947091206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:42.952087 containerd[1604]: time="2025-05-14T05:08:42.952069081Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 14 05:08:42.957995 containerd[1604]: time="2025-05-14T05:08:42.957958133Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:42.960061 containerd[1604]: time="2025-05-14T05:08:42.960044797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:42.960489 containerd[1604]: time="2025-05-14T05:08:42.960475393Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.054207091s" May 14 05:08:42.960538 containerd[1604]: time="2025-05-14T05:08:42.960530574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 14 05:08:42.960894 containerd[1604]: time="2025-05-14T05:08:42.960868061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 05:08:43.385252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885939986.mount: Deactivated successfully. 
May 14 05:08:43.387203 containerd[1604]: time="2025-05-14T05:08:43.387177901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 05:08:43.387799 containerd[1604]: time="2025-05-14T05:08:43.387786470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 14 05:08:43.388192 containerd[1604]: time="2025-05-14T05:08:43.388177865Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 05:08:43.389500 containerd[1604]: time="2025-05-14T05:08:43.389484418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 05:08:43.390029 containerd[1604]: time="2025-05-14T05:08:43.390014300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 429.129353ms" May 14 05:08:43.390055 containerd[1604]: time="2025-05-14T05:08:43.390040595Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 05:08:43.390410 containerd[1604]: time="2025-05-14T05:08:43.390393185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 05:08:43.859268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079277677.mount: Deactivated successfully. May 14 05:08:46.159831 update_engine[1575]: I20250514 05:08:46.159710 1575 update_attempter.cc:509] Updating boot flags... 
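Every completed pull is emitted in the same 'Pulled image "..." ... in <duration>' form, so per-image timings can be scraped from a journal dump like this one with a small regex. A sketch under that assumption (the pattern matches the escaped quotes as they appear in these journald lines; it is not a stable containerd format guarantee):

    import re

    # Matches entries like: Pulled image \"registry.k8s.io/pause:3.10\" ... in 429.129353ms
    PULL_RE = re.compile(r'Pulled image \\"([^\\]+)\\".*? in ([0-9.]+)(ms|s)')

    def pull_durations(journal_text):
        """Return {image: seconds} for every 'Pulled image ... in <duration>' entry found."""
        out = {}
        for image, value, unit in PULL_RE.findall(journal_text):
            out[image] = float(value) / (1000.0 if unit == "ms" else 1.0)
        return out

Applied to the section above it would yield roughly 1.43s, 1.18s, 1.30s, 2.05s and 0.43s for the five images pulled so far.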
May 14 05:08:46.886757 containerd[1604]: time="2025-05-14T05:08:46.885958979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:46.887376 containerd[1604]: time="2025-05-14T05:08:46.887359342Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 14 05:08:46.887654 containerd[1604]: time="2025-05-14T05:08:46.887641209Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:46.889598 containerd[1604]: time="2025-05-14T05:08:46.889583420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:08:46.890026 containerd[1604]: time="2025-05-14T05:08:46.889946498Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.499538537s" May 14 05:08:46.890078 containerd[1604]: time="2025-05-14T05:08:46.890070288Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 14 05:08:47.292500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 05:08:47.293573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:08:48.475776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:08:48.484137 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:08:48.545757 kubelet[2389]: E0514 05:08:48.545723 2389 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:08:48.547712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:08:48.547861 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:08:48.548225 systemd[1]: kubelet.service: Consumed 94ms CPU time, 96.4M memory peak. May 14 05:08:49.630979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:08:49.631108 systemd[1]: kubelet.service: Consumed 94ms CPU time, 96.4M memory peak. May 14 05:08:49.638366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:08:49.647985 systemd[1]: Reload requested from client PID 2402 ('systemctl') (unit session-9.scope)... May 14 05:08:49.647993 systemd[1]: Reloading... May 14 05:08:49.710726 zram_generator::config[2446]: No configuration found. May 14 05:08:49.773556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
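The kubelet exits here (restart counter at 4) because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is only written during init/join, so the unit keeps crash-looping until then. A minimal pre-flight sketch; the config path comes straight from the error above, while the kubeconfig path is an assumed kubeadm default, not something this log states:

    import os

    REQUIRED = [
        "/var/lib/kubelet/config.yaml",   # path from the error in the log above
        "/etc/kubernetes/kubelet.conf",   # assumption: usual kubeadm client kubeconfig location
    ]

    def kubelet_ready_to_start():
        missing = [p for p in REQUIRED if not os.path.exists(p)]
        for p in missing:
            print(f"kubelet prerequisite missing: {p}")
        return not missing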
May 14 05:08:49.781662 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 14 05:08:49.847331 systemd[1]: Reloading finished in 199 ms. May 14 05:08:49.904667 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 05:08:49.904753 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 05:08:49.904999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:08:49.906555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:08:50.172182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:08:50.184873 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 05:08:50.214718 kubelet[2514]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:08:50.214718 kubelet[2514]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 05:08:50.214718 kubelet[2514]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:08:50.223634 kubelet[2514]: I0514 05:08:50.223599 2514 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 05:08:50.461745 kubelet[2514]: I0514 05:08:50.461647 2514 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 05:08:50.461745 kubelet[2514]: I0514 05:08:50.461686 2514 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 05:08:50.462077 kubelet[2514]: I0514 05:08:50.462062 2514 server.go:929] "Client rotation is on, will bootstrap in background" May 14 05:08:50.500743 kubelet[2514]: I0514 05:08:50.500718 2514 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 05:08:50.517704 kubelet[2514]: E0514 05:08:50.516310 2514 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:50.533423 kubelet[2514]: I0514 05:08:50.533397 2514 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 05:08:50.558178 kubelet[2514]: I0514 05:08:50.558152 2514 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 05:08:50.572980 kubelet[2514]: I0514 05:08:50.572948 2514 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 05:08:50.578329 kubelet[2514]: I0514 05:08:50.578281 2514 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 05:08:50.578467 kubelet[2514]: I0514 05:08:50.578326 2514 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 05:08:50.578539 kubelet[2514]: I0514 05:08:50.578476 2514 topology_manager.go:138] "Creating topology manager with none policy" May 14 05:08:50.578539 kubelet[2514]: I0514 05:08:50.578485 2514 container_manager_linux.go:300] "Creating device plugin manager" May 14 05:08:50.578588 kubelet[2514]: I0514 05:08:50.578576 2514 state_mem.go:36] "Initialized new in-memory state store" May 14 05:08:50.609098 kubelet[2514]: I0514 05:08:50.609055 2514 kubelet.go:408] "Attempting to sync node with API server" May 14 05:08:50.609098 kubelet[2514]: I0514 05:08:50.609089 2514 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 05:08:50.618373 kubelet[2514]: I0514 05:08:50.618331 2514 kubelet.go:314] "Adding apiserver pod source" May 14 05:08:50.618373 kubelet[2514]: I0514 05:08:50.618358 2514 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 05:08:50.623701 kubelet[2514]: W0514 05:08:50.623284 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:50.623701 kubelet[2514]: E0514 05:08:50.623313 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:50.628912 kubelet[2514]: W0514 05:08:50.628848 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:50.628912 kubelet[2514]: E0514 05:08:50.628892 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:50.629152 kubelet[2514]: I0514 05:08:50.629030 2514 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 05:08:50.632318 kubelet[2514]: I0514 05:08:50.632249 2514 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 05:08:50.634644 kubelet[2514]: W0514 05:08:50.634469 2514 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 05:08:50.634972 kubelet[2514]: I0514 05:08:50.634964 2514 server.go:1269] "Started kubelet" May 14 05:08:50.635075 kubelet[2514]: I0514 05:08:50.635054 2514 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 05:08:50.635210 kubelet[2514]: I0514 05:08:50.635185 2514 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 05:08:50.635435 kubelet[2514]: I0514 05:08:50.635427 2514 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 05:08:50.638250 kubelet[2514]: I0514 05:08:50.637999 2514 server.go:460] "Adding debug handlers to kubelet server" May 14 05:08:50.639234 kubelet[2514]: I0514 05:08:50.639216 2514 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 05:08:50.643689 kubelet[2514]: E0514 05:08:50.640170 2514 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.105:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f4c88be4bdb04 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 05:08:50.634947332 +0000 UTC m=+0.448174847,LastTimestamp:2025-05-14 05:08:50.634947332 +0000 UTC m=+0.448174847,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 05:08:50.643689 kubelet[2514]: I0514 05:08:50.642338 2514 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 05:08:50.644260 kubelet[2514]: E0514 05:08:50.644247 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" May 14 05:08:50.644322 kubelet[2514]: I0514 05:08:50.644316 2514 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 05:08:50.645566 kubelet[2514]: I0514 05:08:50.645553 2514 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 05:08:50.645674 kubelet[2514]: I0514 05:08:50.645668 2514 reconciler.go:26] "Reconciler: start to sync state" May 14 05:08:50.646007 kubelet[2514]: W0514 05:08:50.645982 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:50.646060 kubelet[2514]: E0514 05:08:50.646051 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:50.646219 kubelet[2514]: E0514 05:08:50.646201 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" May 14 05:08:50.654290 kubelet[2514]: I0514 05:08:50.654270 2514 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 05:08:50.656599 kubelet[2514]: I0514 05:08:50.656578 2514 factory.go:221] Registration of the containerd container factory successfully May 14 05:08:50.656687 kubelet[2514]: I0514 05:08:50.656673 2514 factory.go:221] Registration of the systemd container factory successfully May 14 05:08:50.657979 kubelet[2514]: E0514 05:08:50.657962 2514 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 05:08:50.666887 kubelet[2514]: I0514 05:08:50.666856 2514 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 05:08:50.668095 kubelet[2514]: I0514 05:08:50.668081 2514 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 05:08:50.668168 kubelet[2514]: I0514 05:08:50.668162 2514 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 05:08:50.668208 kubelet[2514]: I0514 05:08:50.668203 2514 kubelet.go:2321] "Starting kubelet main sync loop" May 14 05:08:50.668263 kubelet[2514]: E0514 05:08:50.668253 2514 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 05:08:50.677570 kubelet[2514]: W0514 05:08:50.677552 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:50.677739 kubelet[2514]: E0514 05:08:50.677727 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:50.678485 kubelet[2514]: I0514 05:08:50.678477 2514 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 05:08:50.678578 kubelet[2514]: I0514 05:08:50.678573 2514 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 05:08:50.678617 kubelet[2514]: I0514 05:08:50.678613 2514 state_mem.go:36] "Initialized new in-memory state store" May 14 05:08:50.679702 kubelet[2514]: I0514 05:08:50.679693 2514 policy_none.go:49] "None policy: Start" May 14 05:08:50.680098 kubelet[2514]: I0514 05:08:50.680089 2514 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 05:08:50.680177 kubelet[2514]: I0514 05:08:50.680173 2514 state_mem.go:35] "Initializing new in-memory state store" May 14 05:08:50.685280 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 05:08:50.695006 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 05:08:50.697746 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 05:08:50.704510 kubelet[2514]: I0514 05:08:50.704468 2514 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 05:08:50.704685 kubelet[2514]: I0514 05:08:50.704608 2514 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 05:08:50.704764 kubelet[2514]: I0514 05:08:50.704618 2514 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 05:08:50.705685 kubelet[2514]: I0514 05:08:50.705633 2514 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 05:08:50.706923 kubelet[2514]: E0514 05:08:50.706897 2514 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 05:08:50.776061 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 05:08:50.790333 systemd[1]: Created slice kubepods-burstable-pod9d16cd0cc746756d41811641f02831fb.slice - libcontainer container kubepods-burstable-pod9d16cd0cc746756d41811641f02831fb.slice. 
May 14 05:08:50.803299 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 05:08:50.805600 kubelet[2514]: I0514 05:08:50.805567 2514 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:08:50.805911 kubelet[2514]: E0514 05:08:50.805896 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" May 14 05:08:50.846535 kubelet[2514]: E0514 05:08:50.846506 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" May 14 05:08:50.946262 kubelet[2514]: I0514 05:08:50.946231 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:50.946355 kubelet[2514]: I0514 05:08:50.946266 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:50.946355 kubelet[2514]: I0514 05:08:50.946289 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:50.946355 kubelet[2514]: I0514 05:08:50.946308 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 05:08:50.946355 kubelet[2514]: I0514 05:08:50.946324 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d16cd0cc746756d41811641f02831fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d16cd0cc746756d41811641f02831fb\") " pod="kube-system/kube-apiserver-localhost" May 14 05:08:50.946355 kubelet[2514]: I0514 05:08:50.946337 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:50.946440 kubelet[2514]: I0514 05:08:50.946353 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:50.946440 kubelet[2514]: I0514 05:08:50.946366 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d16cd0cc746756d41811641f02831fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d16cd0cc746756d41811641f02831fb\") " pod="kube-system/kube-apiserver-localhost" May 14 05:08:50.946440 kubelet[2514]: I0514 05:08:50.946380 2514 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d16cd0cc746756d41811641f02831fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d16cd0cc746756d41811641f02831fb\") " pod="kube-system/kube-apiserver-localhost" May 14 05:08:51.007284 kubelet[2514]: I0514 05:08:51.007263 2514 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:08:51.007524 kubelet[2514]: E0514 05:08:51.007507 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" May 14 05:08:51.090315 containerd[1604]: time="2025-05-14T05:08:51.090217573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 05:08:51.102001 containerd[1604]: time="2025-05-14T05:08:51.101983620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d16cd0cc746756d41811641f02831fb,Namespace:kube-system,Attempt:0,}" May 14 05:08:51.105374 containerd[1604]: time="2025-05-14T05:08:51.105356232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 05:08:51.247577 kubelet[2514]: E0514 05:08:51.247544 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" May 14 05:08:51.331997 containerd[1604]: time="2025-05-14T05:08:51.331961519Z" level=info msg="connecting to shim 0c9aeec706ccdf9c519801911d2e716e6d8a45fa0d076256d7e9bfa39fd78cf1" address="unix:///run/containerd/s/3a960a2cad0c961b4fec41a82dff936ca40cbd2c9511b4e8ff3c31562661ac3f" namespace=k8s.io protocol=ttrpc version=3 May 14 05:08:51.333189 containerd[1604]: time="2025-05-14T05:08:51.333104686Z" level=info msg="connecting to shim e86f7b7cd53fdf3db5cae6c13f26555ed27eae5dafba6b0c35d19bfee51ad404" address="unix:///run/containerd/s/ad6e32907530784e3cfdf26b0e0ab49660f1df64cbbba40396e24165c1f02392" namespace=k8s.io protocol=ttrpc version=3 May 14 05:08:51.336399 containerd[1604]: time="2025-05-14T05:08:51.336375356Z" level=info msg="connecting to shim 51d582f431234e5f9ac62d52259fa493326a9e17f2b4f914a27508f5482b18fb" address="unix:///run/containerd/s/6b15f4fd3b30ba21f67d2df527451e1695fce98e03f292b05f9089058f69b150" namespace=k8s.io protocol=ttrpc version=3 May 14 05:08:51.410201 kubelet[2514]: I0514 05:08:51.409949 2514 kubelet_node_status.go:72] "Attempting to register 
node" node="localhost" May 14 05:08:51.410201 kubelet[2514]: E0514 05:08:51.410134 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" May 14 05:08:51.475003 kubelet[2514]: W0514 05:08:51.474960 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:51.475094 kubelet[2514]: E0514 05:08:51.475009 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:51.478828 kubelet[2514]: W0514 05:08:51.478764 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:51.478828 kubelet[2514]: E0514 05:08:51.478797 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:51.491783 systemd[1]: Started cri-containerd-0c9aeec706ccdf9c519801911d2e716e6d8a45fa0d076256d7e9bfa39fd78cf1.scope - libcontainer container 0c9aeec706ccdf9c519801911d2e716e6d8a45fa0d076256d7e9bfa39fd78cf1. May 14 05:08:51.493373 systemd[1]: Started cri-containerd-51d582f431234e5f9ac62d52259fa493326a9e17f2b4f914a27508f5482b18fb.scope - libcontainer container 51d582f431234e5f9ac62d52259fa493326a9e17f2b4f914a27508f5482b18fb. May 14 05:08:51.494986 systemd[1]: Started cri-containerd-e86f7b7cd53fdf3db5cae6c13f26555ed27eae5dafba6b0c35d19bfee51ad404.scope - libcontainer container e86f7b7cd53fdf3db5cae6c13f26555ed27eae5dafba6b0c35d19bfee51ad404. 
May 14 05:08:51.616094 containerd[1604]: time="2025-05-14T05:08:51.616068958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d582f431234e5f9ac62d52259fa493326a9e17f2b4f914a27508f5482b18fb\"" May 14 05:08:51.619323 containerd[1604]: time="2025-05-14T05:08:51.619295146Z" level=info msg="CreateContainer within sandbox \"51d582f431234e5f9ac62d52259fa493326a9e17f2b4f914a27508f5482b18fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 05:08:51.619739 containerd[1604]: time="2025-05-14T05:08:51.619560654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e86f7b7cd53fdf3db5cae6c13f26555ed27eae5dafba6b0c35d19bfee51ad404\"" May 14 05:08:51.620245 containerd[1604]: time="2025-05-14T05:08:51.620229900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d16cd0cc746756d41811641f02831fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c9aeec706ccdf9c519801911d2e716e6d8a45fa0d076256d7e9bfa39fd78cf1\"" May 14 05:08:51.623972 containerd[1604]: time="2025-05-14T05:08:51.623825329Z" level=info msg="CreateContainer within sandbox \"e86f7b7cd53fdf3db5cae6c13f26555ed27eae5dafba6b0c35d19bfee51ad404\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 05:08:51.624211 containerd[1604]: time="2025-05-14T05:08:51.624193978Z" level=info msg="CreateContainer within sandbox \"0c9aeec706ccdf9c519801911d2e716e6d8a45fa0d076256d7e9bfa39fd78cf1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 05:08:51.633246 containerd[1604]: time="2025-05-14T05:08:51.633221384Z" level=info msg="Container e80cf5ea99b9e9368ae2f400bb653d30ff80a19b9176713cbb8facf623721622: CDI devices from CRI Config.CDIDevices: []" May 14 05:08:51.633399 containerd[1604]: time="2025-05-14T05:08:51.633381677Z" level=info msg="Container 4b7e2b1e17fe1604a6519eba587b85fe4303f15bda88146ad42cd87c98d77ca9: CDI devices from CRI Config.CDIDevices: []" May 14 05:08:51.633531 containerd[1604]: time="2025-05-14T05:08:51.633521909Z" level=info msg="Container f80b6c60d24529a5f3df8ab5d3b48f99b291e120889119f14501904f1ab87b30: CDI devices from CRI Config.CDIDevices: []" May 14 05:08:51.638669 containerd[1604]: time="2025-05-14T05:08:51.638471174Z" level=info msg="CreateContainer within sandbox \"0c9aeec706ccdf9c519801911d2e716e6d8a45fa0d076256d7e9bfa39fd78cf1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f80b6c60d24529a5f3df8ab5d3b48f99b291e120889119f14501904f1ab87b30\"" May 14 05:08:51.640825 containerd[1604]: time="2025-05-14T05:08:51.640793695Z" level=info msg="StartContainer for \"f80b6c60d24529a5f3df8ab5d3b48f99b291e120889119f14501904f1ab87b30\"" May 14 05:08:51.642919 containerd[1604]: time="2025-05-14T05:08:51.642731270Z" level=info msg="CreateContainer within sandbox \"51d582f431234e5f9ac62d52259fa493326a9e17f2b4f914a27508f5482b18fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b7e2b1e17fe1604a6519eba587b85fe4303f15bda88146ad42cd87c98d77ca9\"" May 14 05:08:51.643165 containerd[1604]: time="2025-05-14T05:08:51.643150559Z" level=info msg="connecting to shim f80b6c60d24529a5f3df8ab5d3b48f99b291e120889119f14501904f1ab87b30" 
address="unix:///run/containerd/s/3a960a2cad0c961b4fec41a82dff936ca40cbd2c9511b4e8ff3c31562661ac3f" protocol=ttrpc version=3 May 14 05:08:51.644170 containerd[1604]: time="2025-05-14T05:08:51.644154678Z" level=info msg="CreateContainer within sandbox \"e86f7b7cd53fdf3db5cae6c13f26555ed27eae5dafba6b0c35d19bfee51ad404\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e80cf5ea99b9e9368ae2f400bb653d30ff80a19b9176713cbb8facf623721622\"" May 14 05:08:51.645409 containerd[1604]: time="2025-05-14T05:08:51.644327906Z" level=info msg="StartContainer for \"4b7e2b1e17fe1604a6519eba587b85fe4303f15bda88146ad42cd87c98d77ca9\"" May 14 05:08:51.645832 containerd[1604]: time="2025-05-14T05:08:51.645806737Z" level=info msg="StartContainer for \"e80cf5ea99b9e9368ae2f400bb653d30ff80a19b9176713cbb8facf623721622\"" May 14 05:08:51.646377 containerd[1604]: time="2025-05-14T05:08:51.646357529Z" level=info msg="connecting to shim e80cf5ea99b9e9368ae2f400bb653d30ff80a19b9176713cbb8facf623721622" address="unix:///run/containerd/s/ad6e32907530784e3cfdf26b0e0ab49660f1df64cbbba40396e24165c1f02392" protocol=ttrpc version=3 May 14 05:08:51.647921 containerd[1604]: time="2025-05-14T05:08:51.647754886Z" level=info msg="connecting to shim 4b7e2b1e17fe1604a6519eba587b85fe4303f15bda88146ad42cd87c98d77ca9" address="unix:///run/containerd/s/6b15f4fd3b30ba21f67d2df527451e1695fce98e03f292b05f9089058f69b150" protocol=ttrpc version=3 May 14 05:08:51.657828 systemd[1]: Started cri-containerd-f80b6c60d24529a5f3df8ab5d3b48f99b291e120889119f14501904f1ab87b30.scope - libcontainer container f80b6c60d24529a5f3df8ab5d3b48f99b291e120889119f14501904f1ab87b30. May 14 05:08:51.665828 systemd[1]: Started cri-containerd-e80cf5ea99b9e9368ae2f400bb653d30ff80a19b9176713cbb8facf623721622.scope - libcontainer container e80cf5ea99b9e9368ae2f400bb653d30ff80a19b9176713cbb8facf623721622. May 14 05:08:51.669303 systemd[1]: Started cri-containerd-4b7e2b1e17fe1604a6519eba587b85fe4303f15bda88146ad42cd87c98d77ca9.scope - libcontainer container 4b7e2b1e17fe1604a6519eba587b85fe4303f15bda88146ad42cd87c98d77ca9. 
May 14 05:08:51.722058 containerd[1604]: time="2025-05-14T05:08:51.722023493Z" level=info msg="StartContainer for \"f80b6c60d24529a5f3df8ab5d3b48f99b291e120889119f14501904f1ab87b30\" returns successfully" May 14 05:08:51.723004 containerd[1604]: time="2025-05-14T05:08:51.722980821Z" level=info msg="StartContainer for \"4b7e2b1e17fe1604a6519eba587b85fe4303f15bda88146ad42cd87c98d77ca9\" returns successfully" May 14 05:08:51.744990 containerd[1604]: time="2025-05-14T05:08:51.744955872Z" level=info msg="StartContainer for \"e80cf5ea99b9e9368ae2f400bb653d30ff80a19b9176713cbb8facf623721622\" returns successfully" May 14 05:08:51.894790 kubelet[2514]: W0514 05:08:51.894750 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:51.894897 kubelet[2514]: E0514 05:08:51.894797 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:51.918330 kubelet[2514]: W0514 05:08:51.918248 2514 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused May 14 05:08:51.918330 kubelet[2514]: E0514 05:08:51.918288 2514 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.105:6443: connect: connection refused" logger="UnhandledError" May 14 05:08:52.048395 kubelet[2514]: E0514 05:08:52.048364 2514 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" May 14 05:08:52.211599 kubelet[2514]: I0514 05:08:52.211313 2514 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:08:52.211599 kubelet[2514]: E0514 05:08:52.211512 2514 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" May 14 05:08:53.651207 kubelet[2514]: E0514 05:08:53.651176 2514 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 05:08:53.678562 kubelet[2514]: E0514 05:08:53.678519 2514 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 14 05:08:53.814501 kubelet[2514]: I0514 05:08:53.814466 2514 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:08:53.822970 kubelet[2514]: I0514 05:08:53.822841 2514 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 05:08:53.822970 kubelet[2514]: E0514 
05:08:53.822866 2514 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 14 05:08:53.830539 kubelet[2514]: E0514 05:08:53.830517 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:08:53.931618 kubelet[2514]: E0514 05:08:53.931545 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:08:54.032246 kubelet[2514]: E0514 05:08:54.032216 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:08:54.132447 kubelet[2514]: E0514 05:08:54.132421 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:08:54.233228 kubelet[2514]: E0514 05:08:54.233147 2514 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:08:54.626413 kubelet[2514]: I0514 05:08:54.626391 2514 apiserver.go:52] "Watching apiserver" May 14 05:08:54.645757 kubelet[2514]: I0514 05:08:54.645725 2514 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 05:08:55.291697 systemd[1]: Reload requested from client PID 2778 ('systemctl') (unit session-9.scope)... May 14 05:08:55.291713 systemd[1]: Reloading... May 14 05:08:55.340709 zram_generator::config[2821]: No configuration found. May 14 05:08:55.411643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 05:08:55.419991 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") May 14 05:08:55.495115 systemd[1]: Reloading finished in 203 ms. May 14 05:08:55.513198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:08:55.526101 systemd[1]: kubelet.service: Deactivated successfully. May 14 05:08:55.526330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:08:55.526409 systemd[1]: kubelet.service: Consumed 465ms CPU time, 114.8M memory peak. May 14 05:08:55.528312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:08:55.699999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:08:55.706976 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 05:08:55.794324 kubelet[2889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:08:55.794324 kubelet[2889]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 05:08:55.794324 kubelet[2889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
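Both kubelet instances in this log print the same three deprecation warnings. Two of those flags have KubeletConfiguration equivalents, while the third is simply going away; the field names below are my reading of recent upstream docs (an assumption), so verify them against the kubelet-config-file page referenced in the warning itself:

    # Deprecated flags from the warnings above -> replacement KubeletConfiguration field (assumed names).
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        "--pod-infra-container-image": None,  # no config field; the sandbox image comes from the CRI runtime
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag} -> {field or 'configure on the container runtime instead'}")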
May 14 05:08:55.794541 kubelet[2889]: I0514 05:08:55.794355 2889 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 05:08:55.798716 kubelet[2889]: I0514 05:08:55.798469 2889 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 05:08:55.798716 kubelet[2889]: I0514 05:08:55.798481 2889 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 05:08:55.798716 kubelet[2889]: I0514 05:08:55.798604 2889 server.go:929] "Client rotation is on, will bootstrap in background" May 14 05:08:55.799371 kubelet[2889]: I0514 05:08:55.799359 2889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 05:08:55.800412 kubelet[2889]: I0514 05:08:55.800400 2889 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 05:08:55.804699 kubelet[2889]: I0514 05:08:55.803820 2889 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 05:08:55.805233 kubelet[2889]: I0514 05:08:55.805218 2889 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 05:08:55.805285 kubelet[2889]: I0514 05:08:55.805274 2889 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 05:08:55.805351 kubelet[2889]: I0514 05:08:55.805333 2889 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 05:08:55.805467 kubelet[2889]: I0514 05:08:55.805350 2889 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 05:08:55.805522 kubelet[2889]: I0514 05:08:55.805469 2889 topology_manager.go:138] "Creating topology manager with none policy" May 14 05:08:55.805522 kubelet[2889]: I0514 05:08:55.805474 2889 container_manager_linux.go:300] "Creating device plugin manager" 
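The Node Config dump above includes the hard-eviction thresholds the kubelet will enforce. Restated as data plus a trivial check helper; the signals and values are copied from the log, the helper itself is only an illustration of the LessThan semantics, not kubelet code:

    # Hard eviction thresholds as logged in the Node Config above.
    HARD_EVICTION = {
        "memory.available":   {"quantity_bytes": 100 * 1024**2},  # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "nodefs.inodesFree":  {"percentage": 0.05},
        "imagefs.available":  {"percentage": 0.15},
        "imagefs.inodesFree": {"percentage": 0.05},
    }

    def breaches(signal, available, capacity=None):
        """True if `available` falls below the threshold for `signal` (the LessThan operator above)."""
        t = HARD_EVICTION[signal]
        if "quantity_bytes" in t:
            return available < t["quantity_bytes"]
        return capacity is not None and available < t["percentage"] * capacity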
May 14 05:08:55.805522 kubelet[2889]: I0514 05:08:55.805491 2889 state_mem.go:36] "Initialized new in-memory state store" May 14 05:08:55.805571 kubelet[2889]: I0514 05:08:55.805549 2889 kubelet.go:408] "Attempting to sync node with API server" May 14 05:08:55.805571 kubelet[2889]: I0514 05:08:55.805556 2889 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 05:08:55.805603 kubelet[2889]: I0514 05:08:55.805573 2889 kubelet.go:314] "Adding apiserver pod source" May 14 05:08:55.805603 kubelet[2889]: I0514 05:08:55.805582 2889 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 05:08:55.805819 kubelet[2889]: I0514 05:08:55.805805 2889 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 05:08:55.806029 kubelet[2889]: I0514 05:08:55.806016 2889 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 05:08:55.806235 kubelet[2889]: I0514 05:08:55.806224 2889 server.go:1269] "Started kubelet" May 14 05:08:55.807110 kubelet[2889]: I0514 05:08:55.807098 2889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 05:08:55.813724 kubelet[2889]: I0514 05:08:55.811755 2889 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 05:08:55.813724 kubelet[2889]: I0514 05:08:55.812307 2889 server.go:460] "Adding debug handlers to kubelet server" May 14 05:08:55.813724 kubelet[2889]: I0514 05:08:55.812720 2889 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 05:08:55.813724 kubelet[2889]: I0514 05:08:55.812820 2889 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 05:08:55.813724 kubelet[2889]: I0514 05:08:55.813078 2889 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 05:08:55.814504 kubelet[2889]: I0514 05:08:55.814485 2889 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 05:08:55.814599 kubelet[2889]: E0514 05:08:55.814590 2889 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:08:55.816706 kubelet[2889]: I0514 05:08:55.815535 2889 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 05:08:55.816706 kubelet[2889]: I0514 05:08:55.815596 2889 reconciler.go:26] "Reconciler: start to sync state" May 14 05:08:55.816755 kubelet[2889]: I0514 05:08:55.816734 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 05:08:55.817887 kubelet[2889]: I0514 05:08:55.817874 2889 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 05:08:55.817922 kubelet[2889]: I0514 05:08:55.817905 2889 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 05:08:55.817922 kubelet[2889]: I0514 05:08:55.817915 2889 kubelet.go:2321] "Starting kubelet main sync loop" May 14 05:08:55.817953 kubelet[2889]: E0514 05:08:55.817935 2889 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 05:08:55.818765 kubelet[2889]: I0514 05:08:55.818748 2889 factory.go:221] Registration of the systemd container factory successfully May 14 05:08:55.818857 kubelet[2889]: I0514 05:08:55.818847 2889 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 05:08:55.828572 kubelet[2889]: I0514 05:08:55.828556 2889 factory.go:221] Registration of the containerd container factory successfully May 14 05:08:55.856158 kubelet[2889]: I0514 05:08:55.856146 2889 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 05:08:55.856259 kubelet[2889]: I0514 05:08:55.856252 2889 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 05:08:55.856296 kubelet[2889]: I0514 05:08:55.856292 2889 state_mem.go:36] "Initialized new in-memory state store" May 14 05:08:55.856428 kubelet[2889]: I0514 05:08:55.856422 2889 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 05:08:55.856481 kubelet[2889]: I0514 05:08:55.856461 2889 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 05:08:55.856521 kubelet[2889]: I0514 05:08:55.856517 2889 policy_none.go:49] "None policy: Start" May 14 05:08:55.856869 kubelet[2889]: I0514 05:08:55.856863 2889 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 05:08:55.856958 kubelet[2889]: I0514 05:08:55.856954 2889 state_mem.go:35] "Initializing new in-memory state store" May 14 05:08:55.857094 kubelet[2889]: I0514 05:08:55.857088 2889 state_mem.go:75] "Updated machine memory state" May 14 05:08:55.859807 kubelet[2889]: I0514 05:08:55.859798 2889 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 05:08:55.860088 kubelet[2889]: I0514 05:08:55.860081 2889 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 05:08:55.860160 kubelet[2889]: I0514 05:08:55.860142 2889 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 05:08:55.860754 kubelet[2889]: I0514 05:08:55.860720 2889 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 05:08:55.922777 kubelet[2889]: E0514 05:08:55.922727 2889 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 05:08:55.923040 kubelet[2889]: E0514 05:08:55.923015 2889 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 05:08:55.963173 kubelet[2889]: I0514 05:08:55.962319 2889 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:08:55.966383 kubelet[2889]: I0514 05:08:55.966361 2889 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 05:08:55.966536 kubelet[2889]: I0514 05:08:55.966428 2889 kubelet_node_status.go:75] "Successfully registered 
node" node="localhost" May 14 05:08:56.017128 kubelet[2889]: I0514 05:08:56.017095 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 05:08:56.017324 kubelet[2889]: I0514 05:08:56.017221 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d16cd0cc746756d41811641f02831fb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d16cd0cc746756d41811641f02831fb\") " pod="kube-system/kube-apiserver-localhost" May 14 05:08:56.017324 kubelet[2889]: I0514 05:08:56.017241 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:56.017324 kubelet[2889]: I0514 05:08:56.017253 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:56.017482 kubelet[2889]: I0514 05:08:56.017266 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:56.017482 kubelet[2889]: I0514 05:08:56.017430 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d16cd0cc746756d41811641f02831fb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d16cd0cc746756d41811641f02831fb\") " pod="kube-system/kube-apiserver-localhost" May 14 05:08:56.017482 kubelet[2889]: I0514 05:08:56.017444 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d16cd0cc746756d41811641f02831fb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d16cd0cc746756d41811641f02831fb\") " pod="kube-system/kube-apiserver-localhost" May 14 05:08:56.017482 kubelet[2889]: I0514 05:08:56.017456 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:56.017482 kubelet[2889]: I0514 05:08:56.017467 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:08:56.295341 sudo[2922]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 05:08:56.295724 sudo[2922]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 05:08:56.687361 sudo[2922]: pam_unix(sudo:session): session closed for user root May 14 05:08:56.810577 kubelet[2889]: I0514 05:08:56.810450 2889 apiserver.go:52] "Watching apiserver" May 14 05:08:56.815888 kubelet[2889]: I0514 05:08:56.815868 2889 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 05:08:56.899236 kubelet[2889]: I0514 05:08:56.899060 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.89904842 podStartE2EDuration="1.89904842s" podCreationTimestamp="2025-05-14 05:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:08:56.889138972 +0000 UTC m=+1.153256826" watchObservedRunningTime="2025-05-14 05:08:56.89904842 +0000 UTC m=+1.163166262" May 14 05:08:56.906843 kubelet[2889]: I0514 05:08:56.906753 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.9067447299999998 podStartE2EDuration="2.90674473s" podCreationTimestamp="2025-05-14 05:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:08:56.899566325 +0000 UTC m=+1.163684172" watchObservedRunningTime="2025-05-14 05:08:56.90674473 +0000 UTC m=+1.170862584" May 14 05:08:56.906843 kubelet[2889]: I0514 05:08:56.906795 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.906793019 podStartE2EDuration="2.906793019s" podCreationTimestamp="2025-05-14 05:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:08:56.904818207 +0000 UTC m=+1.168936053" watchObservedRunningTime="2025-05-14 05:08:56.906793019 +0000 UTC m=+1.170910867" May 14 05:08:57.796163 sudo[1936]: pam_unix(sudo:session): session closed for user root May 14 05:08:57.796866 sshd[1935]: Connection closed by 139.178.89.65 port 34110 May 14 05:08:57.797558 sshd-session[1933]: pam_unix(sshd:session): session closed for user core May 14 05:08:57.799453 systemd[1]: sshd@6-139.178.70.105:22-139.178.89.65:34110.service: Deactivated successfully. May 14 05:08:57.800974 systemd[1]: session-9.scope: Deactivated successfully. May 14 05:08:57.801135 systemd[1]: session-9.scope: Consumed 3.039s CPU time, 209.7M memory peak. May 14 05:08:57.802156 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. May 14 05:08:57.803186 systemd-logind[1573]: Removed session 9. May 14 05:09:00.776515 kubelet[2889]: I0514 05:09:00.776493 2889 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 05:09:00.776772 containerd[1604]: time="2025-05-14T05:09:00.776674589Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 05:09:00.776894 kubelet[2889]: I0514 05:09:00.776804 2889 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 05:09:01.878051 systemd[1]: Created slice kubepods-besteffort-podd0ed6790_7bfc_41b7_917e_c82547d53b79.slice - libcontainer container kubepods-besteffort-podd0ed6790_7bfc_41b7_917e_c82547d53b79.slice. May 14 05:09:01.888171 systemd[1]: Created slice kubepods-burstable-pod1f72d0e9_16f8_42de_b237_cb3cf049fc9e.slice - libcontainer container kubepods-burstable-pod1f72d0e9_16f8_42de_b237_cb3cf049fc9e.slice. May 14 05:09:01.955382 kubelet[2889]: I0514 05:09:01.955353 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-net\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.955382 kubelet[2889]: I0514 05:09:01.955384 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-kernel\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.955945 kubelet[2889]: I0514 05:09:01.955401 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0ed6790-7bfc-41b7-917e-c82547d53b79-kube-proxy\") pod \"kube-proxy-lmj4z\" (UID: \"d0ed6790-7bfc-41b7-917e-c82547d53b79\") " pod="kube-system/kube-proxy-lmj4z" May 14 05:09:01.955945 kubelet[2889]: I0514 05:09:01.955412 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq42j\" (UniqueName: \"kubernetes.io/projected/d0ed6790-7bfc-41b7-917e-c82547d53b79-kube-api-access-jq42j\") pod \"kube-proxy-lmj4z\" (UID: \"d0ed6790-7bfc-41b7-917e-c82547d53b79\") " pod="kube-system/kube-proxy-lmj4z" May 14 05:09:01.955945 kubelet[2889]: I0514 05:09:01.955563 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-lib-modules\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.955945 kubelet[2889]: I0514 05:09:01.955578 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hubble-tls\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.955945 kubelet[2889]: I0514 05:09:01.955591 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0ed6790-7bfc-41b7-917e-c82547d53b79-lib-modules\") pod \"kube-proxy-lmj4z\" (UID: \"d0ed6790-7bfc-41b7-917e-c82547d53b79\") " pod="kube-system/kube-proxy-lmj4z" May 14 05:09:01.955945 kubelet[2889]: I0514 05:09:01.955615 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-run\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") 
" pod="kube-system/cilium-drksv" May 14 05:09:01.956384 kubelet[2889]: I0514 05:09:01.955648 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-etc-cni-netd\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956384 kubelet[2889]: I0514 05:09:01.955697 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-cgroup\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956384 kubelet[2889]: I0514 05:09:01.955719 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0ed6790-7bfc-41b7-917e-c82547d53b79-xtables-lock\") pod \"kube-proxy-lmj4z\" (UID: \"d0ed6790-7bfc-41b7-917e-c82547d53b79\") " pod="kube-system/kube-proxy-lmj4z" May 14 05:09:01.956384 kubelet[2889]: I0514 05:09:01.955735 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cni-path\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956384 kubelet[2889]: I0514 05:09:01.955747 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hvp\" (UniqueName: \"kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-kube-api-access-77hvp\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956384 kubelet[2889]: I0514 05:09:01.955774 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-bpf-maps\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956828 kubelet[2889]: I0514 05:09:01.955786 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hostproc\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956828 kubelet[2889]: I0514 05:09:01.955798 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-xtables-lock\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956828 kubelet[2889]: I0514 05:09:01.955810 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-clustermesh-secrets\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.956828 kubelet[2889]: I0514 05:09:01.955826 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-config-path\") pod \"cilium-drksv\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " pod="kube-system/cilium-drksv" May 14 05:09:01.970321 systemd[1]: Created slice kubepods-besteffort-pod4926213b_8b5a_4019_bd68_d79cd14a0813.slice - libcontainer container kubepods-besteffort-pod4926213b_8b5a_4019_bd68_d79cd14a0813.slice. May 14 05:09:02.056525 kubelet[2889]: I0514 05:09:02.056495 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ljn\" (UniqueName: \"kubernetes.io/projected/4926213b-8b5a-4019-bd68-d79cd14a0813-kube-api-access-d2ljn\") pod \"cilium-operator-5d85765b45-sk46x\" (UID: \"4926213b-8b5a-4019-bd68-d79cd14a0813\") " pod="kube-system/cilium-operator-5d85765b45-sk46x" May 14 05:09:02.056641 kubelet[2889]: I0514 05:09:02.056625 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4926213b-8b5a-4019-bd68-d79cd14a0813-cilium-config-path\") pod \"cilium-operator-5d85765b45-sk46x\" (UID: \"4926213b-8b5a-4019-bd68-d79cd14a0813\") " pod="kube-system/cilium-operator-5d85765b45-sk46x" May 14 05:09:02.187976 containerd[1604]: time="2025-05-14T05:09:02.187794555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmj4z,Uid:d0ed6790-7bfc-41b7-917e-c82547d53b79,Namespace:kube-system,Attempt:0,}" May 14 05:09:02.193578 containerd[1604]: time="2025-05-14T05:09:02.193552139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drksv,Uid:1f72d0e9-16f8-42de-b237-cb3cf049fc9e,Namespace:kube-system,Attempt:0,}" May 14 05:09:02.202098 containerd[1604]: time="2025-05-14T05:09:02.201907580Z" level=info msg="connecting to shim f5fd8e71b75cd171814a05986a12ac5a5dbce531a472d9d19e1265a30d2d9516" address="unix:///run/containerd/s/ad43918db99d5b7f671fb37e4c0a1624c993f866348fc01bd8d3e4bcf514b4b5" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:02.207938 containerd[1604]: time="2025-05-14T05:09:02.207900078Z" level=info msg="connecting to shim 38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2" address="unix:///run/containerd/s/07461d805788d21ad13796fca49d3a17508ab23b012d75d2aed8c5318278e409" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:02.220868 systemd[1]: Started cri-containerd-f5fd8e71b75cd171814a05986a12ac5a5dbce531a472d9d19e1265a30d2d9516.scope - libcontainer container f5fd8e71b75cd171814a05986a12ac5a5dbce531a472d9d19e1265a30d2d9516. May 14 05:09:02.225443 systemd[1]: Started cri-containerd-38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2.scope - libcontainer container 38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2. 
May 14 05:09:02.247016 containerd[1604]: time="2025-05-14T05:09:02.246997703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmj4z,Uid:d0ed6790-7bfc-41b7-917e-c82547d53b79,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5fd8e71b75cd171814a05986a12ac5a5dbce531a472d9d19e1265a30d2d9516\"" May 14 05:09:02.249203 containerd[1604]: time="2025-05-14T05:09:02.248812575Z" level=info msg="CreateContainer within sandbox \"f5fd8e71b75cd171814a05986a12ac5a5dbce531a472d9d19e1265a30d2d9516\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 05:09:02.251903 containerd[1604]: time="2025-05-14T05:09:02.251875068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drksv,Uid:1f72d0e9-16f8-42de-b237-cb3cf049fc9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\"" May 14 05:09:02.253163 containerd[1604]: time="2025-05-14T05:09:02.253146827Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 05:09:02.272800 containerd[1604]: time="2025-05-14T05:09:02.272774429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sk46x,Uid:4926213b-8b5a-4019-bd68-d79cd14a0813,Namespace:kube-system,Attempt:0,}" May 14 05:09:02.278272 containerd[1604]: time="2025-05-14T05:09:02.278253045Z" level=info msg="Container 15bbaa0487fb9fa16d6908e560daa18e55ad048e0e923a9bb1b8f6b81d1055bb: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:02.295064 containerd[1604]: time="2025-05-14T05:09:02.295039409Z" level=info msg="CreateContainer within sandbox \"f5fd8e71b75cd171814a05986a12ac5a5dbce531a472d9d19e1265a30d2d9516\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15bbaa0487fb9fa16d6908e560daa18e55ad048e0e923a9bb1b8f6b81d1055bb\"" May 14 05:09:02.295694 containerd[1604]: time="2025-05-14T05:09:02.295616098Z" level=info msg="StartContainer for \"15bbaa0487fb9fa16d6908e560daa18e55ad048e0e923a9bb1b8f6b81d1055bb\"" May 14 05:09:02.296590 containerd[1604]: time="2025-05-14T05:09:02.296571303Z" level=info msg="connecting to shim 15bbaa0487fb9fa16d6908e560daa18e55ad048e0e923a9bb1b8f6b81d1055bb" address="unix:///run/containerd/s/ad43918db99d5b7f671fb37e4c0a1624c993f866348fc01bd8d3e4bcf514b4b5" protocol=ttrpc version=3 May 14 05:09:02.303365 containerd[1604]: time="2025-05-14T05:09:02.302889357Z" level=info msg="connecting to shim 77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1" address="unix:///run/containerd/s/766430c092d17af8f54bef842164748778cf8a1f4de4f877382ecec2a8f74270" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:02.313818 systemd[1]: Started cri-containerd-15bbaa0487fb9fa16d6908e560daa18e55ad048e0e923a9bb1b8f6b81d1055bb.scope - libcontainer container 15bbaa0487fb9fa16d6908e560daa18e55ad048e0e923a9bb1b8f6b81d1055bb. May 14 05:09:02.331833 systemd[1]: Started cri-containerd-77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1.scope - libcontainer container 77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1. 
May 14 05:09:02.349659 containerd[1604]: time="2025-05-14T05:09:02.349635618Z" level=info msg="StartContainer for \"15bbaa0487fb9fa16d6908e560daa18e55ad048e0e923a9bb1b8f6b81d1055bb\" returns successfully" May 14 05:09:02.374054 containerd[1604]: time="2025-05-14T05:09:02.374030606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-sk46x,Uid:4926213b-8b5a-4019-bd68-d79cd14a0813,Namespace:kube-system,Attempt:0,} returns sandbox id \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\"" May 14 05:09:04.150893 kubelet[2889]: I0514 05:09:04.150758 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lmj4z" podStartSLOduration=3.150743283 podStartE2EDuration="3.150743283s" podCreationTimestamp="2025-05-14 05:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:09:02.862727089 +0000 UTC m=+7.126844944" watchObservedRunningTime="2025-05-14 05:09:04.150743283 +0000 UTC m=+8.414861138" May 14 05:09:06.345777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1930411129.mount: Deactivated successfully. May 14 05:09:08.392207 containerd[1604]: time="2025-05-14T05:09:08.392157701Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:08.393528 containerd[1604]: time="2025-05-14T05:09:08.393499338Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 14 05:09:08.393995 containerd[1604]: time="2025-05-14T05:09:08.393972891Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:08.396157 containerd[1604]: time="2025-05-14T05:09:08.396119411Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.142736338s" May 14 05:09:08.396235 containerd[1604]: time="2025-05-14T05:09:08.396172646Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 14 05:09:08.397696 containerd[1604]: time="2025-05-14T05:09:08.397635513Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 05:09:08.399231 containerd[1604]: time="2025-05-14T05:09:08.398824791Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 05:09:08.410029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087215038.mount: Deactivated successfully. 
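The pull entries above record 166730503 bytes read for the cilium image and an elapsed pull time of 6.142736338s. A throwaway calculation of the implied average transfer rate (assumption: "bytes read" in the "stop pulling image" entry approximates the bytes actually transferred):

    # Rough throughput implied by the cilium image pull logged above.
    bytes_read = 166_730_503      # from "stop pulling image ... bytes read=166730503"
    elapsed_s = 6.142736338       # from "Pulled image ... in 6.142736338s"

    print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # roughly 27.1 MB/s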
May 14 05:09:08.410793 containerd[1604]: time="2025-05-14T05:09:08.410773897Z" level=info msg="Container 389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:08.412103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947495656.mount: Deactivated successfully. May 14 05:09:08.425208 containerd[1604]: time="2025-05-14T05:09:08.425183425Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\"" May 14 05:09:08.425920 containerd[1604]: time="2025-05-14T05:09:08.425485044Z" level=info msg="StartContainer for \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\"" May 14 05:09:08.426111 containerd[1604]: time="2025-05-14T05:09:08.426071830Z" level=info msg="connecting to shim 389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a" address="unix:///run/containerd/s/07461d805788d21ad13796fca49d3a17508ab23b012d75d2aed8c5318278e409" protocol=ttrpc version=3 May 14 05:09:08.461807 systemd[1]: Started cri-containerd-389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a.scope - libcontainer container 389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a. May 14 05:09:08.481560 containerd[1604]: time="2025-05-14T05:09:08.481536397Z" level=info msg="StartContainer for \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" returns successfully" May 14 05:09:08.488711 systemd[1]: cri-containerd-389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a.scope: Deactivated successfully. May 14 05:09:08.523694 containerd[1604]: time="2025-05-14T05:09:08.523583045Z" level=info msg="received exit event container_id:\"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" id:\"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" pid:3305 exited_at:{seconds:1747199348 nanos:490447266}" May 14 05:09:08.524159 containerd[1604]: time="2025-05-14T05:09:08.524096667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" id:\"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" pid:3305 exited_at:{seconds:1747199348 nanos:490447266}" May 14 05:09:08.870905 containerd[1604]: time="2025-05-14T05:09:08.870496865Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 05:09:08.875688 containerd[1604]: time="2025-05-14T05:09:08.875337736Z" level=info msg="Container 1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:08.891355 containerd[1604]: time="2025-05-14T05:09:08.891325496Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\"" May 14 05:09:08.891860 containerd[1604]: time="2025-05-14T05:09:08.891823158Z" level=info msg="StartContainer for \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\"" May 14 05:09:08.892457 containerd[1604]: time="2025-05-14T05:09:08.892434830Z" level=info msg="connecting to shim 
1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae" address="unix:///run/containerd/s/07461d805788d21ad13796fca49d3a17508ab23b012d75d2aed8c5318278e409" protocol=ttrpc version=3 May 14 05:09:08.905773 systemd[1]: Started cri-containerd-1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae.scope - libcontainer container 1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae. May 14 05:09:08.923164 containerd[1604]: time="2025-05-14T05:09:08.923134698Z" level=info msg="StartContainer for \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" returns successfully" May 14 05:09:08.930345 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 05:09:08.930485 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 05:09:08.930671 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 05:09:08.932351 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 05:09:08.933560 systemd[1]: cri-containerd-1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae.scope: Deactivated successfully. May 14 05:09:08.934549 containerd[1604]: time="2025-05-14T05:09:08.934498238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" id:\"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" pid:3349 exited_at:{seconds:1747199348 nanos:933909717}" May 14 05:09:08.934549 containerd[1604]: time="2025-05-14T05:09:08.934519269Z" level=info msg="received exit event container_id:\"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" id:\"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" pid:3349 exited_at:{seconds:1747199348 nanos:933909717}" May 14 05:09:08.956930 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 05:09:09.408912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a-rootfs.mount: Deactivated successfully. May 14 05:09:09.580067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748984837.mount: Deactivated successfully. 
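The TaskExit events above carry exit times as Unix epoch seconds plus nanoseconds, e.g. exited_at:{seconds:1747199348 nanos:933909717} for container 1567c8e3... A quick illustrative conversion showing that this epoch value lines up with the surrounding journal timestamps:

    from datetime import datetime, timezone

    # exited_at values copied from the TaskExit event above.
    exited = datetime.fromtimestamp(1747199348 + 933909717 / 1e9, tz=timezone.utc)
    print(exited.isoformat())
    # ~2025-05-14T05:09:08.933910+00:00, matching the May 14 05:09:08.934* entries above.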
May 14 05:09:09.874172 containerd[1604]: time="2025-05-14T05:09:09.874141006Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 05:09:09.886150 containerd[1604]: time="2025-05-14T05:09:09.886030207Z" level=info msg="Container 9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:09.898946 containerd[1604]: time="2025-05-14T05:09:09.898923834Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\"" May 14 05:09:09.899173 containerd[1604]: time="2025-05-14T05:09:09.899159092Z" level=info msg="StartContainer for \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\"" May 14 05:09:09.900097 containerd[1604]: time="2025-05-14T05:09:09.900022424Z" level=info msg="connecting to shim 9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532" address="unix:///run/containerd/s/07461d805788d21ad13796fca49d3a17508ab23b012d75d2aed8c5318278e409" protocol=ttrpc version=3 May 14 05:09:09.919767 systemd[1]: Started cri-containerd-9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532.scope - libcontainer container 9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532. May 14 05:09:09.949911 containerd[1604]: time="2025-05-14T05:09:09.949877553Z" level=info msg="StartContainer for \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" returns successfully" May 14 05:09:09.958343 systemd[1]: cri-containerd-9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532.scope: Deactivated successfully. May 14 05:09:09.958510 systemd[1]: cri-containerd-9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532.scope: Consumed 14ms CPU time, 5.6M memory peak, 1M read from disk. 
May 14 05:09:09.959891 containerd[1604]: time="2025-05-14T05:09:09.959838537Z" level=info msg="received exit event container_id:\"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" id:\"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" pid:3407 exited_at:{seconds:1747199349 nanos:959712283}" May 14 05:09:09.960032 containerd[1604]: time="2025-05-14T05:09:09.960002329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" id:\"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" pid:3407 exited_at:{seconds:1747199349 nanos:959712283}" May 14 05:09:10.176725 containerd[1604]: time="2025-05-14T05:09:10.176392163Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:10.177383 containerd[1604]: time="2025-05-14T05:09:10.177362696Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 14 05:09:10.177606 containerd[1604]: time="2025-05-14T05:09:10.177595523Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:10.178727 containerd[1604]: time="2025-05-14T05:09:10.178714810Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.781057379s" May 14 05:09:10.178790 containerd[1604]: time="2025-05-14T05:09:10.178781036Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 14 05:09:10.180562 containerd[1604]: time="2025-05-14T05:09:10.180550498Z" level=info msg="CreateContainer within sandbox \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 05:09:10.184496 containerd[1604]: time="2025-05-14T05:09:10.184212262Z" level=info msg="Container 3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:10.186976 containerd[1604]: time="2025-05-14T05:09:10.186963954Z" level=info msg="CreateContainer within sandbox \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\"" May 14 05:09:10.187299 containerd[1604]: time="2025-05-14T05:09:10.187288608Z" level=info msg="StartContainer for \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\"" May 14 05:09:10.187794 containerd[1604]: time="2025-05-14T05:09:10.187783149Z" level=info msg="connecting to shim 3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958" 
address="unix:///run/containerd/s/766430c092d17af8f54bef842164748778cf8a1f4de4f877382ecec2a8f74270" protocol=ttrpc version=3 May 14 05:09:10.201765 systemd[1]: Started cri-containerd-3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958.scope - libcontainer container 3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958. May 14 05:09:10.220519 containerd[1604]: time="2025-05-14T05:09:10.220482115Z" level=info msg="StartContainer for \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" returns successfully" May 14 05:09:10.881780 kubelet[2889]: I0514 05:09:10.881698 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-sk46x" podStartSLOduration=2.077350709 podStartE2EDuration="9.881686531s" podCreationTimestamp="2025-05-14 05:09:01 +0000 UTC" firstStartedPulling="2025-05-14 05:09:02.374996218 +0000 UTC m=+6.639114063" lastFinishedPulling="2025-05-14 05:09:10.17933204 +0000 UTC m=+14.443449885" observedRunningTime="2025-05-14 05:09:10.881037533 +0000 UTC m=+15.145155381" watchObservedRunningTime="2025-05-14 05:09:10.881686531 +0000 UTC m=+15.145804380" May 14 05:09:10.885939 containerd[1604]: time="2025-05-14T05:09:10.885916653Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 05:09:10.893925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628002115.mount: Deactivated successfully. May 14 05:09:10.895842 containerd[1604]: time="2025-05-14T05:09:10.895782122Z" level=info msg="Container 54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:10.901575 containerd[1604]: time="2025-05-14T05:09:10.901553393Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\"" May 14 05:09:10.902035 containerd[1604]: time="2025-05-14T05:09:10.902013971Z" level=info msg="StartContainer for \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\"" May 14 05:09:10.903729 containerd[1604]: time="2025-05-14T05:09:10.902791350Z" level=info msg="connecting to shim 54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb" address="unix:///run/containerd/s/07461d805788d21ad13796fca49d3a17508ab23b012d75d2aed8c5318278e409" protocol=ttrpc version=3 May 14 05:09:10.919774 systemd[1]: Started cri-containerd-54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb.scope - libcontainer container 54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb. May 14 05:09:10.937097 systemd[1]: cri-containerd-54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb.scope: Deactivated successfully. 
May 14 05:09:10.937436 containerd[1604]: time="2025-05-14T05:09:10.937408591Z" level=info msg="StartContainer for \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" returns successfully" May 14 05:09:10.938153 containerd[1604]: time="2025-05-14T05:09:10.938137528Z" level=info msg="received exit event container_id:\"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" id:\"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" pid:3484 exited_at:{seconds:1747199350 nanos:938047625}" May 14 05:09:10.938832 containerd[1604]: time="2025-05-14T05:09:10.938817805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" id:\"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" pid:3484 exited_at:{seconds:1747199350 nanos:938047625}" May 14 05:09:11.409028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb-rootfs.mount: Deactivated successfully. May 14 05:09:11.889015 containerd[1604]: time="2025-05-14T05:09:11.888878833Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 05:09:11.898412 containerd[1604]: time="2025-05-14T05:09:11.898323930Z" level=info msg="Container 308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:11.905141 containerd[1604]: time="2025-05-14T05:09:11.905055077Z" level=info msg="CreateContainer within sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\"" May 14 05:09:11.905787 containerd[1604]: time="2025-05-14T05:09:11.905763508Z" level=info msg="StartContainer for \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\"" May 14 05:09:11.906530 containerd[1604]: time="2025-05-14T05:09:11.906518573Z" level=info msg="connecting to shim 308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86" address="unix:///run/containerd/s/07461d805788d21ad13796fca49d3a17508ab23b012d75d2aed8c5318278e409" protocol=ttrpc version=3 May 14 05:09:11.924763 systemd[1]: Started cri-containerd-308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86.scope - libcontainer container 308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86. May 14 05:09:11.945534 containerd[1604]: time="2025-05-14T05:09:11.945510270Z" level=info msg="StartContainer for \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" returns successfully" May 14 05:09:12.056039 containerd[1604]: time="2025-05-14T05:09:12.055949499Z" level=info msg="TaskExit event in podsandbox handler container_id:\"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" id:\"67636d3119860eb5dfeb65d7601c4586be2f7c7f61dae02919b921b6ceebe2ed\" pid:3552 exited_at:{seconds:1747199352 nanos:55617004}" May 14 05:09:12.139180 kubelet[2889]: I0514 05:09:12.139086 2889 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 05:09:12.161744 systemd[1]: Created slice kubepods-burstable-pod1293ed4c_e2f8_46b2_9651_835cc2ea1c4f.slice - libcontainer container kubepods-burstable-pod1293ed4c_e2f8_46b2_9651_835cc2ea1c4f.slice. 
May 14 05:09:12.167753 systemd[1]: Created slice kubepods-burstable-pod8c38e4f5_0e87_4275_96d0_41651277c5c9.slice - libcontainer container kubepods-burstable-pod8c38e4f5_0e87_4275_96d0_41651277c5c9.slice. May 14 05:09:12.321281 kubelet[2889]: I0514 05:09:12.321255 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c38e4f5-0e87-4275-96d0-41651277c5c9-config-volume\") pod \"coredns-6f6b679f8f-87665\" (UID: \"8c38e4f5-0e87-4275-96d0-41651277c5c9\") " pod="kube-system/coredns-6f6b679f8f-87665" May 14 05:09:12.321281 kubelet[2889]: I0514 05:09:12.321283 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btcrh\" (UniqueName: \"kubernetes.io/projected/8c38e4f5-0e87-4275-96d0-41651277c5c9-kube-api-access-btcrh\") pod \"coredns-6f6b679f8f-87665\" (UID: \"8c38e4f5-0e87-4275-96d0-41651277c5c9\") " pod="kube-system/coredns-6f6b679f8f-87665" May 14 05:09:12.321389 kubelet[2889]: I0514 05:09:12.321299 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1293ed4c-e2f8-46b2-9651-835cc2ea1c4f-config-volume\") pod \"coredns-6f6b679f8f-lnc7b\" (UID: \"1293ed4c-e2f8-46b2-9651-835cc2ea1c4f\") " pod="kube-system/coredns-6f6b679f8f-lnc7b" May 14 05:09:12.321389 kubelet[2889]: I0514 05:09:12.321310 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthnc\" (UniqueName: \"kubernetes.io/projected/1293ed4c-e2f8-46b2-9651-835cc2ea1c4f-kube-api-access-zthnc\") pod \"coredns-6f6b679f8f-lnc7b\" (UID: \"1293ed4c-e2f8-46b2-9651-835cc2ea1c4f\") " pod="kube-system/coredns-6f6b679f8f-lnc7b" May 14 05:09:12.466773 containerd[1604]: time="2025-05-14T05:09:12.466662711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lnc7b,Uid:1293ed4c-e2f8-46b2-9651-835cc2ea1c4f,Namespace:kube-system,Attempt:0,}" May 14 05:09:12.474247 containerd[1604]: time="2025-05-14T05:09:12.473792586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-87665,Uid:8c38e4f5-0e87-4275-96d0-41651277c5c9,Namespace:kube-system,Attempt:0,}" May 14 05:09:14.167173 systemd-networkd[1536]: cilium_host: Link UP May 14 05:09:14.167979 systemd-networkd[1536]: cilium_net: Link UP May 14 05:09:14.168847 systemd-networkd[1536]: cilium_net: Gained carrier May 14 05:09:14.169707 systemd-networkd[1536]: cilium_host: Gained carrier May 14 05:09:14.327733 systemd-networkd[1536]: cilium_vxlan: Link UP May 14 05:09:14.328617 systemd-networkd[1536]: cilium_vxlan: Gained carrier May 14 05:09:14.471762 systemd-networkd[1536]: cilium_net: Gained IPv6LL May 14 05:09:14.727764 systemd-networkd[1536]: cilium_host: Gained IPv6LL May 14 05:09:15.556025 kernel: NET: Registered PF_ALG protocol family May 14 05:09:16.058432 systemd-networkd[1536]: lxc_health: Link UP May 14 05:09:16.074871 systemd-networkd[1536]: lxc_health: Gained carrier May 14 05:09:16.136839 systemd-networkd[1536]: cilium_vxlan: Gained IPv6LL May 14 05:09:16.208948 kubelet[2889]: I0514 05:09:16.208873 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-drksv" podStartSLOduration=9.063384374 podStartE2EDuration="15.207830496s" podCreationTimestamp="2025-05-14 05:09:01 +0000 UTC" firstStartedPulling="2025-05-14 05:09:02.252491576 +0000 UTC m=+6.516609421" 
lastFinishedPulling="2025-05-14 05:09:08.396937689 +0000 UTC m=+12.661055543" observedRunningTime="2025-05-14 05:09:12.900614825 +0000 UTC m=+17.164732679" watchObservedRunningTime="2025-05-14 05:09:16.207830496 +0000 UTC m=+20.471948351" May 14 05:09:16.522826 kernel: eth0: renamed from tmp88f5a May 14 05:09:16.521191 systemd-networkd[1536]: lxc7b2e8a73e3c2: Link UP May 14 05:09:16.526237 systemd-networkd[1536]: lxc7b2e8a73e3c2: Gained carrier May 14 05:09:16.534759 systemd-networkd[1536]: lxc463177a360e3: Link UP May 14 05:09:16.541900 kernel: eth0: renamed from tmp938b1 May 14 05:09:16.543721 systemd-networkd[1536]: lxc463177a360e3: Gained carrier May 14 05:09:17.288771 systemd-networkd[1536]: lxc_health: Gained IPv6LL May 14 05:09:18.055794 systemd-networkd[1536]: lxc7b2e8a73e3c2: Gained IPv6LL May 14 05:09:18.439770 systemd-networkd[1536]: lxc463177a360e3: Gained IPv6LL May 14 05:09:19.389868 containerd[1604]: time="2025-05-14T05:09:19.389825704Z" level=info msg="connecting to shim 938b1366bc346e97da3fdaf8e6f4c3abe60e5fac6ad3bd445203198a4270bf0f" address="unix:///run/containerd/s/dd4a70a455966e358008a68e3899c476d36f0ccd2bfe780d7a7e1672006a9a28" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:19.392768 containerd[1604]: time="2025-05-14T05:09:19.391786842Z" level=info msg="connecting to shim 88f5ad94e6ac01d7b2db930ecb8345c2ebaa8de232928be02dde73a6242efc92" address="unix:///run/containerd/s/e8e8e9e6203368bfdad7a1477ded5f7bac895403255b6ba8d0b54aa5b9c22d59" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:19.416833 systemd[1]: Started cri-containerd-938b1366bc346e97da3fdaf8e6f4c3abe60e5fac6ad3bd445203198a4270bf0f.scope - libcontainer container 938b1366bc346e97da3fdaf8e6f4c3abe60e5fac6ad3bd445203198a4270bf0f. May 14 05:09:19.419586 systemd[1]: Started cri-containerd-88f5ad94e6ac01d7b2db930ecb8345c2ebaa8de232928be02dde73a6242efc92.scope - libcontainer container 88f5ad94e6ac01d7b2db930ecb8345c2ebaa8de232928be02dde73a6242efc92. 
May 14 05:09:19.429760 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:09:19.435362 systemd-resolved[1484]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 05:09:19.470274 containerd[1604]: time="2025-05-14T05:09:19.470249426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-87665,Uid:8c38e4f5-0e87-4275-96d0-41651277c5c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"88f5ad94e6ac01d7b2db930ecb8345c2ebaa8de232928be02dde73a6242efc92\"" May 14 05:09:19.472429 containerd[1604]: time="2025-05-14T05:09:19.472408882Z" level=info msg="CreateContainer within sandbox \"88f5ad94e6ac01d7b2db930ecb8345c2ebaa8de232928be02dde73a6242efc92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 05:09:19.477933 containerd[1604]: time="2025-05-14T05:09:19.477822219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lnc7b,Uid:1293ed4c-e2f8-46b2-9651-835cc2ea1c4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"938b1366bc346e97da3fdaf8e6f4c3abe60e5fac6ad3bd445203198a4270bf0f\"" May 14 05:09:19.479161 containerd[1604]: time="2025-05-14T05:09:19.479128228Z" level=info msg="CreateContainer within sandbox \"938b1366bc346e97da3fdaf8e6f4c3abe60e5fac6ad3bd445203198a4270bf0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 05:09:19.529967 containerd[1604]: time="2025-05-14T05:09:19.529933445Z" level=info msg="Container 005a21e806fde61f868b70cb2d0dec82815ee875f95be33574c395d20c962a0d: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:19.530828 containerd[1604]: time="2025-05-14T05:09:19.530734755Z" level=info msg="Container 30e8f9ba45c383042de3c78b6c2af3aa7cbcad8222ba68d0987698760af931be: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:19.535141 containerd[1604]: time="2025-05-14T05:09:19.535106135Z" level=info msg="CreateContainer within sandbox \"88f5ad94e6ac01d7b2db930ecb8345c2ebaa8de232928be02dde73a6242efc92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"005a21e806fde61f868b70cb2d0dec82815ee875f95be33574c395d20c962a0d\"" May 14 05:09:19.535758 containerd[1604]: time="2025-05-14T05:09:19.535735592Z" level=info msg="StartContainer for \"005a21e806fde61f868b70cb2d0dec82815ee875f95be33574c395d20c962a0d\"" May 14 05:09:19.536954 containerd[1604]: time="2025-05-14T05:09:19.536713454Z" level=info msg="connecting to shim 005a21e806fde61f868b70cb2d0dec82815ee875f95be33574c395d20c962a0d" address="unix:///run/containerd/s/e8e8e9e6203368bfdad7a1477ded5f7bac895403255b6ba8d0b54aa5b9c22d59" protocol=ttrpc version=3 May 14 05:09:19.537546 containerd[1604]: time="2025-05-14T05:09:19.537503190Z" level=info msg="CreateContainer within sandbox \"938b1366bc346e97da3fdaf8e6f4c3abe60e5fac6ad3bd445203198a4270bf0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30e8f9ba45c383042de3c78b6c2af3aa7cbcad8222ba68d0987698760af931be\"" May 14 05:09:19.538167 containerd[1604]: time="2025-05-14T05:09:19.538135422Z" level=info msg="StartContainer for \"30e8f9ba45c383042de3c78b6c2af3aa7cbcad8222ba68d0987698760af931be\"" May 14 05:09:19.540758 containerd[1604]: time="2025-05-14T05:09:19.540721703Z" level=info msg="connecting to shim 30e8f9ba45c383042de3c78b6c2af3aa7cbcad8222ba68d0987698760af931be" address="unix:///run/containerd/s/dd4a70a455966e358008a68e3899c476d36f0ccd2bfe780d7a7e1672006a9a28" protocol=ttrpc version=3 May 14 05:09:19.557862 
systemd[1]: Started cri-containerd-005a21e806fde61f868b70cb2d0dec82815ee875f95be33574c395d20c962a0d.scope - libcontainer container 005a21e806fde61f868b70cb2d0dec82815ee875f95be33574c395d20c962a0d. May 14 05:09:19.562816 systemd[1]: Started cri-containerd-30e8f9ba45c383042de3c78b6c2af3aa7cbcad8222ba68d0987698760af931be.scope - libcontainer container 30e8f9ba45c383042de3c78b6c2af3aa7cbcad8222ba68d0987698760af931be. May 14 05:09:19.609925 containerd[1604]: time="2025-05-14T05:09:19.609892659Z" level=info msg="StartContainer for \"30e8f9ba45c383042de3c78b6c2af3aa7cbcad8222ba68d0987698760af931be\" returns successfully" May 14 05:09:19.610127 containerd[1604]: time="2025-05-14T05:09:19.610115631Z" level=info msg="StartContainer for \"005a21e806fde61f868b70cb2d0dec82815ee875f95be33574c395d20c962a0d\" returns successfully" May 14 05:09:19.915217 kubelet[2889]: I0514 05:09:19.915171 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lnc7b" podStartSLOduration=18.915159843 podStartE2EDuration="18.915159843s" podCreationTimestamp="2025-05-14 05:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:09:19.914742445 +0000 UTC m=+24.178860310" watchObservedRunningTime="2025-05-14 05:09:19.915159843 +0000 UTC m=+24.179277691" May 14 05:09:20.374816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount813600236.mount: Deactivated successfully. May 14 05:09:20.927366 kubelet[2889]: I0514 05:09:20.927324 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-87665" podStartSLOduration=19.927310128 podStartE2EDuration="19.927310128s" podCreationTimestamp="2025-05-14 05:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:09:19.928497726 +0000 UTC m=+24.192615580" watchObservedRunningTime="2025-05-14 05:09:20.927310128 +0000 UTC m=+25.191427981" May 14 05:09:26.820020 kubelet[2889]: I0514 05:09:26.819629 2889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 05:10:04.830958 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.89.65:38580.service - OpenSSH per-connection server daemon (139.178.89.65:38580). May 14 05:10:04.914083 sshd[4213]: Accepted publickey for core from 139.178.89.65 port 38580 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:04.915252 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:04.918663 systemd-logind[1573]: New session 10 of user core. May 14 05:10:04.927802 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 05:10:05.553457 sshd[4215]: Connection closed by 139.178.89.65 port 38580 May 14 05:10:05.553952 sshd-session[4213]: pam_unix(sshd:session): session closed for user core May 14 05:10:05.561925 systemd[1]: sshd@7-139.178.70.105:22-139.178.89.65:38580.service: Deactivated successfully. May 14 05:10:05.563260 systemd[1]: session-10.scope: Deactivated successfully. May 14 05:10:05.564258 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. May 14 05:10:05.564980 systemd-logind[1573]: Removed session 10. May 14 05:10:10.562741 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.89.65:38780.service - OpenSSH per-connection server daemon (139.178.89.65:38780). 
May 14 05:10:10.754523 sshd[4228]: Accepted publickey for core from 139.178.89.65 port 38780 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:10.755716 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:10.758322 systemd-logind[1573]: New session 11 of user core. May 14 05:10:10.763766 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 05:10:11.034080 sshd[4230]: Connection closed by 139.178.89.65 port 38780 May 14 05:10:11.033599 sshd-session[4228]: pam_unix(sshd:session): session closed for user core May 14 05:10:11.037183 systemd[1]: sshd@8-139.178.70.105:22-139.178.89.65:38780.service: Deactivated successfully. May 14 05:10:11.037453 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. May 14 05:10:11.039068 systemd[1]: session-11.scope: Deactivated successfully. May 14 05:10:11.040672 systemd-logind[1573]: Removed session 11. May 14 05:10:16.045024 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.89.65:38790.service - OpenSSH per-connection server daemon (139.178.89.65:38790). May 14 05:10:16.086750 sshd[4242]: Accepted publickey for core from 139.178.89.65 port 38790 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:16.088203 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:16.092904 systemd-logind[1573]: New session 12 of user core. May 14 05:10:16.097768 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 05:10:16.192699 sshd[4244]: Connection closed by 139.178.89.65 port 38790 May 14 05:10:16.193061 sshd-session[4242]: pam_unix(sshd:session): session closed for user core May 14 05:10:16.195154 systemd[1]: sshd@9-139.178.70.105:22-139.178.89.65:38790.service: Deactivated successfully. May 14 05:10:16.196408 systemd[1]: session-12.scope: Deactivated successfully. May 14 05:10:16.196936 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. May 14 05:10:16.197768 systemd-logind[1573]: Removed session 12. May 14 05:10:21.205119 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.89.65:36558.service - OpenSSH per-connection server daemon (139.178.89.65:36558). May 14 05:10:21.259380 sshd[4259]: Accepted publickey for core from 139.178.89.65 port 36558 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:21.260547 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:21.264513 systemd-logind[1573]: New session 13 of user core. May 14 05:10:21.271886 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 05:10:21.385944 sshd[4261]: Connection closed by 139.178.89.65 port 36558 May 14 05:10:21.385486 sshd-session[4259]: pam_unix(sshd:session): session closed for user core May 14 05:10:21.394038 systemd[1]: sshd@10-139.178.70.105:22-139.178.89.65:36558.service: Deactivated successfully. May 14 05:10:21.395903 systemd[1]: session-13.scope: Deactivated successfully. May 14 05:10:21.397726 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. May 14 05:10:21.399025 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.89.65:36568.service - OpenSSH per-connection server daemon (139.178.89.65:36568). May 14 05:10:21.400461 systemd-logind[1573]: Removed session 13. 
May 14 05:10:21.434485 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 36568 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:21.435286 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:21.438414 systemd-logind[1573]: New session 14 of user core. May 14 05:10:21.444761 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 05:10:21.575798 sshd[4276]: Connection closed by 139.178.89.65 port 36568 May 14 05:10:21.576312 sshd-session[4274]: pam_unix(sshd:session): session closed for user core May 14 05:10:21.586098 systemd[1]: sshd@11-139.178.70.105:22-139.178.89.65:36568.service: Deactivated successfully. May 14 05:10:21.590324 systemd[1]: session-14.scope: Deactivated successfully. May 14 05:10:21.592593 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. May 14 05:10:21.596585 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.89.65:36578.service - OpenSSH per-connection server daemon (139.178.89.65:36578). May 14 05:10:21.598912 systemd-logind[1573]: Removed session 14. May 14 05:10:21.633391 sshd[4285]: Accepted publickey for core from 139.178.89.65 port 36578 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:21.634202 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:21.637512 systemd-logind[1573]: New session 15 of user core. May 14 05:10:21.644770 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 05:10:21.733790 sshd[4287]: Connection closed by 139.178.89.65 port 36578 May 14 05:10:21.734149 sshd-session[4285]: pam_unix(sshd:session): session closed for user core May 14 05:10:21.736824 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. May 14 05:10:21.736928 systemd[1]: sshd@12-139.178.70.105:22-139.178.89.65:36578.service: Deactivated successfully. May 14 05:10:21.738038 systemd[1]: session-15.scope: Deactivated successfully. May 14 05:10:21.738831 systemd-logind[1573]: Removed session 15. May 14 05:10:26.744985 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.89.65:54646.service - OpenSSH per-connection server daemon (139.178.89.65:54646). May 14 05:10:26.785923 sshd[4298]: Accepted publickey for core from 139.178.89.65 port 54646 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:26.786825 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:26.790217 systemd-logind[1573]: New session 16 of user core. May 14 05:10:26.798998 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 05:10:26.898160 sshd[4300]: Connection closed by 139.178.89.65 port 54646 May 14 05:10:26.898535 sshd-session[4298]: pam_unix(sshd:session): session closed for user core May 14 05:10:26.904029 systemd[1]: sshd@13-139.178.70.105:22-139.178.89.65:54646.service: Deactivated successfully. May 14 05:10:26.905244 systemd[1]: session-16.scope: Deactivated successfully. May 14 05:10:26.906165 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. May 14 05:10:26.907945 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.89.65:54648.service - OpenSSH per-connection server daemon (139.178.89.65:54648). May 14 05:10:26.909021 systemd-logind[1573]: Removed session 16. 
May 14 05:10:26.945099 sshd[4312]: Accepted publickey for core from 139.178.89.65 port 54648 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:26.945986 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:26.949162 systemd-logind[1573]: New session 17 of user core. May 14 05:10:26.958934 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 05:10:27.636344 sshd[4314]: Connection closed by 139.178.89.65 port 54648 May 14 05:10:27.639327 sshd-session[4312]: pam_unix(sshd:session): session closed for user core May 14 05:10:27.645252 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.89.65:54658.service - OpenSSH per-connection server daemon (139.178.89.65:54658). May 14 05:10:27.659310 systemd[1]: sshd@14-139.178.70.105:22-139.178.89.65:54648.service: Deactivated successfully. May 14 05:10:27.660444 systemd[1]: session-17.scope: Deactivated successfully. May 14 05:10:27.661520 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. May 14 05:10:27.662658 systemd-logind[1573]: Removed session 17. May 14 05:10:27.746015 sshd[4321]: Accepted publickey for core from 139.178.89.65 port 54658 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:27.746998 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:27.749640 systemd-logind[1573]: New session 18 of user core. May 14 05:10:27.752785 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 05:10:29.416455 sshd[4326]: Connection closed by 139.178.89.65 port 54658 May 14 05:10:29.416389 sshd-session[4321]: pam_unix(sshd:session): session closed for user core May 14 05:10:29.425173 systemd[1]: sshd@15-139.178.70.105:22-139.178.89.65:54658.service: Deactivated successfully. May 14 05:10:29.428200 systemd[1]: session-18.scope: Deactivated successfully. May 14 05:10:29.429886 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. May 14 05:10:29.432670 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.89.65:54670.service - OpenSSH per-connection server daemon (139.178.89.65:54670). May 14 05:10:29.434273 systemd-logind[1573]: Removed session 18. May 14 05:10:29.488114 sshd[4343]: Accepted publickey for core from 139.178.89.65 port 54670 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:29.488759 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:29.492112 systemd-logind[1573]: New session 19 of user core. May 14 05:10:29.501772 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 05:10:29.803375 sshd[4345]: Connection closed by 139.178.89.65 port 54670 May 14 05:10:29.804058 sshd-session[4343]: pam_unix(sshd:session): session closed for user core May 14 05:10:29.812390 systemd[1]: sshd@16-139.178.70.105:22-139.178.89.65:54670.service: Deactivated successfully. May 14 05:10:29.815119 systemd[1]: session-19.scope: Deactivated successfully. May 14 05:10:29.817141 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. May 14 05:10:29.823909 systemd[1]: Started sshd@17-139.178.70.105:22-139.178.89.65:54680.service - OpenSSH per-connection server daemon (139.178.89.65:54680). May 14 05:10:29.825575 systemd-logind[1573]: Removed session 19. 
May 14 05:10:29.862190 sshd[4355]: Accepted publickey for core from 139.178.89.65 port 54680 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:29.863043 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:29.866463 systemd-logind[1573]: New session 20 of user core. May 14 05:10:29.873786 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 05:10:29.966741 sshd[4357]: Connection closed by 139.178.89.65 port 54680 May 14 05:10:29.967830 sshd-session[4355]: pam_unix(sshd:session): session closed for user core May 14 05:10:29.969951 systemd[1]: sshd@17-139.178.70.105:22-139.178.89.65:54680.service: Deactivated successfully. May 14 05:10:29.971375 systemd[1]: session-20.scope: Deactivated successfully. May 14 05:10:29.972163 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. May 14 05:10:29.973400 systemd-logind[1573]: Removed session 20. May 14 05:10:34.981147 systemd[1]: Started sshd@18-139.178.70.105:22-139.178.89.65:54686.service - OpenSSH per-connection server daemon (139.178.89.65:54686). May 14 05:10:35.010972 sshd[4374]: Accepted publickey for core from 139.178.89.65 port 54686 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:35.011827 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:35.015076 systemd-logind[1573]: New session 21 of user core. May 14 05:10:35.021752 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 05:10:35.107165 sshd[4376]: Connection closed by 139.178.89.65 port 54686 May 14 05:10:35.107793 sshd-session[4374]: pam_unix(sshd:session): session closed for user core May 14 05:10:35.109909 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. May 14 05:10:35.109996 systemd[1]: sshd@18-139.178.70.105:22-139.178.89.65:54686.service: Deactivated successfully. May 14 05:10:35.110949 systemd[1]: session-21.scope: Deactivated successfully. May 14 05:10:35.111848 systemd-logind[1573]: Removed session 21. May 14 05:10:40.116518 systemd[1]: Started sshd@19-139.178.70.105:22-139.178.89.65:39296.service - OpenSSH per-connection server daemon (139.178.89.65:39296). May 14 05:10:40.152400 sshd[4387]: Accepted publickey for core from 139.178.89.65 port 39296 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:40.153075 sshd-session[4387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:40.156840 systemd-logind[1573]: New session 22 of user core. May 14 05:10:40.161760 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 05:10:40.245597 sshd[4389]: Connection closed by 139.178.89.65 port 39296 May 14 05:10:40.245981 sshd-session[4387]: pam_unix(sshd:session): session closed for user core May 14 05:10:40.247920 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. May 14 05:10:40.248059 systemd[1]: sshd@19-139.178.70.105:22-139.178.89.65:39296.service: Deactivated successfully. May 14 05:10:40.249279 systemd[1]: session-22.scope: Deactivated successfully. May 14 05:10:40.250636 systemd-logind[1573]: Removed session 22. May 14 05:10:45.255776 systemd[1]: Started sshd@20-139.178.70.105:22-139.178.89.65:39298.service - OpenSSH per-connection server daemon (139.178.89.65:39298). 
May 14 05:10:45.290464 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 39298 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:45.291340 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:45.294155 systemd-logind[1573]: New session 23 of user core. May 14 05:10:45.301802 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 05:10:45.391742 sshd[4402]: Connection closed by 139.178.89.65 port 39298 May 14 05:10:45.392133 sshd-session[4400]: pam_unix(sshd:session): session closed for user core May 14 05:10:45.394070 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. May 14 05:10:45.395018 systemd[1]: sshd@20-139.178.70.105:22-139.178.89.65:39298.service: Deactivated successfully. May 14 05:10:45.396263 systemd[1]: session-23.scope: Deactivated successfully. May 14 05:10:45.398151 systemd-logind[1573]: Removed session 23. May 14 05:10:50.407236 systemd[1]: Started sshd@21-139.178.70.105:22-139.178.89.65:45928.service - OpenSSH per-connection server daemon (139.178.89.65:45928). May 14 05:10:50.446632 sshd[4413]: Accepted publickey for core from 139.178.89.65 port 45928 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:50.447386 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:50.450355 systemd-logind[1573]: New session 24 of user core. May 14 05:10:50.458768 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 05:10:50.564189 sshd[4415]: Connection closed by 139.178.89.65 port 45928 May 14 05:10:50.564634 sshd-session[4413]: pam_unix(sshd:session): session closed for user core May 14 05:10:50.573844 systemd[1]: sshd@21-139.178.70.105:22-139.178.89.65:45928.service: Deactivated successfully. May 14 05:10:50.575136 systemd[1]: session-24.scope: Deactivated successfully. May 14 05:10:50.577778 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit. May 14 05:10:50.579719 systemd[1]: Started sshd@22-139.178.70.105:22-139.178.89.65:45940.service - OpenSSH per-connection server daemon (139.178.89.65:45940). May 14 05:10:50.584454 systemd-logind[1573]: Removed session 24. May 14 05:10:50.634330 sshd[4427]: Accepted publickey for core from 139.178.89.65 port 45940 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:50.635434 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:50.638643 systemd-logind[1573]: New session 25 of user core. May 14 05:10:50.645755 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 05:10:51.965935 containerd[1604]: time="2025-05-14T05:10:51.964578627Z" level=info msg="StopContainer for \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" with timeout 30 (s)" May 14 05:10:51.972075 containerd[1604]: time="2025-05-14T05:10:51.972052588Z" level=info msg="Stop container \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" with signal terminated" May 14 05:10:52.016436 systemd[1]: cri-containerd-3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958.scope: Deactivated successfully. 
May 14 05:10:52.018707 containerd[1604]: time="2025-05-14T05:10:52.018648385Z" level=info msg="received exit event container_id:\"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" id:\"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" pid:3451 exited_at:{seconds:1747199452 nanos:18161402}" May 14 05:10:52.018813 containerd[1604]: time="2025-05-14T05:10:52.018800796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" id:\"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" pid:3451 exited_at:{seconds:1747199452 nanos:18161402}" May 14 05:10:52.035936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958-rootfs.mount: Deactivated successfully. May 14 05:10:52.041742 containerd[1604]: time="2025-05-14T05:10:52.041721030Z" level=info msg="StopContainer for \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" returns successfully" May 14 05:10:52.042185 containerd[1604]: time="2025-05-14T05:10:52.042170828Z" level=info msg="StopPodSandbox for \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\"" May 14 05:10:52.047553 containerd[1604]: time="2025-05-14T05:10:52.047403815Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 05:10:52.048089 containerd[1604]: time="2025-05-14T05:10:52.048066620Z" level=info msg="Container to stop \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:52.048527 containerd[1604]: time="2025-05-14T05:10:52.048255034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" id:\"7cee41cd91c4164a078734a968dd72308a19b889409d2811846095eaea4387dd\" pid:4460 exited_at:{seconds:1747199452 nanos:47832768}" May 14 05:10:52.053732 systemd[1]: cri-containerd-77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1.scope: Deactivated successfully. May 14 05:10:52.055035 containerd[1604]: time="2025-05-14T05:10:52.055014604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" id:\"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" pid:3108 exit_status:137 exited_at:{seconds:1747199452 nanos:54672009}" May 14 05:10:52.063325 containerd[1604]: time="2025-05-14T05:10:52.063293827Z" level=info msg="StopContainer for \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" with timeout 2 (s)" May 14 05:10:52.063590 containerd[1604]: time="2025-05-14T05:10:52.063578797Z" level=info msg="Stop container \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" with signal terminated" May 14 05:10:52.069201 systemd-networkd[1536]: lxc_health: Link DOWN May 14 05:10:52.069438 systemd-networkd[1536]: lxc_health: Lost carrier May 14 05:10:52.077417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1-rootfs.mount: Deactivated successfully. 
May 14 05:10:52.082351 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1-shm.mount: Deactivated successfully. May 14 05:10:52.088299 systemd[1]: cri-containerd-308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86.scope: Deactivated successfully. May 14 05:10:52.088480 systemd[1]: cri-containerd-308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86.scope: Consumed 4.785s CPU time, 214.1M memory peak, 92.2M read from disk, 13.3M written to disk. May 14 05:10:52.107385 containerd[1604]: time="2025-05-14T05:10:52.089297809Z" level=info msg="received exit event sandbox_id:\"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" exit_status:137 exited_at:{seconds:1747199452 nanos:54672009}" May 14 05:10:52.107385 containerd[1604]: time="2025-05-14T05:10:52.089624664Z" level=info msg="shim disconnected" id=77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1 namespace=k8s.io May 14 05:10:52.107385 containerd[1604]: time="2025-05-14T05:10:52.089639429Z" level=warning msg="cleaning up after shim disconnected" id=77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1 namespace=k8s.io May 14 05:10:52.107385 containerd[1604]: time="2025-05-14T05:10:52.089643802Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 05:10:52.107385 containerd[1604]: time="2025-05-14T05:10:52.090853772Z" level=info msg="received exit event container_id:\"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" id:\"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" pid:3522 exited_at:{seconds:1747199452 nanos:90729906}" May 14 05:10:52.107385 containerd[1604]: time="2025-05-14T05:10:52.091019454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" id:\"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" pid:3522 exited_at:{seconds:1747199452 nanos:90729906}" May 14 05:10:52.106886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86-rootfs.mount: Deactivated successfully. 
May 14 05:10:52.115701 containerd[1604]: time="2025-05-14T05:10:52.115664898Z" level=info msg="TearDown network for sandbox \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" successfully" May 14 05:10:52.115701 containerd[1604]: time="2025-05-14T05:10:52.115696859Z" level=info msg="StopPodSandbox for \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" returns successfully" May 14 05:10:52.140382 containerd[1604]: time="2025-05-14T05:10:52.140320705Z" level=info msg="StopContainer for \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" returns successfully" May 14 05:10:52.140938 containerd[1604]: time="2025-05-14T05:10:52.140864489Z" level=info msg="StopPodSandbox for \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\"" May 14 05:10:52.140938 containerd[1604]: time="2025-05-14T05:10:52.140899637Z" level=info msg="Container to stop \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:52.140938 containerd[1604]: time="2025-05-14T05:10:52.140906689Z" level=info msg="Container to stop \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:52.140938 containerd[1604]: time="2025-05-14T05:10:52.140911582Z" level=info msg="Container to stop \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:52.140938 containerd[1604]: time="2025-05-14T05:10:52.140916371Z" level=info msg="Container to stop \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:52.140938 containerd[1604]: time="2025-05-14T05:10:52.140920947Z" level=info msg="Container to stop \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:52.145585 systemd[1]: cri-containerd-38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2.scope: Deactivated successfully. May 14 05:10:52.151601 containerd[1604]: time="2025-05-14T05:10:52.151555124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" id:\"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" pid:3034 exit_status:137 exited_at:{seconds:1747199452 nanos:150773580}" May 14 05:10:52.164456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2-rootfs.mount: Deactivated successfully. 
May 14 05:10:52.172068 containerd[1604]: time="2025-05-14T05:10:52.172026542Z" level=info msg="received exit event sandbox_id:\"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" exit_status:137 exited_at:{seconds:1747199452 nanos:150773580}" May 14 05:10:52.172238 containerd[1604]: time="2025-05-14T05:10:52.172120461Z" level=info msg="shim disconnected" id=38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2 namespace=k8s.io May 14 05:10:52.172238 containerd[1604]: time="2025-05-14T05:10:52.172129933Z" level=warning msg="cleaning up after shim disconnected" id=38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2 namespace=k8s.io May 14 05:10:52.172238 containerd[1604]: time="2025-05-14T05:10:52.172134381Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 05:10:52.172502 containerd[1604]: time="2025-05-14T05:10:52.172428422Z" level=info msg="TearDown network for sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" successfully" May 14 05:10:52.172502 containerd[1604]: time="2025-05-14T05:10:52.172440497Z" level=info msg="StopPodSandbox for \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" returns successfully" May 14 05:10:52.177195 kubelet[2889]: I0514 05:10:52.177179 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2ljn\" (UniqueName: \"kubernetes.io/projected/4926213b-8b5a-4019-bd68-d79cd14a0813-kube-api-access-d2ljn\") pod \"4926213b-8b5a-4019-bd68-d79cd14a0813\" (UID: \"4926213b-8b5a-4019-bd68-d79cd14a0813\") " May 14 05:10:52.177686 kubelet[2889]: I0514 05:10:52.177425 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4926213b-8b5a-4019-bd68-d79cd14a0813-cilium-config-path\") pod \"4926213b-8b5a-4019-bd68-d79cd14a0813\" (UID: \"4926213b-8b5a-4019-bd68-d79cd14a0813\") " May 14 05:10:52.178966 kubelet[2889]: I0514 05:10:52.178934 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4926213b-8b5a-4019-bd68-d79cd14a0813-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4926213b-8b5a-4019-bd68-d79cd14a0813" (UID: "4926213b-8b5a-4019-bd68-d79cd14a0813"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 05:10:52.188630 kubelet[2889]: I0514 05:10:52.187916 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4926213b-8b5a-4019-bd68-d79cd14a0813-kube-api-access-d2ljn" (OuterVolumeSpecName: "kube-api-access-d2ljn") pod "4926213b-8b5a-4019-bd68-d79cd14a0813" (UID: "4926213b-8b5a-4019-bd68-d79cd14a0813"). InnerVolumeSpecName "kube-api-access-d2ljn". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 05:10:52.279096 kubelet[2889]: I0514 05:10:52.278051 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-xtables-lock\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.279096 kubelet[2889]: I0514 05:10:52.278157 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.280157 kubelet[2889]: I0514 05:10:52.279715 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-config-path\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280157 kubelet[2889]: I0514 05:10:52.279743 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-net\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280157 kubelet[2889]: I0514 05:10:52.279760 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77hvp\" (UniqueName: \"kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-kube-api-access-77hvp\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280157 kubelet[2889]: I0514 05:10:52.279772 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-cgroup\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280157 kubelet[2889]: I0514 05:10:52.279783 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-lib-modules\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280157 kubelet[2889]: I0514 05:10:52.279818 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-etc-cni-netd\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280325 kubelet[2889]: I0514 05:10:52.279851 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-bpf-maps\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280325 kubelet[2889]: I0514 05:10:52.279862 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hostproc\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280325 kubelet[2889]: I0514 05:10:52.279879 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-kernel\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280325 kubelet[2889]: I0514 05:10:52.279892 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cni-path\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: 
\"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280325 kubelet[2889]: I0514 05:10:52.279903 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-run\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280325 kubelet[2889]: I0514 05:10:52.279915 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hubble-tls\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280455 kubelet[2889]: I0514 05:10:52.279929 2889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-clustermesh-secrets\") pod \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\" (UID: \"1f72d0e9-16f8-42de-b237-cb3cf049fc9e\") " May 14 05:10:52.280455 kubelet[2889]: I0514 05:10:52.279992 2889 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d2ljn\" (UniqueName: \"kubernetes.io/projected/4926213b-8b5a-4019-bd68-d79cd14a0813-kube-api-access-d2ljn\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.280455 kubelet[2889]: I0514 05:10:52.280003 2889 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4926213b-8b5a-4019-bd68-d79cd14a0813-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.280455 kubelet[2889]: I0514 05:10:52.280010 2889 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.280548 kubelet[2889]: I0514 05:10:52.280474 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 05:10:52.280548 kubelet[2889]: I0514 05:10:52.280499 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.280548 kubelet[2889]: I0514 05:10:52.280513 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.281207 kubelet[2889]: I0514 05:10:52.281031 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hostproc" (OuterVolumeSpecName: "hostproc") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.281207 kubelet[2889]: I0514 05:10:52.281054 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.281207 kubelet[2889]: I0514 05:10:52.281067 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cni-path" (OuterVolumeSpecName: "cni-path") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.281207 kubelet[2889]: I0514 05:10:52.281077 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.282871 kubelet[2889]: I0514 05:10:52.282849 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.282923 kubelet[2889]: I0514 05:10:52.282878 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.282923 kubelet[2889]: I0514 05:10:52.282889 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:52.282975 kubelet[2889]: I0514 05:10:52.282931 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-kube-api-access-77hvp" (OuterVolumeSpecName: "kube-api-access-77hvp") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "kube-api-access-77hvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 05:10:52.283451 kubelet[2889]: I0514 05:10:52.283430 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 05:10:52.284166 kubelet[2889]: I0514 05:10:52.284150 2889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1f72d0e9-16f8-42de-b237-cb3cf049fc9e" (UID: "1f72d0e9-16f8-42de-b237-cb3cf049fc9e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 05:10:52.380290 kubelet[2889]: I0514 05:10:52.380255 2889 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-77hvp\" (UniqueName: \"kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-kube-api-access-77hvp\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380290 kubelet[2889]: I0514 05:10:52.380294 2889 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380303 2889 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380338 2889 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380349 2889 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380374 2889 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380383 2889 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380389 2889 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cni-path\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380396 2889 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380413 kubelet[2889]: I0514 05:10:52.380402 2889 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380591 kubelet[2889]: I0514 05:10:52.380408 2889 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380591 kubelet[2889]: I0514 05:10:52.380414 2889 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.380591 kubelet[2889]: I0514 05:10:52.380420 2889 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1f72d0e9-16f8-42de-b237-cb3cf049fc9e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 05:10:53.035100 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2-shm.mount: Deactivated successfully. May 14 05:10:53.035182 systemd[1]: var-lib-kubelet-pods-4926213b\x2d8b5a\x2d4019\x2dbd68\x2dd79cd14a0813-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd2ljn.mount: Deactivated successfully. May 14 05:10:53.035239 systemd[1]: var-lib-kubelet-pods-1f72d0e9\x2d16f8\x2d42de\x2db237\x2dcb3cf049fc9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77hvp.mount: Deactivated successfully. May 14 05:10:53.035290 systemd[1]: var-lib-kubelet-pods-1f72d0e9\x2d16f8\x2d42de\x2db237\x2dcb3cf049fc9e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 05:10:53.035338 systemd[1]: var-lib-kubelet-pods-1f72d0e9\x2d16f8\x2d42de\x2db237\x2dcb3cf049fc9e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 05:10:53.040445 kubelet[2889]: I0514 05:10:53.040424 2889 scope.go:117] "RemoveContainer" containerID="3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958" May 14 05:10:53.046500 systemd[1]: Removed slice kubepods-besteffort-pod4926213b_8b5a_4019_bd68_d79cd14a0813.slice - libcontainer container kubepods-besteffort-pod4926213b_8b5a_4019_bd68_d79cd14a0813.slice. May 14 05:10:53.049011 containerd[1604]: time="2025-05-14T05:10:53.048991346Z" level=info msg="RemoveContainer for \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\"" May 14 05:10:53.055786 systemd[1]: Removed slice kubepods-burstable-pod1f72d0e9_16f8_42de_b237_cb3cf049fc9e.slice - libcontainer container kubepods-burstable-pod1f72d0e9_16f8_42de_b237_cb3cf049fc9e.slice. May 14 05:10:53.055857 systemd[1]: kubepods-burstable-pod1f72d0e9_16f8_42de_b237_cb3cf049fc9e.slice: Consumed 4.836s CPU time, 215.2M memory peak, 93.3M read from disk, 13.3M written to disk. 
May 14 05:10:53.063373 containerd[1604]: time="2025-05-14T05:10:53.063342059Z" level=info msg="RemoveContainer for \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" returns successfully" May 14 05:10:53.063515 kubelet[2889]: I0514 05:10:53.063500 2889 scope.go:117] "RemoveContainer" containerID="3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958" May 14 05:10:53.063849 containerd[1604]: time="2025-05-14T05:10:53.063830835Z" level=error msg="ContainerStatus for \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\": not found" May 14 05:10:53.064968 kubelet[2889]: E0514 05:10:53.064882 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\": not found" containerID="3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958" May 14 05:10:53.066264 kubelet[2889]: I0514 05:10:53.066206 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958"} err="failed to get container status \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\": rpc error: code = NotFound desc = an error occurred when try to find container \"3368a79cb371d1250811cea2ec6e74fab311fb0fa12e1ecaf18d5c49a62a7958\": not found" May 14 05:10:53.066264 kubelet[2889]: I0514 05:10:53.066263 2889 scope.go:117] "RemoveContainer" containerID="308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86" May 14 05:10:53.067745 containerd[1604]: time="2025-05-14T05:10:53.067712494Z" level=info msg="RemoveContainer for \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\"" May 14 05:10:53.073500 containerd[1604]: time="2025-05-14T05:10:53.073457474Z" level=info msg="RemoveContainer for \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" returns successfully" May 14 05:10:53.073580 kubelet[2889]: I0514 05:10:53.073563 2889 scope.go:117] "RemoveContainer" containerID="54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb" May 14 05:10:53.074420 containerd[1604]: time="2025-05-14T05:10:53.074404353Z" level=info msg="RemoveContainer for \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\"" May 14 05:10:53.076194 containerd[1604]: time="2025-05-14T05:10:53.076178246Z" level=info msg="RemoveContainer for \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" returns successfully" May 14 05:10:53.076284 kubelet[2889]: I0514 05:10:53.076270 2889 scope.go:117] "RemoveContainer" containerID="9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532" May 14 05:10:53.077396 containerd[1604]: time="2025-05-14T05:10:53.077381812Z" level=info msg="RemoveContainer for \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\"" May 14 05:10:53.079107 containerd[1604]: time="2025-05-14T05:10:53.079093745Z" level=info msg="RemoveContainer for \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" returns successfully" May 14 05:10:53.079190 kubelet[2889]: I0514 05:10:53.079183 2889 scope.go:117] "RemoveContainer" containerID="1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae" May 14 05:10:53.079996 containerd[1604]: 
time="2025-05-14T05:10:53.079955185Z" level=info msg="RemoveContainer for \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\"" May 14 05:10:53.081134 containerd[1604]: time="2025-05-14T05:10:53.081103235Z" level=info msg="RemoveContainer for \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" returns successfully" May 14 05:10:53.081219 kubelet[2889]: I0514 05:10:53.081205 2889 scope.go:117] "RemoveContainer" containerID="389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a" May 14 05:10:53.082084 containerd[1604]: time="2025-05-14T05:10:53.082073429Z" level=info msg="RemoveContainer for \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\"" May 14 05:10:53.083237 containerd[1604]: time="2025-05-14T05:10:53.083185911Z" level=info msg="RemoveContainer for \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" returns successfully" May 14 05:10:53.083330 kubelet[2889]: I0514 05:10:53.083245 2889 scope.go:117] "RemoveContainer" containerID="308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86" May 14 05:10:53.083426 containerd[1604]: time="2025-05-14T05:10:53.083411257Z" level=error msg="ContainerStatus for \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\": not found" May 14 05:10:53.083544 kubelet[2889]: E0514 05:10:53.083524 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\": not found" containerID="308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86" May 14 05:10:53.083569 kubelet[2889]: I0514 05:10:53.083540 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86"} err="failed to get container status \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\": rpc error: code = NotFound desc = an error occurred when try to find container \"308aaaba218a213ba3e7134d4c00e7ff0ce95dbc6c345df80e732352f702dc86\": not found" May 14 05:10:53.083569 kubelet[2889]: I0514 05:10:53.083552 2889 scope.go:117] "RemoveContainer" containerID="54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb" May 14 05:10:53.083633 containerd[1604]: time="2025-05-14T05:10:53.083618325Z" level=error msg="ContainerStatus for \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\": not found" May 14 05:10:53.083708 kubelet[2889]: E0514 05:10:53.083695 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\": not found" containerID="54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb" May 14 05:10:53.083827 kubelet[2889]: I0514 05:10:53.083711 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb"} err="failed to get container status 
\"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\": rpc error: code = NotFound desc = an error occurred when try to find container \"54c3beccfd917d0494f7120eda31a7f4a10232be975c7b485e2570498a9e6beb\": not found" May 14 05:10:53.083827 kubelet[2889]: I0514 05:10:53.083720 2889 scope.go:117] "RemoveContainer" containerID="9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532" May 14 05:10:53.083869 containerd[1604]: time="2025-05-14T05:10:53.083799408Z" level=error msg="ContainerStatus for \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\": not found" May 14 05:10:53.083937 kubelet[2889]: E0514 05:10:53.083924 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\": not found" containerID="9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532" May 14 05:10:53.083981 kubelet[2889]: I0514 05:10:53.083936 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532"} err="failed to get container status \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b9b0503cb889b9c3a87b9fd8bbb956cc0d26fdb275563a20034d00b77672532\": not found" May 14 05:10:53.083981 kubelet[2889]: I0514 05:10:53.083942 2889 scope.go:117] "RemoveContainer" containerID="1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae" May 14 05:10:53.084082 containerd[1604]: time="2025-05-14T05:10:53.084069887Z" level=error msg="ContainerStatus for \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\": not found" May 14 05:10:53.084227 kubelet[2889]: E0514 05:10:53.084213 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\": not found" containerID="1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae" May 14 05:10:53.084250 kubelet[2889]: I0514 05:10:53.084225 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae"} err="failed to get container status \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"1567c8e36a925e7aea079f47bddf715dc75aa44eecb9451fc8187577824ee1ae\": not found" May 14 05:10:53.084250 kubelet[2889]: I0514 05:10:53.084232 2889 scope.go:117] "RemoveContainer" containerID="389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a" May 14 05:10:53.084364 containerd[1604]: time="2025-05-14T05:10:53.084313641Z" level=error msg="ContainerStatus for \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\": not found" May 14 05:10:53.084412 kubelet[2889]: E0514 05:10:53.084372 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\": not found" containerID="389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a" May 14 05:10:53.084412 kubelet[2889]: I0514 05:10:53.084381 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a"} err="failed to get container status \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\": rpc error: code = NotFound desc = an error occurred when try to find container \"389179e3bad736224aeef9f0a8418ddba754243f64c4dfa5d177e9e6f003321a\": not found" May 14 05:10:53.820637 kubelet[2889]: I0514 05:10:53.820024 2889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f72d0e9-16f8-42de-b237-cb3cf049fc9e" path="/var/lib/kubelet/pods/1f72d0e9-16f8-42de-b237-cb3cf049fc9e/volumes" May 14 05:10:53.820637 kubelet[2889]: I0514 05:10:53.820461 2889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4926213b-8b5a-4019-bd68-d79cd14a0813" path="/var/lib/kubelet/pods/4926213b-8b5a-4019-bd68-d79cd14a0813/volumes" May 14 05:10:53.937059 sshd[4429]: Connection closed by 139.178.89.65 port 45940 May 14 05:10:53.937702 sshd-session[4427]: pam_unix(sshd:session): session closed for user core May 14 05:10:53.942772 systemd[1]: sshd@22-139.178.70.105:22-139.178.89.65:45940.service: Deactivated successfully. May 14 05:10:53.943651 systemd[1]: session-25.scope: Deactivated successfully. May 14 05:10:53.944415 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit. May 14 05:10:53.945762 systemd[1]: Started sshd@23-139.178.70.105:22-139.178.89.65:45954.service - OpenSSH per-connection server daemon (139.178.89.65:45954). May 14 05:10:53.946472 systemd-logind[1573]: Removed session 25. May 14 05:10:53.996398 sshd[4583]: Accepted publickey for core from 139.178.89.65 port 45954 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:53.997392 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:54.001225 systemd-logind[1573]: New session 26 of user core. May 14 05:10:54.004777 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 05:10:54.527754 sshd[4585]: Connection closed by 139.178.89.65 port 45954 May 14 05:10:54.528768 sshd-session[4583]: pam_unix(sshd:session): session closed for user core May 14 05:10:54.539139 systemd[1]: sshd@23-139.178.70.105:22-139.178.89.65:45954.service: Deactivated successfully. May 14 05:10:54.542129 systemd[1]: session-26.scope: Deactivated successfully. May 14 05:10:54.543734 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit. May 14 05:10:54.547080 systemd[1]: Started sshd@24-139.178.70.105:22-139.178.89.65:45970.service - OpenSSH per-connection server daemon (139.178.89.65:45970). May 14 05:10:54.547918 systemd-logind[1573]: Removed session 26. 
May 14 05:10:54.569580 kubelet[2889]: E0514 05:10:54.569292 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f72d0e9-16f8-42de-b237-cb3cf049fc9e" containerName="apply-sysctl-overwrites" May 14 05:10:54.569580 kubelet[2889]: E0514 05:10:54.569309 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f72d0e9-16f8-42de-b237-cb3cf049fc9e" containerName="mount-bpf-fs" May 14 05:10:54.569580 kubelet[2889]: E0514 05:10:54.569313 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f72d0e9-16f8-42de-b237-cb3cf049fc9e" containerName="clean-cilium-state" May 14 05:10:54.569580 kubelet[2889]: E0514 05:10:54.569316 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f72d0e9-16f8-42de-b237-cb3cf049fc9e" containerName="cilium-agent" May 14 05:10:54.569580 kubelet[2889]: E0514 05:10:54.569319 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1f72d0e9-16f8-42de-b237-cb3cf049fc9e" containerName="mount-cgroup" May 14 05:10:54.569580 kubelet[2889]: E0514 05:10:54.569323 2889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4926213b-8b5a-4019-bd68-d79cd14a0813" containerName="cilium-operator" May 14 05:10:54.569580 kubelet[2889]: I0514 05:10:54.569359 2889 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f72d0e9-16f8-42de-b237-cb3cf049fc9e" containerName="cilium-agent" May 14 05:10:54.569580 kubelet[2889]: I0514 05:10:54.569364 2889 memory_manager.go:354] "RemoveStaleState removing state" podUID="4926213b-8b5a-4019-bd68-d79cd14a0813" containerName="cilium-operator" May 14 05:10:54.581300 systemd[1]: Created slice kubepods-burstable-pod8296fa7d_598d_487e_a2ae_680d5897cadb.slice - libcontainer container kubepods-burstable-pod8296fa7d_598d_487e_a2ae_680d5897cadb.slice. May 14 05:10:54.585266 sshd[4595]: Accepted publickey for core from 139.178.89.65 port 45970 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk May 14 05:10:54.587012 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:54.591322 systemd-logind[1573]: New session 27 of user core. 
May 14 05:10:54.594568 kubelet[2889]: I0514 05:10:54.593580 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-cilium-run\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.594568 kubelet[2889]: I0514 05:10:54.593604 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8296fa7d-598d-487e-a2ae-680d5897cadb-clustermesh-secrets\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.594568 kubelet[2889]: I0514 05:10:54.593616 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-host-proc-sys-kernel\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.594568 kubelet[2889]: I0514 05:10:54.593624 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-bpf-maps\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.594568 kubelet[2889]: I0514 05:10:54.593632 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-xtables-lock\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.594568 kubelet[2889]: I0514 05:10:54.593642 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-hostproc\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.595395 kubelet[2889]: I0514 05:10:54.593659 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-cilium-cgroup\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.595395 kubelet[2889]: I0514 05:10:54.594874 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-lib-modules\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.595395 kubelet[2889]: I0514 05:10:54.594904 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-host-proc-sys-net\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.595395 kubelet[2889]: I0514 05:10:54.594917 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-cni-path\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.595395 kubelet[2889]: I0514 05:10:54.594928 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8296fa7d-598d-487e-a2ae-680d5897cadb-hubble-tls\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.595395 kubelet[2889]: I0514 05:10:54.594938 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8296fa7d-598d-487e-a2ae-680d5897cadb-cilium-config-path\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.596397 kubelet[2889]: I0514 05:10:54.594946 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt46q\" (UniqueName: \"kubernetes.io/projected/8296fa7d-598d-487e-a2ae-680d5897cadb-kube-api-access-lt46q\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.596397 kubelet[2889]: I0514 05:10:54.594956 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8296fa7d-598d-487e-a2ae-680d5897cadb-etc-cni-netd\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.596397 kubelet[2889]: I0514 05:10:54.594966 2889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8296fa7d-598d-487e-a2ae-680d5897cadb-cilium-ipsec-secrets\") pod \"cilium-n6wdr\" (UID: \"8296fa7d-598d-487e-a2ae-680d5897cadb\") " pod="kube-system/cilium-n6wdr"
May 14 05:10:54.595893 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 05:10:54.649032 sshd[4597]: Connection closed by 139.178.89.65 port 45970
May 14 05:10:54.649831 sshd-session[4595]: pam_unix(sshd:session): session closed for user core
May 14 05:10:54.656142 systemd[1]: sshd@24-139.178.70.105:22-139.178.89.65:45970.service: Deactivated successfully.
May 14 05:10:54.657409 systemd[1]: session-27.scope: Deactivated successfully.
May 14 05:10:54.657986 systemd-logind[1573]: Session 27 logged out. Waiting for processes to exit.
May 14 05:10:54.659569 systemd[1]: Started sshd@25-139.178.70.105:22-139.178.89.65:45972.service - OpenSSH per-connection server daemon (139.178.89.65:45972).
May 14 05:10:54.660452 systemd-logind[1573]: Removed session 27.
May 14 05:10:54.693216 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 45972 ssh2: RSA SHA256:sWEHzEuAS00wVyRssVrF9wwUJZsltkVMESa8qG2astk
May 14 05:10:54.693903 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:10:54.697216 systemd-logind[1573]: New session 28 of user core.
May 14 05:10:54.699758 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 05:10:54.885924 containerd[1604]: time="2025-05-14T05:10:54.885887935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6wdr,Uid:8296fa7d-598d-487e-a2ae-680d5897cadb,Namespace:kube-system,Attempt:0,}"
May 14 05:10:54.912356 containerd[1604]: time="2025-05-14T05:10:54.912320240Z" level=info msg="connecting to shim 332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6" address="unix:///run/containerd/s/647a520ffa53c14f41d7badeddf1e804279ac7d1dc983af739c857cdf71ab729" namespace=k8s.io protocol=ttrpc version=3
May 14 05:10:54.932848 systemd[1]: Started cri-containerd-332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6.scope - libcontainer container 332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6.
May 14 05:10:54.959989 containerd[1604]: time="2025-05-14T05:10:54.959946832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6wdr,Uid:8296fa7d-598d-487e-a2ae-680d5897cadb,Namespace:kube-system,Attempt:0,} returns sandbox id \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\""
May 14 05:10:54.965311 containerd[1604]: time="2025-05-14T05:10:54.965255252Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 05:10:54.976312 containerd[1604]: time="2025-05-14T05:10:54.976279253Z" level=info msg="Container 4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:54.994044 containerd[1604]: time="2025-05-14T05:10:54.994006798Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8\""
May 14 05:10:54.994693 containerd[1604]: time="2025-05-14T05:10:54.994522776Z" level=info msg="StartContainer for \"4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8\""
May 14 05:10:54.996048 containerd[1604]: time="2025-05-14T05:10:54.996003949Z" level=info msg="connecting to shim 4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8" address="unix:///run/containerd/s/647a520ffa53c14f41d7badeddf1e804279ac7d1dc983af739c857cdf71ab729" protocol=ttrpc version=3
May 14 05:10:55.009799 systemd[1]: Started cri-containerd-4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8.scope - libcontainer container 4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8.
May 14 05:10:55.036307 containerd[1604]: time="2025-05-14T05:10:55.036286855Z" level=info msg="StartContainer for \"4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8\" returns successfully"
May 14 05:10:55.090699 systemd[1]: cri-containerd-4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8.scope: Deactivated successfully.
May 14 05:10:55.091147 systemd[1]: cri-containerd-4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8.scope: Consumed 15ms CPU time, 9.6M memory peak, 3.2M read from disk.
May 14 05:10:55.092781 containerd[1604]: time="2025-05-14T05:10:55.092761461Z" level=info msg="received exit event container_id:\"4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8\" id:\"4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8\" pid:4674 exited_at:{seconds:1747199455 nanos:92458110}"
May 14 05:10:55.093011 containerd[1604]: time="2025-05-14T05:10:55.092769394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8\" id:\"4810ff069b30c9902c96773abf9ade8994affa2243ce762c46e964aa85eec8c8\" pid:4674 exited_at:{seconds:1747199455 nanos:92458110}"
May 14 05:10:55.896876 containerd[1604]: time="2025-05-14T05:10:55.896834603Z" level=info msg="StopPodSandbox for \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\""
May 14 05:10:55.897194 containerd[1604]: time="2025-05-14T05:10:55.896962782Z" level=info msg="TearDown network for sandbox \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" successfully"
May 14 05:10:55.897194 containerd[1604]: time="2025-05-14T05:10:55.896975014Z" level=info msg="StopPodSandbox for \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" returns successfully"
May 14 05:10:55.897356 containerd[1604]: time="2025-05-14T05:10:55.897331261Z" level=info msg="RemovePodSandbox for \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\""
May 14 05:10:55.897407 containerd[1604]: time="2025-05-14T05:10:55.897361901Z" level=info msg="Forcibly stopping sandbox \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\""
May 14 05:10:55.897444 containerd[1604]: time="2025-05-14T05:10:55.897431094Z" level=info msg="TearDown network for sandbox \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" successfully"
May 14 05:10:55.906041 containerd[1604]: time="2025-05-14T05:10:55.905521702Z" level=info msg="Ensure that sandbox 77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1 in task-service has been cleanup successfully"
May 14 05:10:55.908079 containerd[1604]: time="2025-05-14T05:10:55.908058658Z" level=info msg="RemovePodSandbox \"77ad0fa95aa9551e61800463166c3b30f25eeb2385a7a4cdcb2de7f987ee9ba1\" returns successfully"
May 14 05:10:55.908475 containerd[1604]: time="2025-05-14T05:10:55.908440389Z" level=info msg="StopPodSandbox for \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\""
May 14 05:10:55.908559 containerd[1604]: time="2025-05-14T05:10:55.908541972Z" level=info msg="TearDown network for sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" successfully"
May 14 05:10:55.908594 containerd[1604]: time="2025-05-14T05:10:55.908558098Z" level=info msg="StopPodSandbox for \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" returns successfully"
May 14 05:10:55.917039 containerd[1604]: time="2025-05-14T05:10:55.916867083Z" level=info msg="RemovePodSandbox for \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\""
May 14 05:10:55.917039 containerd[1604]: time="2025-05-14T05:10:55.916894762Z" level=info msg="Forcibly stopping sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\""
May 14 05:10:55.917039 containerd[1604]: time="2025-05-14T05:10:55.916959577Z" level=info msg="TearDown network for sandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" successfully"
May 14 05:10:55.917687 containerd[1604]: time="2025-05-14T05:10:55.917657264Z" level=info msg="Ensure that sandbox 38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2 in task-service has been cleanup successfully"
May 14 05:10:55.918274 kubelet[2889]: E0514 05:10:55.918240 2889 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 05:10:55.918635 containerd[1604]: time="2025-05-14T05:10:55.918615817Z" level=info msg="RemovePodSandbox \"38d45b6b7e3a8908316e3e19f3efa59ba996ee95cdb48bff53c76a0dcba764d2\" returns successfully"
May 14 05:10:56.063529 containerd[1604]: time="2025-05-14T05:10:56.063412506Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 05:10:56.071262 containerd[1604]: time="2025-05-14T05:10:56.070756380Z" level=info msg="Container 66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:56.088673 containerd[1604]: time="2025-05-14T05:10:56.088641660Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b\""
May 14 05:10:56.089096 containerd[1604]: time="2025-05-14T05:10:56.089076852Z" level=info msg="StartContainer for \"66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b\""
May 14 05:10:56.089823 containerd[1604]: time="2025-05-14T05:10:56.089807208Z" level=info msg="connecting to shim 66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b" address="unix:///run/containerd/s/647a520ffa53c14f41d7badeddf1e804279ac7d1dc983af739c857cdf71ab729" protocol=ttrpc version=3
May 14 05:10:56.105772 systemd[1]: Started cri-containerd-66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b.scope - libcontainer container 66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b.
May 14 05:10:56.137292 containerd[1604]: time="2025-05-14T05:10:56.137265581Z" level=info msg="StartContainer for \"66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b\" returns successfully"
May 14 05:10:56.175162 systemd[1]: cri-containerd-66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b.scope: Deactivated successfully.
May 14 05:10:56.175500 containerd[1604]: time="2025-05-14T05:10:56.175247283Z" level=info msg="received exit event container_id:\"66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b\" id:\"66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b\" pid:4721 exited_at:{seconds:1747199456 nanos:175093968}"
May 14 05:10:56.175500 containerd[1604]: time="2025-05-14T05:10:56.175438251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b\" id:\"66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b\" pid:4721 exited_at:{seconds:1747199456 nanos:175093968}"
May 14 05:10:56.175784 systemd[1]: cri-containerd-66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b.scope: Consumed 12ms CPU time, 7.5M memory peak, 2.2M read from disk.
May 14 05:10:56.191045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66c556a1b49cdd93f081c46c307f908a86e80325b5e3604bf28726cbc25a492b-rootfs.mount: Deactivated successfully.
May 14 05:10:57.066343 containerd[1604]: time="2025-05-14T05:10:57.066311316Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 05:10:57.076607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473148979.mount: Deactivated successfully.
May 14 05:10:57.078690 containerd[1604]: time="2025-05-14T05:10:57.077976549Z" level=info msg="Container 753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:57.085085 containerd[1604]: time="2025-05-14T05:10:57.085047049Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5\""
May 14 05:10:57.088069 containerd[1604]: time="2025-05-14T05:10:57.088038810Z" level=info msg="StartContainer for \"753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5\""
May 14 05:10:57.089872 containerd[1604]: time="2025-05-14T05:10:57.089850519Z" level=info msg="connecting to shim 753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5" address="unix:///run/containerd/s/647a520ffa53c14f41d7badeddf1e804279ac7d1dc983af739c857cdf71ab729" protocol=ttrpc version=3
May 14 05:10:57.103796 systemd[1]: Started cri-containerd-753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5.scope - libcontainer container 753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5.
May 14 05:10:57.132105 containerd[1604]: time="2025-05-14T05:10:57.132058829Z" level=info msg="StartContainer for \"753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5\" returns successfully"
May 14 05:10:57.157965 systemd[1]: cri-containerd-753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5.scope: Deactivated successfully.
May 14 05:10:57.158505 systemd[1]: cri-containerd-753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5.scope: Consumed 14ms CPU time, 5.6M memory peak, 1.1M read from disk.
May 14 05:10:57.158988 containerd[1604]: time="2025-05-14T05:10:57.158968241Z" level=info msg="received exit event container_id:\"753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5\" id:\"753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5\" pid:4763 exited_at:{seconds:1747199457 nanos:158776511}"
May 14 05:10:57.159265 containerd[1604]: time="2025-05-14T05:10:57.159156624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5\" id:\"753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5\" pid:4763 exited_at:{seconds:1747199457 nanos:158776511}"
May 14 05:10:57.171108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-753c3db8e0d7a259a8ef18943d662ea78feacf99e0905ca7f5b78f5878f338c5-rootfs.mount: Deactivated successfully.
May 14 05:10:58.072435 containerd[1604]: time="2025-05-14T05:10:58.072365291Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 05:10:58.097448 containerd[1604]: time="2025-05-14T05:10:58.097375108Z" level=info msg="Container 751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:58.104210 containerd[1604]: time="2025-05-14T05:10:58.104184585Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66\""
May 14 05:10:58.104876 containerd[1604]: time="2025-05-14T05:10:58.104635086Z" level=info msg="StartContainer for \"751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66\""
May 14 05:10:58.105873 containerd[1604]: time="2025-05-14T05:10:58.105770031Z" level=info msg="connecting to shim 751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66" address="unix:///run/containerd/s/647a520ffa53c14f41d7badeddf1e804279ac7d1dc983af739c857cdf71ab729" protocol=ttrpc version=3
May 14 05:10:58.123765 systemd[1]: Started cri-containerd-751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66.scope - libcontainer container 751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66.
May 14 05:10:58.138412 systemd[1]: cri-containerd-751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66.scope: Deactivated successfully.
May 14 05:10:58.139196 containerd[1604]: time="2025-05-14T05:10:58.139053951Z" level=info msg="TaskExit event in podsandbox handler container_id:\"751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66\" id:\"751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66\" pid:4801 exited_at:{seconds:1747199458 nanos:138923315}"
May 14 05:10:58.143466 containerd[1604]: time="2025-05-14T05:10:58.143453330Z" level=info msg="received exit event container_id:\"751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66\" id:\"751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66\" pid:4801 exited_at:{seconds:1747199458 nanos:138923315}"
May 14 05:10:58.147637 containerd[1604]: time="2025-05-14T05:10:58.147586941Z" level=info msg="StartContainer for \"751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66\" returns successfully"
May 14 05:10:58.155670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-751c2b321e26814a657164905e7e34b361f0edf0d6662ea3f91a96bfcc594d66-rootfs.mount: Deactivated successfully.
May 14 05:10:58.473277 kubelet[2889]: I0514 05:10:58.472696 2889 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T05:10:58Z","lastTransitionTime":"2025-05-14T05:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 05:10:59.074493 containerd[1604]: time="2025-05-14T05:10:59.074305553Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 05:10:59.093694 containerd[1604]: time="2025-05-14T05:10:59.093584879Z" level=info msg="Container b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:59.098375 containerd[1604]: time="2025-05-14T05:10:59.098347171Z" level=info msg="CreateContainer within sandbox \"332e813f1be61a1fa6ffae185ff85210f58533c908ced480be80669776399ef6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\""
May 14 05:10:59.098796 containerd[1604]: time="2025-05-14T05:10:59.098752620Z" level=info msg="StartContainer for \"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\""
May 14 05:10:59.099475 containerd[1604]: time="2025-05-14T05:10:59.099455563Z" level=info msg="connecting to shim b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4" address="unix:///run/containerd/s/647a520ffa53c14f41d7badeddf1e804279ac7d1dc983af739c857cdf71ab729" protocol=ttrpc version=3
May 14 05:10:59.119843 systemd[1]: Started cri-containerd-b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4.scope - libcontainer container b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4.
May 14 05:10:59.143604 containerd[1604]: time="2025-05-14T05:10:59.143550558Z" level=info msg="StartContainer for \"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\" returns successfully"
May 14 05:10:59.257782 containerd[1604]: time="2025-05-14T05:10:59.257694616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\" id:\"9de5e8cbb646e7748e453c621a8396c53d92228ee4710838e6b78481e39c59b0\" pid:4868 exited_at:{seconds:1747199459 nanos:257471672}"
May 14 05:10:59.904700 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 14 05:11:01.082052 containerd[1604]: time="2025-05-14T05:11:01.082025462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\" id:\"e2dc84ea040b2636cda9fd4e2d0f870a94a2f4a2ba759d4d163e5dbafb7cdcf5\" pid:4944 exit_status:1 exited_at:{seconds:1747199461 nanos:81351318}"
May 14 05:11:02.605058 systemd-networkd[1536]: lxc_health: Link UP
May 14 05:11:02.613813 systemd-networkd[1536]: lxc_health: Gained carrier
May 14 05:11:02.897172 kubelet[2889]: I0514 05:11:02.896929 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n6wdr" podStartSLOduration=8.896890008 podStartE2EDuration="8.896890008s" podCreationTimestamp="2025-05-14 05:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:11:00.088352168 +0000 UTC m=+124.352470024" watchObservedRunningTime="2025-05-14 05:11:02.896890008 +0000 UTC m=+127.161007858"
May 14 05:11:03.168900 containerd[1604]: time="2025-05-14T05:11:03.168804937Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\" id:\"cae3c1f9607680d842cc06876a0ecfbe8d4384d990c28b80f74d3ae53035da7c\" pid:5387 exited_at:{seconds:1747199463 nanos:168523372}"
May 14 05:11:03.170940 kubelet[2889]: E0514 05:11:03.170919 2889 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40446->127.0.0.1:37239: write tcp 127.0.0.1:40446->127.0.0.1:37239: write: broken pipe
May 14 05:11:04.167783 systemd-networkd[1536]: lxc_health: Gained IPv6LL
May 14 05:11:05.272278 containerd[1604]: time="2025-05-14T05:11:05.272246631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\" id:\"a3ac7e896ff689f02de64255a0eb2dbbbf42f7932a2d7e66814776e6c156fe09\" pid:5428 exited_at:{seconds:1747199465 nanos:271789919}"
May 14 05:11:05.275257 kubelet[2889]: E0514 05:11:05.275081 2889 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43452->127.0.0.1:37239: write tcp 127.0.0.1:43452->127.0.0.1:37239: write: broken pipe
May 14 05:11:07.340643 containerd[1604]: time="2025-05-14T05:11:07.340597343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\" id:\"096ddff579321d54b5329530f8c0a7c3c0d5167e670997dadd08a35f1b3cfb06\" pid:5460 exited_at:{seconds:1747199467 nanos:340215054}"
May 14 05:11:09.410818 containerd[1604]: time="2025-05-14T05:11:09.410788929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8d441ae038c6881bf78b70781f6d2fd4434d06e98d3723d4e07eccef567f7d4\" id:\"8022d847c65e0da35554d2f216af628e7892afc22b6dcedbd07fdf43851ca407\" pid:5485 exited_at:{seconds:1747199469 nanos:410457700}"
May 14 05:11:09.413887 sshd[4610]: Connection closed by 139.178.89.65 port 45972
May 14 05:11:09.414464 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
May 14 05:11:09.428103 systemd[1]: sshd@25-139.178.70.105:22-139.178.89.65:45972.service: Deactivated successfully.
May 14 05:11:09.429353 systemd[1]: session-28.scope: Deactivated successfully.
May 14 05:11:09.429907 systemd-logind[1573]: Session 28 logged out. Waiting for processes to exit.
May 14 05:11:09.430819 systemd-logind[1573]: Removed session 28.