Jul 2 08:14:32.659816 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 08:14:32.659831 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 08:14:32.659838 kernel: Disabled fast string operations Jul 2 08:14:32.659842 kernel: BIOS-provided physical RAM map: Jul 2 08:14:32.659846 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jul 2 08:14:32.659850 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jul 2 08:14:32.659856 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jul 2 08:14:32.659860 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jul 2 08:14:32.659864 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jul 2 08:14:32.659868 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jul 2 08:14:32.659873 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jul 2 08:14:32.659877 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jul 2 08:14:32.659881 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jul 2 08:14:32.659885 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 2 08:14:32.659891 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jul 2 08:14:32.659896 kernel: NX (Execute Disable) protection: active Jul 2 08:14:32.659900 kernel: SMBIOS 2.7 present. Jul 2 08:14:32.659905 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 2 08:14:32.659909 kernel: vmware: hypercall mode: 0x00 Jul 2 08:14:32.659914 kernel: Hypervisor detected: VMware Jul 2 08:14:32.659919 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 2 08:14:32.659924 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 2 08:14:32.659928 kernel: vmware: using clock offset of 2614045954 ns Jul 2 08:14:32.659933 kernel: tsc: Detected 3408.000 MHz processor Jul 2 08:14:32.659938 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 08:14:32.659943 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 08:14:32.659947 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 2 08:14:32.659952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 08:14:32.659957 kernel: total RAM covered: 3072M Jul 2 08:14:32.659962 kernel: Found optimal setting for mtrr clean up Jul 2 08:14:32.659967 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 2 08:14:32.659972 kernel: Using GB pages for direct mapping Jul 2 08:14:32.659976 kernel: ACPI: Early table checksum verification disabled Jul 2 08:14:32.659981 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 2 08:14:32.659986 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 2 08:14:32.659990 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 2 08:14:32.659995 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 2 08:14:32.659999 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 2 08:14:32.660004 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 2 08:14:32.660010 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 2 08:14:32.660016 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jul 2 08:14:32.660021 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 2 08:14:32.660026 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 2 08:14:32.660032 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 2 08:14:32.660037 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 2 08:14:32.660042 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 2 08:14:32.660048 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 2 08:14:32.660053 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 2 08:14:32.660058 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 2 08:14:32.660063 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 2 08:14:32.660068 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 2 08:14:32.660073 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 2 08:14:32.660078 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 2 08:14:32.660083 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 2 08:14:32.660088 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 2 08:14:32.660093 kernel: system APIC only can use physical flat Jul 2 08:14:32.660098 kernel: Setting APIC routing to physical flat. 
Jul 2 08:14:32.660103 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 08:14:32.660108 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 2 08:14:32.660113 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 2 08:14:32.660118 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 2 08:14:32.660122 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 2 08:14:32.660128 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 2 08:14:32.660133 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 2 08:14:32.660138 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 2 08:14:32.660143 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 2 08:14:32.660148 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 2 08:14:32.660153 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 2 08:14:32.660158 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 2 08:14:32.660163 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 2 08:14:32.660168 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 2 08:14:32.660173 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 2 08:14:32.660178 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 2 08:14:32.660184 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 2 08:14:32.660188 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 2 08:14:32.660193 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 2 08:14:32.660198 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 2 08:14:32.660203 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 2 08:14:32.660208 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 2 08:14:32.660213 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 2 08:14:32.660218 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 2 08:14:32.660223 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 2 08:14:32.660228 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 2 08:14:32.660233 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 2 08:14:32.660238 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 2 08:14:32.660243 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 2 08:14:32.660248 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 2 08:14:32.660253 kernel: SRAT: PXM 0 -> APIC 
0x3c -> Node 0 Jul 2 08:14:32.660258 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 2 08:14:32.660263 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 2 08:14:32.660267 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 2 08:14:32.660272 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 2 08:14:32.660278 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jul 2 08:14:32.660283 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 2 08:14:32.660288 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 2 08:14:32.660293 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 2 08:14:32.660298 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 2 08:14:32.660303 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 2 08:14:32.660308 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 2 08:14:32.660313 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 2 08:14:32.660352 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 2 08:14:32.660358 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 2 08:14:32.660365 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 2 08:14:32.660370 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 2 08:14:32.660375 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 2 08:14:32.660379 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 2 08:14:32.660384 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 2 08:14:32.660389 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 2 08:14:32.660394 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 2 08:14:32.660399 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 2 08:14:32.660404 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jul 2 08:14:32.660409 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jul 2 08:14:32.660414 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jul 2 08:14:32.660419 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jul 2 08:14:32.660424 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jul 2 08:14:32.660429 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jul 2 08:14:32.660434 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jul 2 08:14:32.660439 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jul 2 08:14:32.660448 kernel: 
SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jul 2 08:14:32.660454 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jul 2 08:14:32.660459 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jul 2 08:14:32.660464 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jul 2 08:14:32.660469 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jul 2 08:14:32.660475 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jul 2 08:14:32.660481 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jul 2 08:14:32.660486 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jul 2 08:14:32.660491 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jul 2 08:14:32.660496 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jul 2 08:14:32.660501 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jul 2 08:14:32.660506 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jul 2 08:14:32.660512 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jul 2 08:14:32.660518 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jul 2 08:14:32.660523 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jul 2 08:14:32.660528 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jul 2 08:14:32.660533 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jul 2 08:14:32.660538 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jul 2 08:14:32.660544 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jul 2 08:14:32.660549 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jul 2 08:14:32.660554 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jul 2 08:14:32.660560 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jul 2 08:14:32.660566 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jul 2 08:14:32.660571 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jul 2 08:14:32.660576 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jul 2 08:14:32.660582 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jul 2 08:14:32.660587 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jul 2 08:14:32.660592 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jul 2 08:14:32.660597 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jul 2 08:14:32.660602 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jul 2 08:14:32.660608 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jul 2 
08:14:32.660614 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jul 2 08:14:32.660619 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jul 2 08:14:32.660624 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jul 2 08:14:32.660629 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jul 2 08:14:32.660635 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jul 2 08:14:32.660640 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jul 2 08:14:32.660645 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jul 2 08:14:32.660650 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jul 2 08:14:32.660656 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jul 2 08:14:32.660661 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jul 2 08:14:32.660667 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jul 2 08:14:32.660672 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jul 2 08:14:32.660678 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jul 2 08:14:32.660683 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jul 2 08:14:32.660688 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jul 2 08:14:32.660693 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jul 2 08:14:32.660698 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jul 2 08:14:32.660704 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jul 2 08:14:32.660709 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jul 2 08:14:32.660714 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jul 2 08:14:32.660720 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jul 2 08:14:32.660726 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jul 2 08:14:32.660731 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jul 2 08:14:32.660736 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jul 2 08:14:32.660742 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jul 2 08:14:32.660747 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jul 2 08:14:32.660752 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jul 2 08:14:32.660757 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jul 2 08:14:32.660762 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jul 2 08:14:32.660768 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jul 2 08:14:32.660774 kernel: SRAT: PXM 0 -> APIC 0xf4 
-> Node 0 Jul 2 08:14:32.660779 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jul 2 08:14:32.660784 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jul 2 08:14:32.660790 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jul 2 08:14:32.660795 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jul 2 08:14:32.660800 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jul 2 08:14:32.660806 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 08:14:32.660811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 2 08:14:32.660816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 2 08:14:32.660822 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jul 2 08:14:32.660828 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jul 2 08:14:32.660834 kernel: Zone ranges: Jul 2 08:14:32.660839 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 08:14:32.660844 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 2 08:14:32.660850 kernel: Normal empty Jul 2 08:14:32.660855 kernel: Movable zone start for each node Jul 2 08:14:32.660860 kernel: Early memory node ranges Jul 2 08:14:32.660866 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 2 08:14:32.660871 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 2 08:14:32.660878 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 2 08:14:32.660883 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 2 08:14:32.660888 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 08:14:32.660894 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 2 08:14:32.660899 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 2 08:14:32.660905 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 2 08:14:32.660910 kernel: system APIC only can use physical flat Jul 2 08:14:32.660915 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 2 08:14:32.660920 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 2 08:14:32.660927 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 2 08:14:32.660932 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 2 08:14:32.660937 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 2 08:14:32.660943 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 2 08:14:32.660948 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 2 08:14:32.660953 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 2 08:14:32.660958 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 2 08:14:32.660964 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 2 08:14:32.660969 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 2 08:14:32.660975 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 2 08:14:32.660981 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 2 08:14:32.660986 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 2 08:14:32.660992 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 2 08:14:32.660997 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 2 08:14:32.661002 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 2 08:14:32.661007 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 2 08:14:32.661013 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 2 08:14:32.661018 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 2 08:14:32.661023 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 2 08:14:32.661029 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 2 08:14:32.661035 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 2 08:14:32.661040 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 2 08:14:32.661045 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 2 08:14:32.661050 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] 
high edge lint[0x1]) Jul 2 08:14:32.661056 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 2 08:14:32.661061 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 2 08:14:32.661066 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 2 08:14:32.661072 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 2 08:14:32.661077 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 2 08:14:32.661083 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 2 08:14:32.661088 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 2 08:14:32.661094 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 2 08:14:32.661099 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 2 08:14:32.661104 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 2 08:14:32.661110 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 2 08:14:32.661115 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 2 08:14:32.661120 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 2 08:14:32.661126 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 2 08:14:32.661132 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 2 08:14:32.661137 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 2 08:14:32.661142 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 2 08:14:32.661147 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 2 08:14:32.661153 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 2 08:14:32.661158 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 2 08:14:32.661163 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 2 08:14:32.661169 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 2 08:14:32.661174 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 2 08:14:32.661179 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 2 
08:14:32.661186 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 2 08:14:32.661191 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 2 08:14:32.661196 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 2 08:14:32.668703 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 2 08:14:32.668728 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 2 08:14:32.668734 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 2 08:14:32.668740 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 2 08:14:32.668745 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 2 08:14:32.668750 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 2 08:14:32.668759 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 2 08:14:32.668764 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 2 08:14:32.668769 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 2 08:14:32.668775 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 2 08:14:32.668780 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 2 08:14:32.668785 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 2 08:14:32.668790 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 2 08:14:32.668796 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 2 08:14:32.668801 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 2 08:14:32.668806 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 2 08:14:32.668812 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 2 08:14:32.668818 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 2 08:14:32.668823 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 2 08:14:32.668828 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 2 08:14:32.668834 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 2 08:14:32.668839 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 2 08:14:32.668845 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 2 08:14:32.668850 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 2 08:14:32.668855 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 2 08:14:32.668862 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 2 08:14:32.668867 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 2 08:14:32.668872 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 2 08:14:32.668877 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 2 08:14:32.668883 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 2 08:14:32.668888 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 2 08:14:32.668893 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 2 08:14:32.668898 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 2 08:14:32.668904 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 2 08:14:32.668910 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 2 08:14:32.668915 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 2 08:14:32.668920 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 2 08:14:32.668926 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 2 08:14:32.668931 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 2 08:14:32.668936 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 2 08:14:32.668941 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 2 08:14:32.668947 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 2 08:14:32.668952 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 2 08:14:32.668957 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 2 08:14:32.668963 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 2 08:14:32.668968 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high 
edge lint[0x1]) Jul 2 08:14:32.668974 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 2 08:14:32.668979 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 2 08:14:32.668984 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 2 08:14:32.668989 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 2 08:14:32.668995 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 2 08:14:32.669000 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 2 08:14:32.669005 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 2 08:14:32.669012 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 2 08:14:32.669017 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 2 08:14:32.669023 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 2 08:14:32.669028 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 2 08:14:32.669033 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 2 08:14:32.669039 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 2 08:14:32.669044 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 2 08:14:32.669049 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 2 08:14:32.669055 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 2 08:14:32.669060 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 2 08:14:32.669066 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 2 08:14:32.669072 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 2 08:14:32.669077 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 2 08:14:32.669082 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 2 08:14:32.669088 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 2 08:14:32.669093 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 2 08:14:32.669098 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 2 
08:14:32.669103 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 2 08:14:32.669109 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 2 08:14:32.669115 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 2 08:14:32.669120 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 2 08:14:32.669126 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 2 08:14:32.669131 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 2 08:14:32.669136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 2 08:14:32.669142 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 08:14:32.669148 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 2 08:14:32.669153 kernel: TSC deadline timer available Jul 2 08:14:32.669159 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 2 08:14:32.669164 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 2 08:14:32.669171 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 2 08:14:32.669177 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 08:14:32.669183 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Jul 2 08:14:32.669188 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Jul 2 08:14:32.669194 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Jul 2 08:14:32.669199 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 2 08:14:32.669205 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 2 08:14:32.669210 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 2 08:14:32.669216 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 2 08:14:32.669221 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 2 08:14:32.669226 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 2 08:14:32.669232 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 
054 055 Jul 2 08:14:32.669244 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 2 08:14:32.669251 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 2 08:14:32.669256 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 2 08:14:32.669262 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 2 08:14:32.669267 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 2 08:14:32.669274 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 2 08:14:32.669280 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 2 08:14:32.669285 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 2 08:14:32.669291 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 2 08:14:32.669296 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jul 2 08:14:32.669302 kernel: Policy zone: DMA32 Jul 2 08:14:32.669308 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 08:14:32.669315 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 2 08:14:32.669348 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 2 08:14:32.669354 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 2 08:14:32.669360 kernel: printk: log_buf_len min size: 262144 bytes Jul 2 08:14:32.669366 kernel: printk: log_buf_len: 1048576 bytes Jul 2 08:14:32.669371 kernel: printk: early log buf free: 239728(91%) Jul 2 08:14:32.669377 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:14:32.669383 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 08:14:32.669388 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:14:32.669394 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 155976K reserved, 0K cma-reserved) Jul 2 08:14:32.669401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 2 08:14:32.669407 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 08:14:32.669414 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 08:14:32.669421 kernel: rcu: Hierarchical RCU implementation. Jul 2 08:14:32.669427 kernel: rcu: RCU event tracing is enabled. Jul 2 08:14:32.669433 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 2 08:14:32.669440 kernel: Rude variant of Tasks RCU enabled. Jul 2 08:14:32.669446 kernel: Tracing variant of Tasks RCU enabled. Jul 2 08:14:32.669452 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 08:14:32.669458 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 2 08:14:32.669463 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 2 08:14:32.669469 kernel: random: crng init done Jul 2 08:14:32.669475 kernel: Console: colour VGA+ 80x25 Jul 2 08:14:32.669480 kernel: printk: console [tty0] enabled Jul 2 08:14:32.669486 kernel: printk: console [ttyS0] enabled Jul 2 08:14:32.669493 kernel: ACPI: Core revision 20210730 Jul 2 08:14:32.669499 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 2 08:14:32.669504 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 08:14:32.669510 kernel: x2apic enabled Jul 2 08:14:32.669516 kernel: Switched APIC routing to physical x2apic. Jul 2 08:14:32.669522 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 08:14:32.669528 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 2 08:14:32.669534 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Jul 2 08:14:32.669540 kernel: Disabled fast string operations Jul 2 08:14:32.669547 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 08:14:32.669553 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 08:14:32.669559 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 08:14:32.669565 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 08:14:32.669571 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 2 08:14:32.669577 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 2 08:14:32.669583 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 2 08:14:32.669589 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 08:14:32.669595 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 2 08:14:32.669601 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 2 08:14:32.669607 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 08:14:32.669613 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 08:14:32.669619 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 08:14:32.669625 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 2 08:14:32.669630 kernel: GDS: Unknown: Dependent on hypervisor status Jul 2 08:14:32.669636 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 08:14:32.669642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 08:14:32.669648 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 08:14:32.669654 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 08:14:32.669660 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 2 08:14:32.669666 kernel: Freeing SMP alternatives memory: 32K Jul 2 08:14:32.669672 kernel: pid_max: default: 131072 minimum: 1024 Jul 2 08:14:32.669677 kernel: LSM: Security Framework initializing Jul 2 08:14:32.669683 kernel: SELinux: Initializing. 
Jul 2 08:14:32.669689 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 08:14:32.669694 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 08:14:32.669701 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 2 08:14:32.669707 kernel: Performance Events: Skylake events, core PMU driver. Jul 2 08:14:32.669713 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 2 08:14:32.669718 kernel: core: CPUID marked event: 'instructions' unavailable Jul 2 08:14:32.669724 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 2 08:14:32.669729 kernel: core: CPUID marked event: 'cache references' unavailable Jul 2 08:14:32.669735 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 2 08:14:32.669740 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 2 08:14:32.669746 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 2 08:14:32.669753 kernel: ... version: 1 Jul 2 08:14:32.669758 kernel: ... bit width: 48 Jul 2 08:14:32.669764 kernel: ... generic registers: 4 Jul 2 08:14:32.669769 kernel: ... value mask: 0000ffffffffffff Jul 2 08:14:32.669775 kernel: ... max period: 000000007fffffff Jul 2 08:14:32.669781 kernel: ... fixed-purpose events: 0 Jul 2 08:14:32.669786 kernel: ... event mask: 000000000000000f Jul 2 08:14:32.669792 kernel: signal: max sigframe size: 1776 Jul 2 08:14:32.669798 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:14:32.669805 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 08:14:32.669812 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:14:32.669818 kernel: x86: Booting SMP configuration: Jul 2 08:14:32.669823 kernel: .... 
node #0, CPUs: #1 Jul 2 08:14:32.669829 kernel: Disabled fast string operations Jul 2 08:14:32.669835 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 2 08:14:32.669840 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 2 08:14:32.669846 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:14:32.669852 kernel: smpboot: Max logical packages: 128 Jul 2 08:14:32.669858 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 2 08:14:32.669864 kernel: devtmpfs: initialized Jul 2 08:14:32.669870 kernel: x86/mm: Memory block size: 128MB Jul 2 08:14:32.669876 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 2 08:14:32.669882 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:14:32.669888 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 2 08:14:32.669893 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:14:32.669899 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:14:32.669905 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:14:32.669911 kernel: audit: type=2000 audit(1719908071.062:1): state=initialized audit_enabled=0 res=1 Jul 2 08:14:32.669917 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:14:32.669923 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 08:14:32.669929 kernel: cpuidle: using governor menu Jul 2 08:14:32.669934 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 2 08:14:32.669940 kernel: ACPI: bus type PCI registered Jul 2 08:14:32.669946 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:14:32.669951 kernel: dca service started, version 1.12.1 Jul 2 08:14:32.669957 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 2 08:14:32.669963 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Jul 2 
08:14:32.669970 kernel: PCI: Using configuration type 1 for base access Jul 2 08:14:32.669976 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 08:14:32.669982 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:14:32.669987 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:14:32.669993 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:14:32.669999 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:14:32.670004 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:14:32.670010 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:14:32.670015 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 08:14:32.670022 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 08:14:32.670028 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 08:14:32.670034 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:14:32.670039 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 2 08:14:32.670045 kernel: ACPI: Interpreter enabled Jul 2 08:14:32.670051 kernel: ACPI: PM: (supports S0 S1 S5) Jul 2 08:14:32.670057 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 08:14:32.670063 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 08:14:32.670068 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 2 08:14:32.670075 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 2 08:14:32.670167 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 08:14:32.670218 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 2 08:14:32.670264 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 2 08:14:32.670273 kernel: PCI host bridge to bus 0000:00 Jul 2 08:14:32.670331 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 08:14:32.670383 kernel: pci_bus 
0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Jul 2 08:14:32.670425 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Jul 2 08:14:32.670466 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Jul 2 08:14:32.670506 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Jul 2 08:14:32.670546 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 2 08:14:32.670587 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 08:14:32.670627 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 2 08:14:32.670669 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 2 08:14:32.670726 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 2 08:14:32.670779 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 2 08:14:32.670833 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 2 08:14:32.670886 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 2 08:14:32.670934 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 2 08:14:32.670987 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 08:14:32.671041 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 08:14:32.671087 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 08:14:32.671133 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 08:14:32.671184 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 2 08:14:32.671232 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 2 08:14:32.671278 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 2 08:14:32.673417 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 2 08:14:32.673495 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 2 08:14:32.673546 kernel: pci 0000:00:07.7: reg 0x14: [mem 
0xfebfe000-0xfebfffff 64bit] Jul 2 08:14:32.673601 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 2 08:14:32.673649 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 2 08:14:32.673696 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 2 08:14:32.673742 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 2 08:14:32.673792 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 2 08:14:32.673838 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 08:14:32.673890 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 2 08:14:32.673945 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.673993 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674068 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674122 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674173 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674229 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674289 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674353 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674410 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674459 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674514 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674562 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674613 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674662 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674712 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674760 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 2 
08:14:32.674813 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674868 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.674923 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.674969 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.675020 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.675069 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.675119 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.675167 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.675218 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.675264 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.682840 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.682940 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683014 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683066 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683119 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683167 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683218 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683266 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683372 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683428 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683480 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683527 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683577 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683625 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold 
Jul 2 08:14:32.683677 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683724 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683778 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683827 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683877 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.683924 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.683981 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684033 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684085 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684131 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684181 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684227 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684283 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684354 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684407 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684455 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684514 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684562 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684615 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684670 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684725 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684773 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.684827 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:14:32.684881 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot 
D3cold Jul 2 08:14:32.684931 kernel: pci_bus 0000:01: extended config space not accessible Jul 2 08:14:32.684983 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 08:14:32.685032 kernel: pci_bus 0000:02: extended config space not accessible Jul 2 08:14:32.685041 kernel: acpiphp: Slot [32] registered Jul 2 08:14:32.685047 kernel: acpiphp: Slot [33] registered Jul 2 08:14:32.685053 kernel: acpiphp: Slot [34] registered Jul 2 08:14:32.685058 kernel: acpiphp: Slot [35] registered Jul 2 08:14:32.685064 kernel: acpiphp: Slot [36] registered Jul 2 08:14:32.685070 kernel: acpiphp: Slot [37] registered Jul 2 08:14:32.685077 kernel: acpiphp: Slot [38] registered Jul 2 08:14:32.685082 kernel: acpiphp: Slot [39] registered Jul 2 08:14:32.685088 kernel: acpiphp: Slot [40] registered Jul 2 08:14:32.685094 kernel: acpiphp: Slot [41] registered Jul 2 08:14:32.685099 kernel: acpiphp: Slot [42] registered Jul 2 08:14:32.685105 kernel: acpiphp: Slot [43] registered Jul 2 08:14:32.685111 kernel: acpiphp: Slot [44] registered Jul 2 08:14:32.685117 kernel: acpiphp: Slot [45] registered Jul 2 08:14:32.685122 kernel: acpiphp: Slot [46] registered Jul 2 08:14:32.685130 kernel: acpiphp: Slot [47] registered Jul 2 08:14:32.685136 kernel: acpiphp: Slot [48] registered Jul 2 08:14:32.685142 kernel: acpiphp: Slot [49] registered Jul 2 08:14:32.685148 kernel: acpiphp: Slot [50] registered Jul 2 08:14:32.685153 kernel: acpiphp: Slot [51] registered Jul 2 08:14:32.685159 kernel: acpiphp: Slot [52] registered Jul 2 08:14:32.685165 kernel: acpiphp: Slot [53] registered Jul 2 08:14:32.685170 kernel: acpiphp: Slot [54] registered Jul 2 08:14:32.685176 kernel: acpiphp: Slot [55] registered Jul 2 08:14:32.685181 kernel: acpiphp: Slot [56] registered Jul 2 08:14:32.685188 kernel: acpiphp: Slot [57] registered Jul 2 08:14:32.685194 kernel: acpiphp: Slot [58] registered Jul 2 08:14:32.685199 kernel: acpiphp: Slot [59] registered Jul 2 08:14:32.685205 kernel: acpiphp: Slot [60] registered Jul 2 
08:14:32.685210 kernel: acpiphp: Slot [61] registered Jul 2 08:14:32.685216 kernel: acpiphp: Slot [62] registered Jul 2 08:14:32.685222 kernel: acpiphp: Slot [63] registered Jul 2 08:14:32.685271 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 2 08:14:32.690105 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 2 08:14:32.690528 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 2 08:14:32.690588 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 2 08:14:32.690643 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 2 08:14:32.690694 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Jul 2 08:14:32.690751 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Jul 2 08:14:32.690822 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Jul 2 08:14:32.690872 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Jul 2 08:14:32.690927 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 2 08:14:32.690989 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 2 08:14:32.691037 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 2 08:14:32.691092 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 2 08:14:32.691142 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 2 08:14:32.691190 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 2 08:14:32.691238 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 2 08:14:32.691289 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 2 08:14:32.691346 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 2 08:14:32.691395 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 2 08:14:32.691442 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 2 08:14:32.691488 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 2 08:14:32.691546 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 2 08:14:32.691608 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 2 08:14:32.691656 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 2 08:14:32.691705 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 2 08:14:32.691754 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 2 08:14:32.691801 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 2 08:14:32.691848 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 2 08:14:32.691895 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 2 08:14:32.691953 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 2 08:14:32.692010 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 2 08:14:32.692057 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 2 08:14:32.692109 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 2 08:14:32.692156 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 2 08:14:32.692201 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 2 08:14:32.692250 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 2 08:14:32.692299 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 2 08:14:32.692352 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 2 08:14:32.692401 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 2 08:14:32.692446 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 2 08:14:32.692493 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Jul 2 08:14:32.692540 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 2 08:14:32.692591 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 2 08:14:32.692649 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 2 08:14:32.692706 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 2 08:14:32.692756 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 2 08:14:32.692804 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 2 08:14:32.692851 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 2 08:14:32.692899 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 2 08:14:32.692945 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 2 08:14:32.693008 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 2 08:14:32.693059 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:14:32.693107 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 2 08:14:32.693155 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 2 08:14:32.693202 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 2 08:14:32.693255 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 2 08:14:32.693312 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 2 08:14:32.693372 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 2 08:14:32.693420 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 2 08:14:32.693469 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 2 08:14:32.693517 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 2 08:14:32.693564 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 2 08:14:32.693610 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 2 08:14:32.693655 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 2 08:14:32.693704 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 2 08:14:32.694005 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 2 08:14:32.694057 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 2 08:14:32.694112 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 2 08:14:32.694180 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 2 08:14:32.694476 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 2 08:14:32.694532 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 2 08:14:32.694581 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 2 08:14:32.694634 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 2 08:14:32.694684 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 2 08:14:32.694735 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 2 08:14:32.694787 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Jul 2 08:14:32.694843 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 2 08:14:32.694889 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 2 08:14:32.694935 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 2 08:14:32.694982 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 2 08:14:32.695030 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 2 08:14:32.695076 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 2 08:14:32.695122 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 2 08:14:32.695172 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 2 08:14:32.695218 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 2 08:14:32.695264 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 2 08:14:32.695310 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 2 08:14:32.695368 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 2 08:14:32.695415 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 2 08:14:32.695462 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 2 08:14:32.695510 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 2 08:14:32.695558 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 2 08:14:32.695605 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 2 08:14:32.695651 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 2 08:14:32.695704 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 2 08:14:32.695758 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 2 08:14:32.695806 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 2 08:14:32.695855 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 2 08:14:32.695904 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Jul 2 08:14:32.695951 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 2 08:14:32.696005 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 2 08:14:32.696051 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 2 08:14:32.696098 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 2 08:14:32.696155 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 2 08:14:32.696210 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 2 08:14:32.696256 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 2 08:14:32.696309 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 2 08:14:32.696727 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 2 08:14:32.696779 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 2 08:14:32.696828 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 2 08:14:32.696877 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 2 08:14:32.696924 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 2 08:14:32.696969 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 2 08:14:32.697015 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 2 08:14:32.697071 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 2 08:14:32.697132 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 2 08:14:32.697180 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 2 08:14:32.697227 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 2 08:14:32.697274 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 2 08:14:32.697326 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 2 08:14:32.697381 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 2 08:14:32.697427 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff] Jul 2 08:14:32.697493 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 2 08:14:32.697566 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 2 08:14:32.697617 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 2 08:14:32.697662 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 2 08:14:32.697710 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 2 08:14:32.697755 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 2 08:14:32.697801 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 2 08:14:32.697848 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 2 08:14:32.697897 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 2 08:14:32.697942 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 2 08:14:32.697951 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 2 08:14:32.697957 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 2 08:14:32.697963 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 2 08:14:32.697969 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 08:14:32.697975 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 2 08:14:32.697981 kernel: iommu: Default domain type: Translated Jul 2 08:14:32.697988 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 08:14:32.698034 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 2 08:14:32.698081 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 08:14:32.698125 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 2 08:14:32.698134 kernel: vgaarb: loaded Jul 2 08:14:32.698140 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 08:14:32.698146 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 08:14:32.698152 kernel: PTP clock support registered Jul 2 08:14:32.698158 kernel: PCI: Using ACPI for IRQ routing Jul 2 08:14:32.698166 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 08:14:32.698172 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 2 08:14:32.698178 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 2 08:14:32.698183 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 2 08:14:32.698189 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 2 08:14:32.698195 kernel: clocksource: Switched to clocksource tsc-early Jul 2 08:14:32.698201 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:14:32.698207 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:14:32.698212 kernel: pnp: PnP ACPI init Jul 2 08:14:32.698266 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 2 08:14:32.698310 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 2 08:14:32.698367 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 2 08:14:32.698413 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 2 08:14:32.698462 kernel: pnp 00:06: [dma 2] Jul 2 08:14:32.698519 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 2 08:14:32.698568 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 2 08:14:32.698610 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 2 08:14:32.698618 kernel: pnp: PnP ACPI: found 8 devices Jul 2 08:14:32.698624 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 08:14:32.698630 kernel: NET: Registered PF_INET protocol family Jul 2 08:14:32.698636 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:14:32.698642 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 08:14:32.698647 
kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:14:32.698653 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 08:14:32.698660 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 2 08:14:32.698666 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 08:14:32.698672 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 08:14:32.698678 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 08:14:32.698683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:14:32.698689 kernel: NET: Registered PF_XDP protocol family Jul 2 08:14:32.698738 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 2 08:14:32.698787 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 2 08:14:32.698838 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 2 08:14:32.698891 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 2 08:14:32.698948 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 2 08:14:32.699009 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 2 08:14:32.699059 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 2 08:14:32.699108 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 2 08:14:32.699159 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 2 08:14:32.699207 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 2 08:14:32.699256 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 2 08:14:32.699304 kernel: pci 
0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 2 08:14:32.699361 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 2 08:14:32.699412 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 2 08:14:32.699461 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 2 08:14:32.699509 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 2 08:14:32.699556 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 2 08:14:32.699603 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 2 08:14:32.699651 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 2 08:14:32.699701 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 2 08:14:32.699748 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 2 08:14:32.699796 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 2 08:14:32.699845 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 2 08:14:32.699895 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 2 08:14:32.699949 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 2 08:14:32.700019 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.700077 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.700410 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.700463 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.700532 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.700829 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 2 
08:14:32.700884 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.700934 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701224 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.701281 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701338 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.701387 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701435 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.701491 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701540 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.701587 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701636 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.701687 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701735 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.701783 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701836 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.701901 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.701949 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702008 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702059 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702107 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702218 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702269 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702317 kernel: pci 0000:00:17.7: BAR 13: no space for 
[io size 0x1000] Jul 2 08:14:32.702380 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702428 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702475 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702522 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702571 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702619 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702666 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702712 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702758 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702805 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702852 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.702901 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.702964 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.703041 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.703095 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.703140 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.703187 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.703232 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.703279 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.703717 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.703787 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.703848 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.703897 kernel: pci 
0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.703946 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.703993 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704041 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704089 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704154 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704202 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704249 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704295 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704352 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704405 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704464 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704512 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704559 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704607 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704654 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704700 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704748 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704798 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704850 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.704905 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.704954 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.705001 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 2 
08:14:32.705050 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.705399 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.705452 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.705519 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.705839 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.705896 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.706189 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.706239 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.706290 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 2 08:14:32.706348 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:14:32.706400 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 08:14:32.706471 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 2 08:14:32.706523 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 2 08:14:32.706572 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 2 08:14:32.706618 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 2 08:14:32.706670 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 2 08:14:32.706718 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 2 08:14:32.706765 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 2 08:14:32.706812 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 2 08:14:32.706859 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 2 08:14:32.706907 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 2 08:14:32.706953 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 2 08:14:32.707001 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 2 08:14:32.707048 kernel: 
pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 2 08:14:32.707096 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 2 08:14:32.707143 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 2 08:14:32.707192 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 2 08:14:32.707251 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 2 08:14:32.707299 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 2 08:14:32.707358 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 2 08:14:32.707407 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 2 08:14:32.707458 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 2 08:14:32.707505 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 2 08:14:32.707551 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 2 08:14:32.707599 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 2 08:14:32.707645 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 2 08:14:32.707691 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 2 08:14:32.707742 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 2 08:14:32.707788 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 2 08:14:32.707835 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 2 08:14:32.707882 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 2 08:14:32.707929 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 2 08:14:32.707977 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 2 08:14:32.708030 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 2 08:14:32.708079 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 2 08:14:32.708127 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 2 08:14:32.708176 
kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 2 08:14:32.708222 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 2 08:14:32.708272 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 2 08:14:32.708325 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 2 08:14:32.708374 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 2 08:14:32.708420 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 2 08:14:32.708468 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 2 08:14:32.708515 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 2 08:14:32.708561 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 2 08:14:32.708607 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 2 08:14:32.708656 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 2 08:14:32.708703 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 2 08:14:32.708749 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 2 08:14:32.708816 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 2 08:14:32.708871 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 2 08:14:32.708918 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 2 08:14:32.708965 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 2 08:14:32.709017 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 2 08:14:32.709351 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 2 08:14:32.709415 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 2 08:14:32.709715 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 2 08:14:32.709786 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 2 08:14:32.709838 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 2 08:14:32.709886 
kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 2 08:14:32.709933 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 2 08:14:32.709982 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 2 08:14:32.710029 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 2 08:14:32.710075 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 2 08:14:32.710122 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 2 08:14:32.710173 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 2 08:14:32.710242 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 2 08:14:32.710582 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 2 08:14:32.710875 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 2 08:14:32.710931 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 2 08:14:32.710980 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 2 08:14:32.711029 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 2 08:14:32.711101 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 2 08:14:32.711480 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 2 08:14:32.711849 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 2 08:14:32.711908 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 2 08:14:32.711959 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 2 08:14:32.712294 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 2 08:14:32.712385 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 2 08:14:32.712438 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 2 08:14:32.712486 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 2 08:14:32.712545 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 2 
08:14:32.712595 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 2 08:14:32.712660 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 2 08:14:32.712721 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 2 08:14:32.712771 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 2 08:14:32.712817 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 2 08:14:32.712863 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 2 08:14:32.712911 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 2 08:14:32.712957 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 2 08:14:32.713020 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 2 08:14:32.713067 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 2 08:14:32.713374 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 2 08:14:32.713431 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 2 08:14:32.713479 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 2 08:14:32.713532 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 2 08:14:32.713582 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 2 08:14:32.713629 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 2 08:14:32.713685 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 2 08:14:32.713756 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 2 08:14:32.713804 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 2 08:14:32.713871 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 2 08:14:32.713925 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 2 08:14:32.713974 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 2 08:14:32.714021 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 2 
08:14:32.714070 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 2 08:14:32.714117 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 2 08:14:32.714163 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 2 08:14:32.714211 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 2 08:14:32.714257 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 2 08:14:32.714303 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 2 08:14:32.714366 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 2 08:14:32.714417 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 2 08:14:32.714463 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 2 08:14:32.714509 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 2 08:14:32.714550 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Jul 2 08:14:32.714591 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Jul 2 08:14:32.714631 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Jul 2 08:14:32.714671 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Jul 2 08:14:32.714718 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Jul 2 08:14:32.714768 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Jul 2 08:14:32.714809 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Jul 2 08:14:32.714854 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 2 08:14:32.714897 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 2 08:14:32.714939 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 2 08:14:32.714981 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 2 08:14:32.715022 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Jul 2 08:14:32.715067 kernel: pci_bus 
0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Jul 2 08:14:32.715109 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Jul 2 08:14:32.715150 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Jul 2 08:14:32.715193 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Jul 2 08:14:32.715234 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Jul 2 08:14:32.715276 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Jul 2 08:14:32.715348 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 2 08:14:32.715395 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 2 08:14:32.715438 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 2 08:14:32.715485 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 2 08:14:32.715528 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 2 08:14:32.715569 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 2 08:14:32.715618 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 2 08:14:32.715662 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 2 08:14:32.715707 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 2 08:14:32.715753 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 2 08:14:32.715796 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 2 08:14:32.715845 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 2 08:14:32.715896 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 2 08:14:32.715946 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 2 08:14:32.715997 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 2 08:14:32.716044 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 2 08:14:32.716088 kernel: pci_bus 0000:09: resource 2 [mem 
0xe6400000-0xe64fffff 64bit pref] Jul 2 08:14:32.716135 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 2 08:14:32.716184 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 2 08:14:32.716248 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 2 08:14:32.716295 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 2 08:14:32.716351 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 2 08:14:32.716400 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 2 08:14:32.716443 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 2 08:14:32.716487 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 2 08:14:32.716533 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 2 08:14:32.716577 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 2 08:14:32.716623 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 2 08:14:32.716670 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 2 08:14:32.716713 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 2 08:14:32.716760 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 2 08:14:32.716816 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 2 08:14:32.716866 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 2 08:14:32.716914 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 2 08:14:32.716972 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 2 08:14:32.717018 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 2 08:14:32.717064 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 2 08:14:32.717107 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 2 08:14:32.717154 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 2 
08:14:32.717200 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 2 08:14:32.717242 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 2 08:14:32.717306 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 2 08:14:32.717640 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 2 08:14:32.717688 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 2 08:14:32.717737 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 2 08:14:32.717782 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 2 08:14:32.717848 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 2 08:14:32.718127 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 2 08:14:32.718178 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 2 08:14:32.718228 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 2 08:14:32.718510 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 2 08:14:32.718563 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 2 08:14:32.718908 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 2 08:14:32.718966 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 2 08:14:32.719018 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 2 08:14:32.719067 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 2 08:14:32.719222 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 2 08:14:32.719276 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 2 08:14:32.719384 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 2 08:14:32.719432 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 2 08:14:32.719479 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 2 08:14:32.719522 kernel: pci_bus 0000:1c: 
resource 1 [mem 0xfce00000-0xfcefffff] Jul 2 08:14:32.719565 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 2 08:14:32.719611 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 2 08:14:32.719655 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 2 08:14:32.719707 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 2 08:14:32.719751 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 2 08:14:32.719799 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 2 08:14:32.719981 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 2 08:14:32.720247 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 2 08:14:32.720301 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 2 08:14:32.720459 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 2 08:14:32.720505 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 2 08:14:32.720552 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 2 08:14:32.720595 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 2 08:14:32.720648 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 08:14:32.720657 kernel: PCI: CLS 32 bytes, default 64 Jul 2 08:14:32.720664 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 08:14:32.720673 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 2 08:14:32.720679 kernel: clocksource: Switched to clocksource tsc Jul 2 08:14:32.720685 kernel: Initialise system trusted keyrings Jul 2 08:14:32.720692 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 08:14:32.720699 kernel: Key type asymmetric registered Jul 2 08:14:32.720704 kernel: Asymmetric key parser 'x509' registered Jul 2 08:14:32.720710 kernel: 
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 08:14:32.720717 kernel: io scheduler mq-deadline registered Jul 2 08:14:32.720723 kernel: io scheduler kyber registered Jul 2 08:14:32.720730 kernel: io scheduler bfq registered Jul 2 08:14:32.720779 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 2 08:14:32.720828 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.720996 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 2 08:14:32.721047 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.721423 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 2 08:14:32.721480 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.721534 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 2 08:14:32.721586 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.721979 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 2 08:14:32.722042 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.722095 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 2 08:14:32.722436 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.722496 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 2 08:14:32.722547 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- 
LLActRep+ Jul 2 08:14:32.722713 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 2 08:14:32.722769 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.722819 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 2 08:14:32.723172 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.723228 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 2 08:14:32.723277 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.723351 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 2 08:14:32.723402 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.723450 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 2 08:14:32.723497 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.723548 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 2 08:14:32.723961 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724023 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 2 08:14:32.724072 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724121 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 2 08:14:32.724171 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ 
IbPresDis- LLActRep+ Jul 2 08:14:32.724219 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 2 08:14:32.724267 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724315 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 2 08:14:32.724377 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724426 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 2 08:14:32.724475 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724523 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 2 08:14:32.724570 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724617 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 2 08:14:32.724664 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724712 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 2 08:14:32.724759 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724810 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 2 08:14:32.724857 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724905 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 2 08:14:32.724951 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- 
NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.724999 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 2 08:14:32.725048 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725096 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 2 08:14:32.725143 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725192 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 2 08:14:32.725238 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725286 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 2 08:14:32.725346 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725395 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 2 08:14:32.725442 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725489 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 2 08:14:32.725536 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725586 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 2 08:14:32.725650 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725701 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 2 08:14:32.725749 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725809 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 2 08:14:32.725858 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:14:32.725869 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 08:14:32.725876 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:14:32.725882 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 08:14:32.725889 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 2 08:14:32.725895 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 08:14:32.725901 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 08:14:32.725951 kernel: rtc_cmos 00:01: registered as rtc0 Jul 2 08:14:32.726023 kernel: rtc_cmos 00:01: setting system clock to 2024-07-02T08:14:32 UTC (1719908072) Jul 2 08:14:32.726305 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 2 08:14:32.726316 kernel: fail to initialize ptp_kvm Jul 2 08:14:32.726335 kernel: intel_pstate: CPU model not supported Jul 2 08:14:32.726341 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:14:32.726347 kernel: Segment Routing with IPv6 Jul 2 08:14:32.726353 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:14:32.726360 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:14:32.726366 kernel: Key type dns_resolver registered Jul 2 08:14:32.726374 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 08:14:32.726381 kernel: IPI shorthand broadcast: enabled Jul 2 08:14:32.726387 kernel: sched_clock: Marking stable (875002872, 223330704)->(1163995688, -65662112) Jul 2 08:14:32.726613 kernel: registered taskstats version 1 Jul 2 08:14:32.726623 kernel: Loading compiled-in X.509 certificates Jul 2 08:14:32.726630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 08:14:32.726636 kernel: Key type .fscrypt registered Jul 2 08:14:32.726642 kernel: Key type fscrypt-provisioning registered Jul 2 08:14:32.726648 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 08:14:32.726657 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:14:32.726664 kernel: ima: No architecture policies found Jul 2 08:14:32.726670 kernel: clk: Disabling unused clocks Jul 2 08:14:32.726676 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 08:14:32.726682 kernel: Write protecting the kernel read-only data: 28672k Jul 2 08:14:32.726688 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 08:14:32.726694 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 08:14:32.726701 kernel: Run /init as init process Jul 2 08:14:32.726707 kernel: with arguments: Jul 2 08:14:32.726714 kernel: /init Jul 2 08:14:32.726720 kernel: with environment: Jul 2 08:14:32.726726 kernel: HOME=/ Jul 2 08:14:32.726732 kernel: TERM=linux Jul 2 08:14:32.726738 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:14:32.726747 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 08:14:32.726755 systemd[1]: Detected virtualization vmware. Jul 2 08:14:32.726761 systemd[1]: Detected architecture x86-64. Jul 2 08:14:32.726768 systemd[1]: Running in initrd. Jul 2 08:14:32.726774 systemd[1]: No hostname configured, using default hostname. Jul 2 08:14:32.726780 systemd[1]: Hostname set to . Jul 2 08:14:32.726787 systemd[1]: Initializing machine ID from random generator. Jul 2 08:14:32.726793 systemd[1]: Queued start job for default target initrd.target. 
Jul 2 08:14:32.726799 systemd[1]: Started systemd-ask-password-console.path. Jul 2 08:14:32.726805 systemd[1]: Reached target cryptsetup.target. Jul 2 08:14:32.726812 systemd[1]: Reached target paths.target. Jul 2 08:14:32.726837 systemd[1]: Reached target slices.target. Jul 2 08:14:32.726845 systemd[1]: Reached target swap.target. Jul 2 08:14:32.726851 systemd[1]: Reached target timers.target. Jul 2 08:14:32.726858 systemd[1]: Listening on iscsid.socket. Jul 2 08:14:32.726865 systemd[1]: Listening on iscsiuio.socket. Jul 2 08:14:32.726871 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 08:14:32.727100 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 08:14:32.727109 systemd[1]: Listening on systemd-journald.socket. Jul 2 08:14:32.727118 systemd[1]: Listening on systemd-networkd.socket. Jul 2 08:14:32.727124 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 08:14:32.727131 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 08:14:32.727137 systemd[1]: Reached target sockets.target. Jul 2 08:14:32.727143 systemd[1]: Starting kmod-static-nodes.service... Jul 2 08:14:32.727150 systemd[1]: Finished network-cleanup.service. Jul 2 08:14:32.727156 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:14:32.727163 systemd[1]: Starting systemd-journald.service... Jul 2 08:14:32.727169 systemd[1]: Starting systemd-modules-load.service... Jul 2 08:14:32.727176 systemd[1]: Starting systemd-resolved.service... Jul 2 08:14:32.727182 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 08:14:32.727188 systemd[1]: Finished kmod-static-nodes.service. Jul 2 08:14:32.727195 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:14:32.727201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 08:14:32.727207 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 2 08:14:32.727214 kernel: audit: type=1130 audit(1719908072.660:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.727221 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 08:14:32.727228 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 08:14:32.727235 kernel: audit: type=1130 audit(1719908072.663:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.727241 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 08:14:32.727247 kernel: audit: type=1130 audit(1719908072.675:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.727254 systemd[1]: Starting dracut-cmdline.service... Jul 2 08:14:32.727260 systemd[1]: Started systemd-resolved.service. Jul 2 08:14:32.727363 systemd[1]: Reached target nss-lookup.target. Jul 2 08:14:32.727374 kernel: audit: type=1130 audit(1719908072.685:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.727381 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:14:32.727387 kernel: Bridge firewalling registered Jul 2 08:14:32.727397 systemd-journald[217]: Journal started Jul 2 08:14:32.727432 systemd-journald[217]: Runtime Journal (/run/log/journal/df0dc00567a54008838412f91658849b) is 4.8M, max 38.8M, 34.0M free. Jul 2 08:14:32.730726 systemd[1]: Started systemd-journald.service. 
Jul 2 08:14:32.730743 kernel: audit: type=1130 audit(1719908072.726:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.664048 systemd-modules-load[218]: Inserted module 'overlay' Jul 2 08:14:32.681128 systemd-resolved[219]: Positive Trust Anchors: Jul 2 08:14:32.681133 systemd-resolved[219]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:14:32.681152 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:14:32.732891 kernel: SCSI subsystem initialized Jul 2 08:14:32.685423 systemd-resolved[219]: Defaulting to hostname 'linux'. Jul 2 08:14:32.710344 systemd-modules-load[218]: Inserted module 'br_netfilter' Jul 2 08:14:32.733456 dracut-cmdline[232]: dracut-dracut-053 Jul 2 08:14:32.733456 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 08:14:32.733456 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 08:14:32.741464 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:14:32.741495 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:14:32.743015 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 08:14:32.748404 systemd-modules-load[218]: Inserted module 'dm_multipath' Jul 2 08:14:32.748842 systemd[1]: Finished systemd-modules-load.service. 
Jul 2 08:14:32.755302 kernel: audit: type=1130 audit(1719908072.747:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.755325 kernel: Loading iSCSI transport class v2.0-870. Jul 2 08:14:32.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.749418 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:14:32.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.756286 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:14:32.759340 kernel: audit: type=1130 audit(1719908072.755:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.768346 kernel: iscsi: registered transport (tcp) Jul 2 08:14:32.783337 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:14:32.783374 kernel: QLogic iSCSI HBA Driver Jul 2 08:14:32.799932 systemd[1]: Finished dracut-cmdline.service. Jul 2 08:14:32.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:32.800567 systemd[1]: Starting dracut-pre-udev.service... Jul 2 08:14:32.803556 kernel: audit: type=1130 audit(1719908072.798:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:14:32.839567 kernel: raid6: avx2x4 gen() 46582 MB/s Jul 2 08:14:32.855339 kernel: raid6: avx2x4 xor() 21154 MB/s Jul 2 08:14:32.872342 kernel: raid6: avx2x2 gen() 52510 MB/s Jul 2 08:14:32.889353 kernel: raid6: avx2x2 xor() 31236 MB/s Jul 2 08:14:32.906339 kernel: raid6: avx2x1 gen() 44014 MB/s Jul 2 08:14:32.923340 kernel: raid6: avx2x1 xor() 26749 MB/s Jul 2 08:14:32.940341 kernel: raid6: sse2x4 gen() 19602 MB/s Jul 2 08:14:32.957345 kernel: raid6: sse2x4 xor() 11701 MB/s Jul 2 08:14:32.974345 kernel: raid6: sse2x2 gen() 20934 MB/s Jul 2 08:14:32.991350 kernel: raid6: sse2x2 xor() 13307 MB/s Jul 2 08:14:33.008347 kernel: raid6: sse2x1 gen() 17898 MB/s Jul 2 08:14:33.025539 kernel: raid6: sse2x1 xor() 8830 MB/s Jul 2 08:14:33.025582 kernel: raid6: using algorithm avx2x2 gen() 52510 MB/s Jul 2 08:14:33.025591 kernel: raid6: .... xor() 31236 MB/s, rmw enabled Jul 2 08:14:33.026718 kernel: raid6: using avx2x2 recovery algorithm Jul 2 08:14:33.035341 kernel: xor: automatically using best checksumming function avx Jul 2 08:14:33.095458 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 08:14:33.100184 systemd[1]: Finished dracut-pre-udev.service. Jul 2 08:14:33.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:33.100801 systemd[1]: Starting systemd-udevd.service... Jul 2 08:14:33.099000 audit: BPF prog-id=7 op=LOAD Jul 2 08:14:33.103340 kernel: audit: type=1130 audit(1719908073.098:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:33.099000 audit: BPF prog-id=8 op=LOAD Jul 2 08:14:33.111032 systemd-udevd[415]: Using default interface naming scheme 'v252'. 
Jul 2 08:14:33.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:33.113769 systemd[1]: Started systemd-udevd.service. Jul 2 08:14:33.114294 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 08:14:33.122241 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jul 2 08:14:33.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:33.137872 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 08:14:33.138416 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 08:14:33.201374 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:14:33.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:33.252469 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 2 08:14:33.252508 kernel: vmw_pvscsi: using 64bit dma Jul 2 08:14:33.261341 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Jul 2 08:14:33.265335 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 2 08:14:33.265429 kernel: libata version 3.00 loaded. 
Jul 2 08:14:33.266933 kernel: vmw_pvscsi: max_id: 16 Jul 2 08:14:33.266951 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 2 08:14:33.274331 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 2 08:14:33.277334 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 2 08:14:33.281331 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 2 08:14:33.281357 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 2 08:14:33.281365 kernel: vmw_pvscsi: using MSI-X Jul 2 08:14:33.283331 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 2 08:14:33.285328 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 2 08:14:33.285419 kernel: scsi host1: ata_piix Jul 2 08:14:33.285436 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 2 08:14:33.289329 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 08:14:33.295334 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 2 08:14:33.301388 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 08:14:33.301416 kernel: AES CTR mode by8 optimization enabled Jul 2 08:14:33.301430 kernel: scsi host2: ata_piix Jul 2 08:14:33.302793 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 2 08:14:33.302810 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 2 08:14:33.471393 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 2 08:14:33.475332 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 2 08:14:33.482385 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 2 08:14:33.482486 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 08:14:33.482549 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 2 08:14:33.482608 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 2 08:14:33.483748 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 2 08:14:33.487585 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:14:33.487610 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 08:14:33.508360 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 2 08:14:33.508511 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 08:14:33.522335 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (465) Jul 2 08:14:33.524332 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 2 08:14:33.526717 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 08:14:33.530353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:14:33.533870 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 08:14:33.540199 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 08:14:33.540494 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 08:14:33.541176 systemd[1]: Starting disk-uuid.service... 
Jul 2 08:14:33.578337 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:14:33.582335 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:14:34.588882 disk-uuid[547]: The operation has completed successfully. Jul 2 08:14:34.589329 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:14:34.625966 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:14:34.626251 systemd[1]: Finished disk-uuid.service. Jul 2 08:14:34.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.627014 systemd[1]: Starting verity-setup.service... Jul 2 08:14:34.636343 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 08:14:34.677950 systemd[1]: Found device dev-mapper-usr.device. Jul 2 08:14:34.678612 systemd[1]: Mounting sysusr-usr.mount... Jul 2 08:14:34.678941 systemd[1]: Finished verity-setup.service. Jul 2 08:14:34.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.736345 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 08:14:34.735084 systemd[1]: Mounted sysusr-usr.mount. Jul 2 08:14:34.735658 systemd[1]: Starting afterburn-network-kargs.service... Jul 2 08:14:34.736090 systemd[1]: Starting ignition-setup.service... 
Jul 2 08:14:34.752559 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:14:34.752583 kernel: BTRFS info (device sda6): using free space tree Jul 2 08:14:34.752591 kernel: BTRFS info (device sda6): has skinny extents Jul 2 08:14:34.758335 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 08:14:34.764614 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:14:34.770894 systemd[1]: Finished ignition-setup.service. Jul 2 08:14:34.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.771629 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 08:14:34.829834 systemd[1]: Finished afterburn-network-kargs.service. Jul 2 08:14:34.830426 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 08:14:34.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.874000 audit: BPF prog-id=9 op=LOAD Jul 2 08:14:34.875650 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 08:14:34.876530 systemd[1]: Starting systemd-networkd.service... Jul 2 08:14:34.893139 systemd-networkd[735]: lo: Link UP Jul 2 08:14:34.893382 systemd-networkd[735]: lo: Gained carrier Jul 2 08:14:34.893766 systemd-networkd[735]: Enumeration completed Jul 2 08:14:34.893952 systemd[1]: Started systemd-networkd.service. Jul 2 08:14:34.894122 systemd-networkd[735]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. 
Jul 2 08:14:34.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.894197 systemd[1]: Reached target network.target. Jul 2 08:14:34.894709 systemd[1]: Starting iscsiuio.service... Jul 2 08:14:34.897708 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 2 08:14:34.897848 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 2 08:14:34.898347 systemd-networkd[735]: ens192: Link UP Jul 2 08:14:34.898482 systemd-networkd[735]: ens192: Gained carrier Jul 2 08:14:34.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.899380 systemd[1]: Started iscsiuio.service. Jul 2 08:14:34.899952 systemd[1]: Starting iscsid.service... Jul 2 08:14:34.901994 iscsid[740]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:14:34.901994 iscsid[740]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 08:14:34.901994 iscsid[740]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 08:14:34.901994 iscsid[740]: If using hardware iscsi like qla4xxx this message can be ignored. 
Jul 2 08:14:34.901994 iscsid[740]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:14:34.902930 iscsid[740]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 08:14:34.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.903095 systemd[1]: Started iscsid.service. Jul 2 08:14:34.903632 systemd[1]: Starting dracut-initqueue.service... Jul 2 08:14:34.910814 systemd[1]: Finished dracut-initqueue.service. Jul 2 08:14:34.910953 systemd[1]: Reached target remote-fs-pre.target. Jul 2 08:14:34.911039 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:14:34.911125 systemd[1]: Reached target remote-fs.target. Jul 2 08:14:34.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.911614 systemd[1]: Starting dracut-pre-mount.service... Jul 2 08:14:34.917496 systemd[1]: Finished dracut-pre-mount.service. Jul 2 08:14:34.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:14:34.937858 ignition[606]: Ignition 2.14.0 Jul 2 08:14:34.938087 ignition[606]: Stage: fetch-offline Jul 2 08:14:34.938236 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:14:34.938409 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 2 08:14:34.941447 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 2 08:14:34.941693 ignition[606]: parsed url from cmdline: "" Jul 2 08:14:34.941735 ignition[606]: no config URL provided Jul 2 08:14:34.941852 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:14:34.941991 ignition[606]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:14:34.948137 ignition[606]: config successfully fetched Jul 2 08:14:34.948226 ignition[606]: parsing config with SHA512: a76146a68b20fb413ed90757406a9964848e1e2a505a93a25a48b5c8d19db6c90d59ae152926ee07ec1f79aa820e58007fe2ddb4579984ef85d148aa349babe4 Jul 2 08:14:34.950747 unknown[606]: fetched base config from "system" Jul 2 08:14:34.950919 unknown[606]: fetched user config from "vmware" Jul 2 08:14:34.951396 ignition[606]: fetch-offline: fetch-offline passed Jul 2 08:14:34.951565 ignition[606]: Ignition finished successfully Jul 2 08:14:34.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:34.952225 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 08:14:34.952381 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 08:14:34.952826 systemd[1]: Starting ignition-kargs.service... 
Jul 2 08:14:34.957917 ignition[754]: Ignition 2.14.0
Jul 2 08:14:34.958148 ignition[754]: Stage: kargs
Jul 2 08:14:34.958327 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:14:34.958480 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 2 08:14:34.959792 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 2 08:14:34.961327 ignition[754]: kargs: kargs passed
Jul 2 08:14:34.961511 ignition[754]: Ignition finished successfully
Jul 2 08:14:34.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:34.962357 systemd[1]: Finished ignition-kargs.service.
Jul 2 08:14:34.962928 systemd[1]: Starting ignition-disks.service...
Jul 2 08:14:34.967135 ignition[760]: Ignition 2.14.0
Jul 2 08:14:34.967482 ignition[760]: Stage: disks
Jul 2 08:14:34.967653 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:14:34.967803 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 2 08:14:34.969105 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 2 08:14:34.970788 ignition[760]: disks: disks passed
Jul 2 08:14:34.970823 ignition[760]: Ignition finished successfully
Jul 2 08:14:34.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:34.971527 systemd[1]: Finished ignition-disks.service.
Jul 2 08:14:34.971682 systemd[1]: Reached target initrd-root-device.target.
Jul 2 08:14:34.971774 systemd[1]: Reached target local-fs-pre.target.
Jul 2 08:14:34.971859 systemd[1]: Reached target local-fs.target.
Jul 2 08:14:34.971939 systemd[1]: Reached target sysinit.target.
Jul 2 08:14:34.972018 systemd[1]: Reached target basic.target.
Jul 2 08:14:34.972581 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 08:14:34.984203 systemd-fsck[768]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks
Jul 2 08:14:34.985515 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 08:14:34.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:34.986294 systemd[1]: Mounting sysroot.mount...
Jul 2 08:14:34.996361 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 08:14:34.996128 systemd[1]: Mounted sysroot.mount.
Jul 2 08:14:34.996245 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 08:14:34.997306 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 08:14:34.997677 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 08:14:34.997698 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 08:14:34.997712 systemd[1]: Reached target ignition-diskful.target.
Jul 2 08:14:34.998987 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 08:14:34.999549 systemd[1]: Starting initrd-setup-root.service...
Jul 2 08:14:35.002462 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 08:14:35.006324 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Jul 2 08:14:35.008368 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 08:14:35.010074 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 08:14:35.042527 systemd[1]: Finished initrd-setup-root.service.
Jul 2 08:14:35.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:35.043113 systemd[1]: Starting ignition-mount.service...
Jul 2 08:14:35.043564 systemd[1]: Starting sysroot-boot.service...
Jul 2 08:14:35.047596 bash[819]: umount: /sysroot/usr/share/oem: not mounted.
Jul 2 08:14:35.052654 ignition[820]: INFO : Ignition 2.14.0
Jul 2 08:14:35.052933 ignition[820]: INFO : Stage: mount
Jul 2 08:14:35.053124 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:14:35.053279 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 2 08:14:35.054748 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 2 08:14:35.056309 ignition[820]: INFO : mount: mount passed
Jul 2 08:14:35.056434 ignition[820]: INFO : Ignition finished successfully
Jul 2 08:14:35.056962 systemd[1]: Finished ignition-mount.service.
Jul 2 08:14:35.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:35.063818 systemd[1]: Finished sysroot-boot.service.
Jul 2 08:14:35.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:35.697474 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 08:14:35.706337 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (829)
Jul 2 08:14:35.706373 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 08:14:35.708772 kernel: BTRFS info (device sda6): using free space tree
Jul 2 08:14:35.708788 kernel: BTRFS info (device sda6): has skinny extents
Jul 2 08:14:35.713332 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 2 08:14:35.714828 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 08:14:35.715542 systemd[1]: Starting ignition-files.service...
Jul 2 08:14:35.724155 ignition[849]: INFO : Ignition 2.14.0
Jul 2 08:14:35.724155 ignition[849]: INFO : Stage: files
Jul 2 08:14:35.724489 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:14:35.724489 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 2 08:14:35.725517 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 2 08:14:35.730899 ignition[849]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 08:14:35.731478 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 08:14:35.731478 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 08:14:35.733422 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 08:14:35.733562 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 08:14:35.739949 unknown[849]:
wrote ssh authorized keys file for user: core
Jul 2 08:14:35.740184 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 08:14:35.744445 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 08:14:35.744630 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 08:14:35.772392 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 08:14:35.831745 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 08:14:35.834932 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:14:35.835162 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 08:14:36.099789 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 08:14:36.134750 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 08:14:36.135019 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 08:14:36.135019 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 08:14:36.135019 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 08:14:36.135019 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2
08:14:36.135019 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:14:36.135019 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 08:14:36.135019 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:14:36.136152 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 08:14:36.140170 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:14:36.140351 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 08:14:36.140351 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:14:36.140351 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:14:36.144607 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Jul 2 08:14:36.144832 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 08:14:36.147673 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1071674392"
Jul 2 08:14:36.147883 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM"
at "/mnt/oem1071674392": device or resource busy
Jul 2 08:14:36.148101 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1071674392", trying btrfs: device or resource busy
Jul 2 08:14:36.148313 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1071674392"
Jul 2 08:14:36.149841 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1071674392"
Jul 2 08:14:36.150331 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (852)
Jul 2 08:14:36.168292 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1071674392"
Jul 2 08:14:36.168549 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1071674392"
Jul 2 08:14:36.168901 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Jul 2 08:14:36.169116 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:14:36.169181 systemd[1]: mnt-oem1071674392.mount: Deactivated successfully.
Jul 2 08:14:36.169642 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jul 2 08:14:36.241514 systemd-networkd[735]: ens192: Gained IPv6LL
Jul 2 08:14:36.576895 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Jul 2 08:14:36.793444 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jul 2 08:14:36.795699 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 2 08:14:36.795977 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 2 08:14:36.796174 ignition[849]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
Jul 2 08:14:36.796315 ignition[849]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
Jul 2 08:14:36.796466 ignition[849]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Jul 2 08:14:36.796626 ignition[849]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:14:36.796865 ignition[849]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 08:14:36.797047 ignition[849]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Jul 2 08:14:36.797190 ignition[849]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Jul 2 08:14:36.797362 ignition[849]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 08:14:36.797603 ignition[849]: INFO :
files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 08:14:36.797789 ignition[849]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Jul 2 08:14:36.797935 ignition[849]: INFO : files: op(16): [started] setting preset to enabled for "vmtoolsd.service"
Jul 2 08:14:36.798133 ignition[849]: INFO : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service"
Jul 2 08:14:36.798285 ignition[849]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 08:14:36.798464 ignition[849]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 08:14:36.798625 ignition[849]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 08:14:36.798782 ignition[849]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 08:14:36.890001 ignition[849]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 08:14:36.890251 ignition[849]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 08:14:36.890532 ignition[849]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:14:36.890790 ignition[849]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 08:14:36.890980 ignition[849]: INFO : files: files passed
Jul 2 08:14:36.891121 ignition[849]: INFO : Ignition finished successfully
Jul 2 08:14:36.892165 systemd[1]: Finished ignition-files.service.
Jul 2 08:14:36.895047 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jul 2 08:14:36.895075 kernel: audit: type=1130 audit(1719908076.890:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.893082 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 08:14:36.893198 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 08:14:36.893604 systemd[1]: Starting ignition-quench.service...
Jul 2 08:14:36.898531 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 08:14:36.898582 systemd[1]: Finished ignition-quench.service.
Jul 2 08:14:36.904886 kernel: audit: type=1130 audit(1719908076.897:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.904906 kernel: audit: type=1131 audit(1719908076.897:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Jul 2 08:14:36.905663 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 08:14:36.906231 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 08:14:36.909032 kernel: audit: type=1130 audit(1719908076.905:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.906411 systemd[1]: Reached target ignition-complete.target.
Jul 2 08:14:36.909500 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 08:14:36.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.917751 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 08:14:36.923567 kernel: audit: type=1130 audit(1719908076.916:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.923583 kernel: audit: type=1131 audit(1719908076.916:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.917801 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 08:14:36.917974 systemd[1]: Reached target initrd-fs.target.
Jul 2 08:14:36.922701 systemd[1]: Reached target initrd.target.
Jul 2 08:14:36.922816 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 08:14:36.923281 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 08:14:36.933341 kernel: audit: type=1130 audit(1719908076.928:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.930103 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 08:14:36.930641 systemd[1]: Starting initrd-cleanup.service...
Jul 2 08:14:36.937822 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 08:14:36.937887 systemd[1]: Finished initrd-cleanup.service.
Jul 2 08:14:36.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.938455 systemd[1]: Stopped target network.target.
Jul 2 08:14:36.942966 kernel: audit: type=1130 audit(1719908076.936:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.942985 kernel: audit: type=1131 audit(1719908076.936:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Jul 2 08:14:36.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.942869 systemd[1]: Stopped target nss-lookup.target.
Jul 2 08:14:36.943031 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 08:14:36.943224 systemd[1]: Stopped target timers.target.
Jul 2 08:14:36.943441 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 08:14:36.946057 kernel: audit: type=1131 audit(1719908076.942:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.943477 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 08:14:36.943613 systemd[1]: Stopped target initrd.target.
Jul 2 08:14:36.946114 systemd[1]: Stopped target basic.target.
Jul 2 08:14:36.946275 systemd[1]: Stopped target ignition-complete.target.
Jul 2 08:14:36.946448 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 08:14:36.946607 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 08:14:36.946779 systemd[1]: Stopped target remote-fs.target.
Jul 2 08:14:36.946941 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 08:14:36.947124 systemd[1]: Stopped target sysinit.target.
Jul 2 08:14:36.947277 systemd[1]: Stopped target local-fs.target.
Jul 2 08:14:36.947443 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 08:14:36.947602 systemd[1]: Stopped target swap.target.
Jul 2 08:14:36.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Jul 2 08:14:36.947762 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 08:14:36.947792 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 08:14:36.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.947952 systemd[1]: Stopped target cryptsetup.target.
Jul 2 08:14:36.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.948083 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 08:14:36.948104 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 08:14:36.948281 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 08:14:36.948304 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 08:14:36.948497 systemd[1]: Stopped target paths.target.
Jul 2 08:14:36.948789 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 08:14:36.952379 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 08:14:36.952485 systemd[1]: Stopped target slices.target.
Jul 2 08:14:36.952656 systemd[1]: Stopped target sockets.target.
Jul 2 08:14:36.952827 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 08:14:36.952840 systemd[1]: Closed iscsid.socket.
Jul 2 08:14:36.952989 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 08:14:36.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.953002 systemd[1]: Closed iscsiuio.socket.
Jul 2 08:14:36.953142 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 08:14:36.953163 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 08:14:36.953310 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 08:14:36.953347 systemd[1]: Stopped ignition-files.service.
Jul 2 08:14:36.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.953886 systemd[1]: Stopping ignition-mount.service...
Jul 2 08:14:36.954008 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 08:14:36.954035 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 08:14:36.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.954540 systemd[1]: Stopping sysroot-boot.service...
Jul 2 08:14:36.954914 systemd[1]: Stopping systemd-networkd.service...
Jul 2 08:14:36.955075 systemd[1]: Stopping systemd-resolved.service...
Jul 2 08:14:36.955232 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 08:14:36.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.955259 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 08:14:36.955447 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 08:14:36.955471 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 08:14:36.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.960551 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 08:14:36.962408 ignition[888]: INFO : Ignition 2.14.0
Jul 2 08:14:36.962408 ignition[888]: INFO : Stage: umount
Jul 2 08:14:36.962408 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:14:36.962408 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 2 08:14:36.962408 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 2 08:14:36.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.960625 systemd[1]: Stopped systemd-networkd.service.
Jul 2 08:14:36.961058 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 08:14:36.961078 systemd[1]: Closed systemd-networkd.socket.
Jul 2 08:14:36.961631 systemd[1]: Stopping network-cleanup.service...
Jul 2 08:14:36.961732 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 08:14:36.961765 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 08:14:36.961912 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 2 08:14:36.961942 systemd[1]: Stopped afterburn-network-kargs.service.
Jul 2 08:14:36.962071 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 08:14:36.962091 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 08:14:36.963000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 08:14:36.962264 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 08:14:36.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.962286 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 08:14:36.963288 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 08:14:36.966478 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 08:14:36.966544 systemd[1]: Stopped systemd-resolved.service.
Jul 2 08:14:36.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:36.968000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 08:14:36.967076 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 08:14:36.970412 ignition[888]: INFO : umount: umount passed
Jul 2 08:14:36.970412 ignition[888]: INFO : Ignition finished successfully
Jul 2 08:14:36.967129 systemd[1]: Stopped network-cleanup.service.
Jul 2 08:14:36.971144 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 08:14:36.971201 systemd[1]: Stopped ignition-mount.service.
Jul 2 08:14:36.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 08:14:36.971464 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:14:36.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.971486 systemd[1]: Stopped ignition-disks.service. Jul 2 08:14:36.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.971603 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:14:36.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.971622 systemd[1]: Stopped ignition-kargs.service. Jul 2 08:14:36.971784 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:14:36.971804 systemd[1]: Stopped ignition-setup.service. Jul 2 08:14:36.971986 systemd[1]: Stopping systemd-udevd.service... Jul 2 08:14:36.977223 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:14:36.977296 systemd[1]: Stopped systemd-udevd.service. Jul 2 08:14:36.977629 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:14:36.977650 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 08:14:36.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.977920 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:14:36.977936 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 08:14:36.978077 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jul 2 08:14:36.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.978099 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 08:14:36.978275 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:14:36.978294 systemd[1]: Stopped dracut-cmdline.service. Jul 2 08:14:36.978441 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:14:36.978461 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 08:14:36.978996 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 08:14:36.979183 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:14:36.979210 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 08:14:36.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.983280 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:14:36.983356 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 08:14:36.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:14:36.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:36.997663 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:14:37.109828 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:14:37.109920 systemd[1]: Stopped sysroot-boot.service. Jul 2 08:14:37.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:37.110191 systemd[1]: Reached target initrd-switch-root.target. Jul 2 08:14:37.110302 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:14:37.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:37.110338 systemd[1]: Stopped initrd-setup-root.service. Jul 2 08:14:37.110938 systemd[1]: Starting initrd-switch-root.service... Jul 2 08:14:37.117659 systemd[1]: Switching root. Jul 2 08:14:37.130698 iscsid[740]: iscsid shutting down. Jul 2 08:14:37.130856 systemd-journald[217]: Journal stopped Jul 2 08:14:40.036436 systemd-journald[217]: Received SIGTERM from PID 1 (n/a). Jul 2 08:14:40.036463 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 08:14:40.036478 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 08:14:40.036489 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 08:14:40.036499 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 08:14:40.036512 kernel: SELinux: policy capability open_perms=1
Jul 2 08:14:40.036523 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 08:14:40.036534 kernel: SELinux: policy capability always_check_network=0
Jul 2 08:14:40.036544 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 08:14:40.036553 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 08:14:40.036563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 08:14:40.036573 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 08:14:40.036586 systemd[1]: Successfully loaded SELinux policy in 120.010ms.
Jul 2 08:14:40.036601 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.873ms.
Jul 2 08:14:40.036616 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:14:40.036629 systemd[1]: Detected virtualization vmware.
Jul 2 08:14:40.036643 systemd[1]: Detected architecture x86-64.
Jul 2 08:14:40.036655 systemd[1]: Detected first boot.
Jul 2 08:14:40.036667 systemd[1]: Initializing machine ID from random generator.
Jul 2 08:14:40.036679 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 08:14:40.036690 systemd[1]: Populated /etc with preset unit settings.
Jul 2 08:14:40.036703 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:14:40.036715 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:14:40.036728 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:14:40.036743 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 08:14:40.036756 systemd[1]: Stopped iscsiuio.service.
Jul 2 08:14:40.036768 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 08:14:40.036781 systemd[1]: Stopped iscsid.service.
Jul 2 08:14:40.036794 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 08:14:40.036808 systemd[1]: Stopped initrd-switch-root.service.
Jul 2 08:14:40.036819 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 08:14:40.036833 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 08:14:40.036845 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 08:14:40.036858 systemd[1]: Created slice system-getty.slice.
Jul 2 08:14:40.036870 systemd[1]: Created slice system-modprobe.slice.
Jul 2 08:14:40.036882 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 08:14:40.036894 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 08:14:40.036907 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 08:14:40.036920 systemd[1]: Created slice user.slice.
Jul 2 08:14:40.036935 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:14:40.036950 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 08:14:40.036964 systemd[1]: Set up automount boot.automount.
Jul 2 08:14:40.036977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 08:14:40.036989 systemd[1]: Stopped target initrd-switch-root.target.
Jul 2 08:14:40.037003 systemd[1]: Stopped target initrd-fs.target.
Jul 2 08:14:40.037015 systemd[1]: Stopped target initrd-root-fs.target.
Jul 2 08:14:40.037028 systemd[1]: Reached target integritysetup.target.
Jul 2 08:14:40.037040 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 08:14:40.037054 systemd[1]: Reached target remote-fs.target.
Jul 2 08:14:40.037066 systemd[1]: Reached target slices.target.
Jul 2 08:14:40.037079 systemd[1]: Reached target swap.target.
Jul 2 08:14:40.037091 systemd[1]: Reached target torcx.target.
Jul 2 08:14:40.037103 systemd[1]: Reached target veritysetup.target.
Jul 2 08:14:40.037117 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 08:14:40.037132 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 08:14:40.037145 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:14:40.037158 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:14:40.037170 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:14:40.037182 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 08:14:40.037193 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 08:14:40.037205 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 08:14:40.037216 systemd[1]: Mounting media.mount...
Jul 2 08:14:40.037226 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:14:40.037236 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 08:14:40.037247 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 08:14:40.037258 systemd[1]: Mounting tmp.mount...
Jul 2 08:14:40.037269 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 08:14:40.037280 systemd[1]: Starting ignition-delete-config.service...
Jul 2 08:14:40.037290 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:14:40.037300 systemd[1]: Starting modprobe@configfs.service...
Jul 2 08:14:40.037312 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 08:14:40.037337 systemd[1]: Starting modprobe@drm.service...
Jul 2 08:14:40.037351 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 08:14:40.037366 systemd[1]: Starting modprobe@fuse.service...
Jul 2 08:14:40.037379 systemd[1]: Starting modprobe@loop.service...
Jul 2 08:14:40.037392 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 08:14:40.037405 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 08:14:40.037417 systemd[1]: Stopped systemd-fsck-root.service.
Jul 2 08:14:40.037430 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 08:14:40.037445 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 08:14:40.037457 systemd[1]: Stopped systemd-journald.service.
Jul 2 08:14:40.037472 systemd[1]: Starting systemd-journald.service...
Jul 2 08:14:40.037485 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:14:40.037499 systemd[1]: Starting systemd-network-generator.service...
Jul 2 08:14:40.037511 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 08:14:40.037529 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 08:14:40.037542 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 08:14:40.037556 systemd[1]: Stopped verity-setup.service.
Jul 2 08:14:40.037570 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 08:14:40.037583 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 08:14:40.037596 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 08:14:40.037609 systemd[1]: Mounted media.mount.
Jul 2 08:14:40.037622 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 08:14:40.037636 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 08:14:40.037649 systemd[1]: Mounted tmp.mount.
Jul 2 08:14:40.037664 systemd-journald[1006]: Journal started
Jul 2 08:14:40.037709 systemd-journald[1006]: Runtime Journal (/run/log/journal/5e83d675d85848d6ab3add8e24f0a524) is 4.8M, max 38.8M, 34.0M free.
Jul 2 08:14:37.679000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 08:14:37.839000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 08:14:37.839000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 08:14:37.839000 audit: BPF prog-id=10 op=LOAD
Jul 2 08:14:37.839000 audit: BPF prog-id=10 op=UNLOAD
Jul 2 08:14:37.839000 audit: BPF prog-id=11 op=LOAD
Jul 2 08:14:37.839000 audit: BPF prog-id=11 op=UNLOAD
Jul 2 08:14:37.993000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 08:14:37.993000 audit[922]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:14:37.993000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 08:14:37.994000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 08:14:37.994000 audit[922]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:14:37.994000 audit: CWD cwd="/"
Jul 2 08:14:37.994000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:14:37.994000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 08:14:37.994000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 08:14:39.923000 audit: BPF prog-id=12 op=LOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=13 op=LOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=14 op=LOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=15 op=LOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=12 op=UNLOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=16 op=LOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=17 op=LOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=13 op=UNLOAD
Jul 2 08:14:39.923000 audit: BPF prog-id=14 op=UNLOAD
Jul 2 08:14:39.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.926000 audit: BPF prog-id=15 op=UNLOAD
Jul 2 08:14:39.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:39.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.000000 audit: BPF prog-id=18 op=LOAD
Jul 2 08:14:40.000000 audit: BPF prog-id=19 op=LOAD
Jul 2 08:14:40.000000 audit: BPF prog-id=20 op=LOAD
Jul 2 08:14:40.000000 audit: BPF prog-id=16 op=UNLOAD
Jul 2 08:14:40.000000 audit: BPF prog-id=17 op=UNLOAD
Jul 2 08:14:40.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.032000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 2 08:14:40.032000 audit[1006]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc0826e280 a2=4000 a3=7ffc0826e31c items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 08:14:40.032000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 2 08:14:39.921758 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 08:14:40.041937 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:14:40.041953 systemd[1]: Started systemd-journald.service.
Jul 2 08:14:40.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:37.991373 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 08:14:39.925543 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 08:14:37.991981 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 08:14:40.041390 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 08:14:40.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:37.991993 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 08:14:40.041512 systemd[1]: Finished modprobe@configfs.service.
Jul 2 08:14:37.992014 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Jul 2 08:14:40.041801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 08:14:37.992021 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="skipped missing lower profile" missing profile=oem
Jul 2 08:14:40.041879 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 08:14:37.992039 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Jul 2 08:14:40.042269 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 08:14:37.992047 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Jul 2 08:14:40.042433 systemd[1]: Finished modprobe@drm.service.
Jul 2 08:14:37.992171 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Jul 2 08:14:40.043417 jq[989]: true
Jul 2 08:14:40.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.042644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 08:14:37.992195 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 08:14:40.042790 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 08:14:37.992203 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 08:14:40.043312 systemd[1]: Finished systemd-modules-load.service.
Jul 2 08:14:37.993541 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Jul 2 08:14:40.043583 systemd[1]: Finished systemd-network-generator.service.
Jul 2 08:14:37.993565 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Jul 2 08:14:40.043801 systemd[1]: Finished systemd-remount-fs.service.
Jul 2 08:14:37.993578 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5
Jul 2 08:14:40.044576 systemd[1]: Reached target network-pre.target.
Jul 2 08:14:37.993597 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Jul 2 08:14:37.993607 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5
Jul 2 08:14:37.993615 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:37Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Jul 2 08:14:39.617256 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:39Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:14:40.050356 kernel: fuse: init (API version 7.34)
Jul 2 08:14:39.617427 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:39Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:14:40.046039 systemd[1]: Mounting sys-kernel-config.mount...
Jul 2 08:14:39.617512 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:39Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:14:40.046151 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 08:14:39.617828 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:39Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 08:14:40.048966 systemd[1]: Starting systemd-hwdb-update.service...
Jul 2 08:14:40.051049 jq[1019]: true
Jul 2 08:14:39.617879 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:39Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Jul 2 08:14:40.049800 systemd[1]: Starting systemd-journal-flush.service...
Jul 2 08:14:39.617935 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:14:39Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Jul 2 08:14:40.049921 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 08:14:40.050628 systemd[1]: Starting systemd-random-seed.service...
Jul 2 08:14:40.051482 systemd[1]: Starting systemd-sysctl.service...
Jul 2 08:14:40.052446 systemd[1]: Mounted sys-kernel-config.mount.
Jul 2 08:14:40.057211 systemd[1]: Finished systemd-random-seed.service.
Jul 2 08:14:40.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.057646 systemd[1]: Reached target first-boot-complete.target.
Jul 2 08:14:40.065186 systemd-journald[1006]: Time spent on flushing to /var/log/journal/5e83d675d85848d6ab3add8e24f0a524 is 47.297ms for 2003 entries.
Jul 2 08:14:40.065186 systemd-journald[1006]: System Journal (/var/log/journal/5e83d675d85848d6ab3add8e24f0a524) is 8.0M, max 584.8M, 576.8M free.
Jul 2 08:14:40.117192 systemd-journald[1006]: Received client request to flush runtime journal.
Jul 2 08:14:40.117220 kernel: loop: module loaded
Jul 2 08:14:40.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.072672 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 08:14:40.072766 systemd[1]: Finished modprobe@fuse.service.
Jul 2 08:14:40.074128 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 2 08:14:40.078177 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 08:14:40.078338 systemd[1]: Finished modprobe@loop.service.
Jul 2 08:14:40.078538 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 2 08:14:40.078694 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 08:14:40.084484 systemd[1]: Finished systemd-sysctl.service.
Jul 2 08:14:40.098126 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 2 08:14:40.099684 systemd[1]: Starting systemd-sysusers.service...
Jul 2 08:14:40.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.118203 systemd[1]: Finished systemd-journal-flush.service.
Jul 2 08:14:40.139875 systemd[1]: Finished systemd-sysusers.service.
Jul 2 08:14:40.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.157875 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 08:14:40.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.158835 systemd[1]: Starting systemd-udev-settle.service...
Jul 2 08:14:40.168909 udevadm[1052]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 08:14:40.191909 ignition[1024]: Ignition 2.14.0
Jul 2 08:14:40.192139 ignition[1024]: deleting config from guestinfo properties
Jul 2 08:14:40.196889 ignition[1024]: Successfully deleted config
Jul 2 08:14:40.197586 systemd[1]: Finished ignition-delete-config.service.
Jul 2 08:14:40.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:14:40.756067 systemd[1]: Finished systemd-hwdb-update.service.
Jul 2 08:14:40.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 08:14:40.755000 audit: BPF prog-id=21 op=LOAD Jul 2 08:14:40.755000 audit: BPF prog-id=22 op=LOAD Jul 2 08:14:40.755000 audit: BPF prog-id=7 op=UNLOAD Jul 2 08:14:40.755000 audit: BPF prog-id=8 op=UNLOAD Jul 2 08:14:40.757176 systemd[1]: Starting systemd-udevd.service... Jul 2 08:14:40.768978 systemd-udevd[1053]: Using default interface naming scheme 'v252'. Jul 2 08:14:40.810098 systemd[1]: Started systemd-udevd.service. Jul 2 08:14:40.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:40.809000 audit: BPF prog-id=23 op=LOAD Jul 2 08:14:40.811610 systemd[1]: Starting systemd-networkd.service... Jul 2 08:14:40.822000 audit: BPF prog-id=24 op=LOAD Jul 2 08:14:40.822000 audit: BPF prog-id=25 op=LOAD Jul 2 08:14:40.822000 audit: BPF prog-id=26 op=LOAD Jul 2 08:14:40.824307 systemd[1]: Starting systemd-userdbd.service... Jul 2 08:14:40.835876 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 08:14:40.849876 systemd[1]: Started systemd-userdbd.service. Jul 2 08:14:40.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:40.883338 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 08:14:40.887330 kernel: ACPI: button: Power Button [PWRF] Jul 2 08:14:40.935643 systemd-networkd[1061]: lo: Link UP Jul 2 08:14:40.935648 systemd-networkd[1061]: lo: Gained carrier Jul 2 08:14:40.936133 systemd-networkd[1061]: Enumeration completed Jul 2 08:14:40.936195 systemd[1]: Started systemd-networkd.service. 
Jul 2 08:14:40.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:40.936512 systemd-networkd[1061]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 2 08:14:40.939529 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 2 08:14:40.939658 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 2 08:14:40.940812 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jul 2 08:14:40.941281 systemd-networkd[1061]: ens192: Link UP Jul 2 08:14:40.941374 systemd-networkd[1061]: ens192: Gained carrier Jul 2 08:14:40.952000 audit[1060]: AVC avc: denied { confidentiality } for pid=1060 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 08:14:40.952000 audit[1060]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d95b0e7c40 a1=3207c a2=7fa079e7abc5 a3=5 items=108 ppid=1053 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:14:40.952000 audit: CWD cwd="/" Jul 2 08:14:40.952000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=1 name=(null) inode=16301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=2 name=(null) inode=16301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=3 name=(null) inode=16302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=4 name=(null) inode=16301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=5 name=(null) inode=16303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=6 name=(null) inode=16301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=7 name=(null) inode=16304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=8 name=(null) inode=16304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=9 name=(null) inode=16305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=10 name=(null) inode=16304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=11 name=(null) inode=16306 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 
audit: PATH item=12 name=(null) inode=16304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=13 name=(null) inode=16307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=14 name=(null) inode=16304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=15 name=(null) inode=16308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=16 name=(null) inode=16304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=17 name=(null) inode=16309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=18 name=(null) inode=16301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=19 name=(null) inode=16310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=20 name=(null) inode=16310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=21 name=(null) inode=16311 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=22 name=(null) inode=16310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=23 name=(null) inode=16312 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=24 name=(null) inode=16310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=25 name=(null) inode=16313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=26 name=(null) inode=16310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=27 name=(null) inode=16314 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=28 name=(null) inode=16310 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=29 name=(null) inode=16315 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=30 name=(null) inode=16301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=31 name=(null) inode=16316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=32 name=(null) inode=16316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=33 name=(null) inode=16317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=34 name=(null) inode=16316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=35 name=(null) inode=16318 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=36 name=(null) inode=16316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=37 name=(null) inode=16319 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=38 name=(null) inode=16316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=39 name=(null) inode=16320 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=40 name=(null) inode=16316 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=41 name=(null) inode=16321 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=42 name=(null) inode=16301 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=43 name=(null) inode=16322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=44 name=(null) inode=16322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=45 name=(null) inode=16323 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=46 name=(null) inode=16322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=47 name=(null) inode=16324 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=48 name=(null) inode=16322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 08:14:40.952000 audit: PATH item=49 name=(null) inode=16325 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=50 name=(null) inode=16322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=51 name=(null) inode=16326 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=52 name=(null) inode=16322 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=53 name=(null) inode=16327 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=55 name=(null) inode=16328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=56 name=(null) inode=16328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=57 name=(null) inode=16329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=58 name=(null) 
inode=16328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=59 name=(null) inode=16330 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=60 name=(null) inode=16328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=61 name=(null) inode=16331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=62 name=(null) inode=16331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=63 name=(null) inode=16332 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=64 name=(null) inode=16331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=65 name=(null) inode=16333 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=66 name=(null) inode=16331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=67 name=(null) inode=16334 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=68 name=(null) inode=16331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=69 name=(null) inode=16335 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=70 name=(null) inode=16331 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=71 name=(null) inode=16336 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=72 name=(null) inode=16328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=73 name=(null) inode=16337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=74 name=(null) inode=16337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=75 name=(null) inode=16338 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=76 name=(null) inode=16337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=77 name=(null) inode=16339 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=78 name=(null) inode=16337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=79 name=(null) inode=16340 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=80 name=(null) inode=16337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=81 name=(null) inode=16341 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=82 name=(null) inode=16337 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=83 name=(null) inode=16342 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=84 name=(null) inode=16328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=85 name=(null) inode=16343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=86 name=(null) inode=16343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=87 name=(null) inode=16344 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=88 name=(null) inode=16343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=89 name=(null) inode=16345 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=90 name=(null) inode=16343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=91 name=(null) inode=16346 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=92 name=(null) inode=16343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=93 name=(null) inode=16347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=94 name=(null) inode=16343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 
audit: PATH item=95 name=(null) inode=16348 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=96 name=(null) inode=16328 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=97 name=(null) inode=16349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=98 name=(null) inode=16349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=99 name=(null) inode=16350 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=100 name=(null) inode=16349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=101 name=(null) inode=16351 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=102 name=(null) inode=16349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=103 name=(null) inode=16352 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=104 name=(null) inode=16349 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=105 name=(null) inode=16353 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=106 name=(null) inode=16349 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PATH item=107 name=(null) inode=16354 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:14:40.952000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 08:14:40.964366 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 2 08:14:40.971370 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1069) Jul 2 08:14:40.999792 (udev-worker)[1059]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 2 08:14:41.001471 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 08:14:41.003328 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Jul 2 08:14:41.004330 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 2 08:14:41.007412 kernel: Guest personality initialized and is active Jul 2 08:14:41.010338 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 08:14:41.011800 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 2 08:14:41.011832 kernel: Initialized host personality Jul 2 08:14:41.017534 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 2 08:14:41.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.030539 systemd[1]: Finished systemd-udev-settle.service. Jul 2 08:14:41.031557 systemd[1]: Starting lvm2-activation-early.service... Jul 2 08:14:41.048189 lvm[1086]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:14:41.073917 systemd[1]: Finished lvm2-activation-early.service. Jul 2 08:14:41.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.074113 systemd[1]: Reached target cryptsetup.target. Jul 2 08:14:41.075042 systemd[1]: Starting lvm2-activation.service... Jul 2 08:14:41.077963 lvm[1087]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:14:41.096909 systemd[1]: Finished lvm2-activation.service. Jul 2 08:14:41.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.097094 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:14:41.097195 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:14:41.097214 systemd[1]: Reached target local-fs.target. Jul 2 08:14:41.097306 systemd[1]: Reached target machines.target. Jul 2 08:14:41.098269 systemd[1]: Starting ldconfig.service... Jul 2 08:14:41.099061 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:14:41.099094 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:14:41.099874 systemd[1]: Starting systemd-boot-update.service... Jul 2 08:14:41.100607 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 08:14:41.101467 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 08:14:41.102439 systemd[1]: Starting systemd-sysext.service... Jul 2 08:14:41.109305 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1089 (bootctl) Jul 2 08:14:41.110126 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 08:14:41.115785 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 08:14:41.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.118256 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 08:14:41.128096 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 08:14:41.128202 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 08:14:41.144334 kernel: loop0: detected capacity change from 0 to 210664 Jul 2 08:14:41.731066 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:14:41.731787 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 08:14:41.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:14:41.748369 systemd-fsck[1100]: fsck.fat 4.2 (2021-01-31) Jul 2 08:14:41.748369 systemd-fsck[1100]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 08:14:41.749333 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 08:14:41.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.750620 systemd[1]: Mounting boot.mount... Jul 2 08:14:41.761280 systemd[1]: Mounted boot.mount. Jul 2 08:14:41.762331 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:14:41.775151 systemd[1]: Finished systemd-boot-update.service. Jul 2 08:14:41.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.787335 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 08:14:41.809250 (sd-sysext)[1104]: Using extensions 'kubernetes'. Jul 2 08:14:41.810122 (sd-sysext)[1104]: Merged extensions into '/usr'. Jul 2 08:14:41.821471 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:14:41.822585 systemd[1]: Mounting usr-share-oem.mount... Jul 2 08:14:41.824465 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:14:41.825264 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:14:41.826293 systemd[1]: Starting modprobe@loop.service... Jul 2 08:14:41.826641 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:14:41.826734 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:14:41.826815 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:14:41.828739 systemd[1]: Mounted usr-share-oem.mount. Jul 2 08:14:41.829024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:14:41.829109 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:14:41.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.829441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:14:41.829517 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:14:41.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.829835 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:14:41.830015 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:14:41.830084 systemd[1]: Finished modprobe@loop.service. 
Jul 2 08:14:41.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.830306 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:14:41.831127 systemd[1]: Finished systemd-sysext.service. Jul 2 08:14:41.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:41.832202 systemd[1]: Starting ensure-sysext.service... Jul 2 08:14:41.833654 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 08:14:41.837716 systemd[1]: Reloading. Jul 2 08:14:41.858194 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 08:14:41.880623 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:14:41.882820 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2024-07-02T08:14:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:14:41.883066 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2024-07-02T08:14:41Z" level=info msg="torcx already run" Jul 2 08:14:41.890803 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jul 2 08:14:41.949028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:14:41.949041 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:14:41.961252 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:14:42.005657 kernel: kauditd_printk_skb: 239 callbacks suppressed Jul 2 08:14:42.005724 kernel: audit: type=1334 audit(1719908081.994:164): prog-id=27 op=LOAD Jul 2 08:14:42.005744 kernel: audit: type=1334 audit(1719908081.995:165): prog-id=18 op=UNLOAD Jul 2 08:14:42.005759 kernel: audit: type=1334 audit(1719908081.995:166): prog-id=28 op=LOAD Jul 2 08:14:42.005773 kernel: audit: type=1334 audit(1719908081.996:167): prog-id=29 op=LOAD Jul 2 08:14:42.005787 kernel: audit: type=1334 audit(1719908081.996:168): prog-id=19 op=UNLOAD Jul 2 08:14:42.005800 kernel: audit: type=1334 audit(1719908081.996:169): prog-id=20 op=UNLOAD Jul 2 08:14:42.005812 kernel: audit: type=1334 audit(1719908081.997:170): prog-id=30 op=LOAD Jul 2 08:14:42.005822 kernel: audit: type=1334 audit(1719908081.997:171): prog-id=23 op=UNLOAD Jul 2 08:14:42.005833 kernel: audit: type=1334 audit(1719908081.999:172): prog-id=31 op=LOAD Jul 2 08:14:42.005845 kernel: audit: type=1334 audit(1719908081.999:173): prog-id=24 op=UNLOAD Jul 2 08:14:41.994000 audit: BPF prog-id=27 op=LOAD Jul 2 08:14:41.995000 audit: BPF prog-id=18 op=UNLOAD Jul 2 08:14:41.995000 audit: BPF prog-id=28 op=LOAD Jul 2 08:14:41.996000 audit: BPF prog-id=29 op=LOAD Jul 2 08:14:41.996000 audit: BPF prog-id=19 op=UNLOAD Jul 2 08:14:41.996000 audit: BPF prog-id=20 op=UNLOAD Jul 2 08:14:41.997000 audit: BPF prog-id=30 op=LOAD Jul 2 08:14:41.997000 
audit: BPF prog-id=23 op=UNLOAD Jul 2 08:14:41.999000 audit: BPF prog-id=31 op=LOAD Jul 2 08:14:41.999000 audit: BPF prog-id=24 op=UNLOAD Jul 2 08:14:41.999000 audit: BPF prog-id=32 op=LOAD Jul 2 08:14:41.999000 audit: BPF prog-id=33 op=LOAD Jul 2 08:14:41.999000 audit: BPF prog-id=25 op=UNLOAD Jul 2 08:14:41.999000 audit: BPF prog-id=26 op=UNLOAD Jul 2 08:14:42.003000 audit: BPF prog-id=34 op=LOAD Jul 2 08:14:42.004000 audit: BPF prog-id=35 op=LOAD Jul 2 08:14:42.004000 audit: BPF prog-id=21 op=UNLOAD Jul 2 08:14:42.004000 audit: BPF prog-id=22 op=UNLOAD Jul 2 08:14:42.013223 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:14:42.013937 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:14:42.015238 systemd[1]: Starting modprobe@loop.service... Jul 2 08:14:42.015438 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:14:42.015512 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:14:42.015957 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:14:42.016050 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:14:42.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.016866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:14:42.016968 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 08:14:42.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.017647 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:14:42.017713 systemd[1]: Finished modprobe@loop.service. Jul 2 08:14:42.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.018182 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:14:42.018256 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:14:42.020109 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:14:42.021432 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:14:42.022236 systemd[1]: Starting modprobe@loop.service... Jul 2 08:14:42.022589 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:14:42.022660 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:14:42.023102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 08:14:42.023182 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:14:42.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.023777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:14:42.023907 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:14:42.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.024316 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:14:42.024482 systemd[1]: Finished modprobe@loop.service. Jul 2 08:14:42.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.026930 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:14:42.028220 systemd[1]: Starting modprobe@drm.service... 
Jul 2 08:14:42.029080 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:14:42.030052 systemd[1]: Starting modprobe@loop.service... Jul 2 08:14:42.030300 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:14:42.030438 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:14:42.031452 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 08:14:42.032222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:14:42.032305 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:14:42.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.032636 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:14:42.032708 systemd[1]: Finished modprobe@drm.service. Jul 2 08:14:42.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.033046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:14:42.033119 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 2 08:14:42.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.033575 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:14:42.033643 systemd[1]: Finished modprobe@loop.service. Jul 2 08:14:42.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.034076 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:14:42.034178 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:14:42.034885 systemd[1]: Finished ensure-sysext.service. Jul 2 08:14:42.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.109594 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 08:14:42.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:14:42.111235 systemd[1]: Starting audit-rules.service... Jul 2 08:14:42.112396 systemd[1]: Starting clean-ca-certificates.service... Jul 2 08:14:42.113437 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 08:14:42.112000 audit: BPF prog-id=36 op=LOAD Jul 2 08:14:42.114753 systemd[1]: Starting systemd-resolved.service... Jul 2 08:14:42.114000 audit: BPF prog-id=37 op=LOAD Jul 2 08:14:42.116416 systemd[1]: Starting systemd-timesyncd.service... Jul 2 08:14:42.118628 systemd[1]: Starting systemd-update-utmp.service... Jul 2 08:14:42.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.126330 systemd[1]: Finished clean-ca-certificates.service. Jul 2 08:14:42.126530 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:14:42.128830 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:14:42.128858 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:14:42.130000 audit[1208]: SYSTEM_BOOT pid=1208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.132793 systemd[1]: Finished systemd-update-utmp.service. Jul 2 08:14:42.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.153561 systemd[1]: Finished systemd-journal-catalog-update.service. 
Jul 2 08:14:42.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:14:42.170301 ldconfig[1088]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:14:42.183000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 08:14:42.183000 audit[1223]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf753dda0 a2=420 a3=0 items=0 ppid=1203 pid=1223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:14:42.183000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 08:14:42.185523 systemd[1]: Finished audit-rules.service. Jul 2 08:14:42.186124 augenrules[1223]: No rules Jul 2 08:14:42.187503 systemd[1]: Finished ldconfig.service. Jul 2 08:14:42.188608 systemd[1]: Starting systemd-update-done.service... Jul 2 08:14:42.189538 systemd-resolved[1206]: Positive Trust Anchors: Jul 2 08:14:42.189546 systemd-resolved[1206]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:14:42.189564 systemd-resolved[1206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:14:42.193500 systemd-networkd[1061]: ens192: Gained IPv6LL Jul 2 08:14:42.194060 systemd[1]: Finished systemd-update-done.service. Jul 2 08:14:42.195950 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 08:14:42.201009 systemd[1]: Started systemd-timesyncd.service. Jul 2 08:14:42.201169 systemd[1]: Reached target time-set.target. Jul 2 08:14:42.214195 systemd-resolved[1206]: Defaulting to hostname 'linux'. Jul 2 08:14:42.215315 systemd[1]: Started systemd-resolved.service. Jul 2 08:14:42.215489 systemd[1]: Reached target network.target. Jul 2 08:14:42.215578 systemd[1]: Reached target network-online.target. Jul 2 08:14:42.215666 systemd[1]: Reached target nss-lookup.target. Jul 2 08:14:42.215759 systemd[1]: Reached target sysinit.target. Jul 2 08:14:42.215893 systemd[1]: Started motdgen.path. Jul 2 08:14:42.215991 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 08:14:42.216195 systemd[1]: Started logrotate.timer. Jul 2 08:14:42.216371 systemd[1]: Started mdadm.timer. Jul 2 08:14:42.216454 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 08:14:42.216545 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:14:42.216564 systemd[1]: Reached target paths.target. Jul 2 08:14:42.216658 systemd[1]: Reached target timers.target. 
Jul 2 08:14:42.216906 systemd[1]: Listening on dbus.socket. Jul 2 08:14:42.217779 systemd[1]: Starting docker.socket... Jul 2 08:14:42.220555 systemd[1]: Listening on sshd.socket. Jul 2 08:14:42.220732 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:14:42.220976 systemd[1]: Listening on docker.socket. Jul 2 08:14:42.221100 systemd[1]: Reached target sockets.target. Jul 2 08:14:42.221185 systemd[1]: Reached target basic.target. Jul 2 08:14:42.221290 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:14:42.221305 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:14:42.221990 systemd[1]: Starting containerd.service... Jul 2 08:14:42.222797 systemd[1]: Starting dbus.service... Jul 2 08:14:42.223790 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 08:14:42.225191 systemd[1]: Starting extend-filesystems.service... Jul 2 08:14:42.225518 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 08:14:42.226912 systemd[1]: Starting kubelet.service... Jul 2 08:14:42.228411 jq[1234]: false Jul 2 08:14:42.227803 systemd[1]: Starting motdgen.service... Jul 2 08:14:42.228903 systemd[1]: Starting prepare-helm.service... Jul 2 08:14:42.232632 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 08:14:42.233535 systemd[1]: Starting sshd-keygen.service... Jul 2 08:14:42.235304 systemd[1]: Starting systemd-logind.service... Jul 2 08:14:42.235438 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 08:14:42.235473 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:14:42.235981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:14:42.236411 systemd[1]: Starting update-engine.service... Jul 2 08:14:42.237522 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 08:14:42.238989 systemd[1]: Starting vmtoolsd.service... Jul 2 08:14:42.242494 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:14:42.242610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 08:14:42.243575 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:14:42.243710 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 08:14:42.248718 jq[1244]: true Jul 2 08:14:42.253234 jq[1253]: true Jul 2 08:14:42.264290 tar[1251]: linux-amd64/helm Jul 2 08:14:42.270868 dbus-daemon[1233]: [system] SELinux support is enabled Jul 2 08:14:42.271055 systemd[1]: Started dbus.service. Jul 2 08:14:42.272420 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:14:42.272442 systemd[1]: Reached target system-config.target. Jul 2 08:14:42.272566 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:14:42.272575 systemd[1]: Reached target user-config.target. Jul 2 08:14:42.274996 systemd[1]: Started vmtoolsd.service. 
Jul 2 08:14:42.276589 extend-filesystems[1235]: Found loop1 Jul 2 08:14:42.276896 extend-filesystems[1235]: Found sda Jul 2 08:14:42.277264 extend-filesystems[1235]: Found sda1 Jul 2 08:14:42.277264 extend-filesystems[1235]: Found sda2 Jul 2 08:14:42.277264 extend-filesystems[1235]: Found sda3 Jul 2 08:14:42.277264 extend-filesystems[1235]: Found usr Jul 2 08:14:42.277264 extend-filesystems[1235]: Found sda4 Jul 2 08:14:42.277264 extend-filesystems[1235]: Found sda6 Jul 2 08:14:42.277264 extend-filesystems[1235]: Found sda7 Jul 2 08:14:42.277264 extend-filesystems[1235]: Found sda9 Jul 2 08:14:42.277264 extend-filesystems[1235]: Checking size of /dev/sda9 Jul 2 08:14:42.284778 extend-filesystems[1235]: Old size kept for /dev/sda9 Jul 2 08:14:42.284999 extend-filesystems[1235]: Found sr0 Jul 2 08:14:42.285413 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:14:42.285537 systemd[1]: Finished extend-filesystems.service. Jul 2 08:14:42.304303 bash[1274]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:14:42.304980 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 08:14:42.310787 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:14:42.310883 systemd[1]: Finished motdgen.service. Jul 2 08:14:42.343333 kernel: NET: Registered PF_VSOCK protocol family Jul 2 08:14:42.349892 env[1254]: time="2024-07-02T08:14:42.349861142Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 08:14:42.374513 update_engine[1243]: I0702 08:14:42.373390 1243 main.cc:92] Flatcar Update Engine starting Jul 2 08:14:42.377005 systemd[1]: Started update-engine.service. Jul 2 08:14:42.378676 systemd[1]: Started locksmithd.service. 
Jul 2 08:14:42.379202 update_engine[1243]: I0702 08:14:42.379150 1243 update_check_scheduler.cc:74] Next update check in 11m47s Jul 2 08:14:42.405835 systemd-logind[1241]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 08:14:42.405849 systemd-logind[1241]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 08:14:42.408043 systemd-logind[1241]: New seat seat0. Jul 2 08:14:42.413732 systemd[1]: Started systemd-logind.service. Jul 2 08:14:42.428515 env[1254]: time="2024-07-02T08:14:42.428301558Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:14:42.428515 env[1254]: time="2024-07-02T08:14:42.428416901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:15:38.082257 systemd-resolved[1206]: Clock change detected. Flushing caches. Jul 2 08:15:38.082369 systemd-timesyncd[1207]: Contacted time server 104.156.246.53:123 (0.flatcar.pool.ntp.org). Jul 2 08:15:38.082408 systemd-timesyncd[1207]: Initial clock synchronization to Tue 2024-07-02 08:15:38.082223 UTC. Jul 2 08:15:38.085510 env[1254]: time="2024-07-02T08:15:38.085424761Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:15:38.085510 env[1254]: time="2024-07-02T08:15:38.085453149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085597863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085610105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085617645Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085623610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085671938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085816443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085886599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085914818Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085952886Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:15:38.086223 env[1254]: time="2024-07-02T08:15:38.085960977Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:15:38.100059 env[1254]: time="2024-07-02T08:15:38.100029808Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:15:38.100059 env[1254]: time="2024-07-02T08:15:38.100059274Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100069376Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100093438Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100102593Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100110651Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100118040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100125624Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100132933Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100140050Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100151 env[1254]: time="2024-07-02T08:15:38.100146644Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100290 env[1254]: time="2024-07-02T08:15:38.100153929Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:15:38.100290 env[1254]: time="2024-07-02T08:15:38.100229599Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:15:38.100324 env[1254]: time="2024-07-02T08:15:38.100290408Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:15:38.100447 env[1254]: time="2024-07-02T08:15:38.100435273Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:15:38.100478 env[1254]: time="2024-07-02T08:15:38.100453224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100478 env[1254]: time="2024-07-02T08:15:38.100464227Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:15:38.100521 env[1254]: time="2024-07-02T08:15:38.100495141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100521 env[1254]: time="2024-07-02T08:15:38.100503288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100521 env[1254]: time="2024-07-02T08:15:38.100510016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 08:15:38.100521 env[1254]: time="2024-07-02T08:15:38.100516172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100587 env[1254]: time="2024-07-02T08:15:38.100522386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100587 env[1254]: time="2024-07-02T08:15:38.100529673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100587 env[1254]: time="2024-07-02T08:15:38.100535815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100587 env[1254]: time="2024-07-02T08:15:38.100542182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100587 env[1254]: time="2024-07-02T08:15:38.100549610Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:15:38.100671 env[1254]: time="2024-07-02T08:15:38.100616147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100671 env[1254]: time="2024-07-02T08:15:38.100625050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100671 env[1254]: time="2024-07-02T08:15:38.100631435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100671 env[1254]: time="2024-07-02T08:15:38.100637418Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:15:38.100671 env[1254]: time="2024-07-02T08:15:38.100647361Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 08:15:38.100671 env[1254]: time="2024-07-02T08:15:38.100653767Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:15:38.100671 env[1254]: time="2024-07-02T08:15:38.100664184Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 08:15:38.100777 env[1254]: time="2024-07-02T08:15:38.100685565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 08:15:38.100835 env[1254]: time="2024-07-02T08:15:38.100803742Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:15:38.102515 env[1254]: time="2024-07-02T08:15:38.100837881Z" level=info msg="Connect containerd service" Jul 2 08:15:38.102515 env[1254]: time="2024-07-02T08:15:38.100860638Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:15:38.102515 env[1254]: time="2024-07-02T08:15:38.101353552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:15:38.102827 env[1254]: time="2024-07-02T08:15:38.102812913Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:15:38.102864 env[1254]: time="2024-07-02T08:15:38.102842680Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 08:15:38.102936 systemd[1]: Started containerd.service. 
Jul 2 08:15:38.105086 env[1254]: time="2024-07-02T08:15:38.105054232Z" level=info msg="Start subscribing containerd event" Jul 2 08:15:38.105181 env[1254]: time="2024-07-02T08:15:38.105169221Z" level=info msg="Start recovering state" Jul 2 08:15:38.105276 env[1254]: time="2024-07-02T08:15:38.105266304Z" level=info msg="Start event monitor" Jul 2 08:15:38.105331 env[1254]: time="2024-07-02T08:15:38.105318241Z" level=info msg="Start snapshots syncer" Jul 2 08:15:38.105377 env[1254]: time="2024-07-02T08:15:38.105367493Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:15:38.105424 env[1254]: time="2024-07-02T08:15:38.105411430Z" level=info msg="Start streaming server" Jul 2 08:15:38.107208 env[1254]: time="2024-07-02T08:15:38.105086358Z" level=info msg="containerd successfully booted in 0.108259s" Jul 2 08:15:38.474826 tar[1251]: linux-amd64/LICENSE Jul 2 08:15:38.474948 tar[1251]: linux-amd64/README.md Jul 2 08:15:38.479410 systemd[1]: Finished prepare-helm.service. Jul 2 08:15:38.503156 sshd_keygen[1256]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:15:38.520202 systemd[1]: Finished sshd-keygen.service. Jul 2 08:15:38.521378 systemd[1]: Starting issuegen.service... Jul 2 08:15:38.525446 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:15:38.525566 systemd[1]: Finished issuegen.service. Jul 2 08:15:38.526646 systemd[1]: Starting systemd-user-sessions.service... Jul 2 08:15:38.532443 systemd[1]: Finished systemd-user-sessions.service. Jul 2 08:15:38.533491 systemd[1]: Started getty@tty1.service. Jul 2 08:15:38.534365 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 08:15:38.534574 systemd[1]: Reached target getty.target. Jul 2 08:15:38.566221 locksmithd[1296]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:15:39.262792 systemd[1]: Started kubelet.service. Jul 2 08:15:39.263121 systemd[1]: Reached target multi-user.target. 
Jul 2 08:15:39.264142 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 08:15:39.268222 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 08:15:39.268321 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 08:15:39.268639 systemd[1]: Startup finished in 913ms (kernel) + 4.898s (initrd) + 6.170s (userspace) = 11.982s. Jul 2 08:15:39.794999 login[1360]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 08:15:39.796471 login[1361]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 08:15:39.850945 systemd[1]: Created slice user-500.slice. Jul 2 08:15:39.852057 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 08:15:39.854306 systemd-logind[1241]: New session 1 of user core. Jul 2 08:15:39.856887 systemd-logind[1241]: New session 2 of user core. Jul 2 08:15:39.870616 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 08:15:39.872040 systemd[1]: Starting user@500.service... Jul 2 08:15:39.884661 (systemd)[1368]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:15:40.114356 systemd[1368]: Queued start job for default target default.target. Jul 2 08:15:40.115124 systemd[1368]: Reached target paths.target. Jul 2 08:15:40.115144 systemd[1368]: Reached target sockets.target. Jul 2 08:15:40.115157 systemd[1368]: Reached target timers.target. Jul 2 08:15:40.115167 systemd[1368]: Reached target basic.target. Jul 2 08:15:40.115243 systemd[1]: Started user@500.service. Jul 2 08:15:40.115943 systemd[1368]: Reached target default.target. Jul 2 08:15:40.115977 systemd[1368]: Startup finished in 227ms. Jul 2 08:15:40.116143 systemd[1]: Started session-1.scope. Jul 2 08:15:40.116865 systemd[1]: Started session-2.scope. 
Jul 2 08:15:40.944311 kubelet[1365]: E0702 08:15:40.944289 1365 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:15:40.945497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:15:40.945592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:15:51.196212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:15:51.196380 systemd[1]: Stopped kubelet.service. Jul 2 08:15:51.197601 systemd[1]: Starting kubelet.service... Jul 2 08:15:51.251401 systemd[1]: Started kubelet.service. Jul 2 08:15:51.326258 kubelet[1397]: E0702 08:15:51.326219 1397 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:15:51.328981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:15:51.329073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:16:01.579721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:16:01.579913 systemd[1]: Stopped kubelet.service. Jul 2 08:16:01.581128 systemd[1]: Starting kubelet.service... Jul 2 08:16:01.862026 systemd[1]: Started kubelet.service. 
Jul 2 08:16:01.919114 kubelet[1408]: E0702 08:16:01.919091 1408 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:16:01.920251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:16:01.920324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:16:12.035553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:16:12.035677 systemd[1]: Stopped kubelet.service. Jul 2 08:16:12.036730 systemd[1]: Starting kubelet.service... Jul 2 08:16:12.250135 systemd[1]: Started kubelet.service. Jul 2 08:16:12.290170 kubelet[1418]: E0702 08:16:12.290102 1418 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:16:12.291203 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:16:12.291274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:16:18.063669 systemd[1]: Created slice system-sshd.slice. Jul 2 08:16:18.064535 systemd[1]: Started sshd@0-139.178.70.99:22-139.178.68.195:55094.service. Jul 2 08:16:18.141594 sshd[1425]: Accepted publickey for core from 139.178.68.195 port 55094 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:16:18.142343 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:16:18.145513 systemd[1]: Started session-3.scope. Jul 2 08:16:18.145869 systemd-logind[1241]: New session 3 of user core. 
Jul 2 08:16:18.193438 systemd[1]: Started sshd@1-139.178.70.99:22-139.178.68.195:55108.service. Jul 2 08:16:18.234812 sshd[1430]: Accepted publickey for core from 139.178.68.195 port 55108 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:16:18.235769 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:16:18.238596 systemd[1]: Started session-4.scope. Jul 2 08:16:18.238762 systemd-logind[1241]: New session 4 of user core. Jul 2 08:16:18.289733 sshd[1430]: pam_unix(sshd:session): session closed for user core Jul 2 08:16:18.291267 systemd[1]: Started sshd@2-139.178.70.99:22-139.178.68.195:55124.service. Jul 2 08:16:18.292947 systemd[1]: sshd@1-139.178.70.99:22-139.178.68.195:55108.service: Deactivated successfully. Jul 2 08:16:18.293456 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:16:18.294363 systemd-logind[1241]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:16:18.295121 systemd-logind[1241]: Removed session 4. Jul 2 08:16:18.322018 sshd[1435]: Accepted publickey for core from 139.178.68.195 port 55124 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:16:18.323157 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:16:18.326754 systemd[1]: Started session-5.scope. Jul 2 08:16:18.327935 systemd-logind[1241]: New session 5 of user core. Jul 2 08:16:18.375164 sshd[1435]: pam_unix(sshd:session): session closed for user core Jul 2 08:16:18.377413 systemd[1]: Started sshd@3-139.178.70.99:22-139.178.68.195:55136.service. Jul 2 08:16:18.377742 systemd[1]: sshd@2-139.178.70.99:22-139.178.68.195:55124.service: Deactivated successfully. Jul 2 08:16:18.378264 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:16:18.378671 systemd-logind[1241]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:16:18.379428 systemd-logind[1241]: Removed session 5. 
Jul 2 08:16:18.408233 sshd[1441]: Accepted publickey for core from 139.178.68.195 port 55136 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:16:18.408964 sshd[1441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:16:18.411399 systemd-logind[1241]: New session 6 of user core. Jul 2 08:16:18.411833 systemd[1]: Started session-6.scope. Jul 2 08:16:18.462358 sshd[1441]: pam_unix(sshd:session): session closed for user core Jul 2 08:16:18.464413 systemd[1]: sshd@3-139.178.70.99:22-139.178.68.195:55136.service: Deactivated successfully. Jul 2 08:16:18.464752 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:16:18.465175 systemd-logind[1241]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:16:18.465714 systemd[1]: Started sshd@4-139.178.70.99:22-139.178.68.195:55146.service. Jul 2 08:16:18.466457 systemd-logind[1241]: Removed session 6. Jul 2 08:16:18.494521 sshd[1448]: Accepted publickey for core from 139.178.68.195 port 55146 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:16:18.495420 sshd[1448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:16:18.498639 systemd-logind[1241]: New session 7 of user core. Jul 2 08:16:18.499174 systemd[1]: Started session-7.scope. Jul 2 08:16:18.559777 sudo[1451]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:16:18.559959 sudo[1451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:16:18.576112 systemd[1]: Starting docker.service... 
Jul 2 08:16:18.599253 env[1461]: time="2024-07-02T08:16:18.599224968Z" level=info msg="Starting up" Jul 2 08:16:18.600573 env[1461]: time="2024-07-02T08:16:18.600557743Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 08:16:18.600573 env[1461]: time="2024-07-02T08:16:18.600570464Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 08:16:18.600643 env[1461]: time="2024-07-02T08:16:18.600582905Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 08:16:18.600643 env[1461]: time="2024-07-02T08:16:18.600589691Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 08:16:18.602353 env[1461]: time="2024-07-02T08:16:18.602338761Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 08:16:18.602420 env[1461]: time="2024-07-02T08:16:18.602406737Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 08:16:18.602482 env[1461]: time="2024-07-02T08:16:18.602467443Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 08:16:18.602529 env[1461]: time="2024-07-02T08:16:18.602518857Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 08:16:18.622081 env[1461]: time="2024-07-02T08:16:18.622052570Z" level=info msg="Loading containers: start." Jul 2 08:16:18.737911 kernel: Initializing XFRM netlink socket Jul 2 08:16:18.815328 env[1461]: time="2024-07-02T08:16:18.815305711Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 08:16:18.882321 systemd-networkd[1061]: docker0: Link UP Jul 2 08:16:18.887371 env[1461]: time="2024-07-02T08:16:18.887347041Z" level=info msg="Loading containers: done." 
Jul 2 08:16:18.895169 env[1461]: time="2024-07-02T08:16:18.895144519Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:16:18.895365 env[1461]: time="2024-07-02T08:16:18.895354319Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 08:16:18.895468 env[1461]: time="2024-07-02T08:16:18.895458550Z" level=info msg="Daemon has completed initialization" Jul 2 08:16:18.901765 systemd[1]: Started docker.service. Jul 2 08:16:18.906365 env[1461]: time="2024-07-02T08:16:18.906325389Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:16:19.999987 env[1254]: time="2024-07-02T08:16:19.999953981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 08:16:20.603678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359596291.mount: Deactivated successfully. Jul 2 08:16:22.171596 env[1254]: time="2024-07-02T08:16:22.171559260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:22.173185 env[1254]: time="2024-07-02T08:16:22.173167843Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:22.173995 env[1254]: time="2024-07-02T08:16:22.173982074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:22.175046 env[1254]: time="2024-07-02T08:16:22.175033566Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:22.175542 env[1254]: time="2024-07-02T08:16:22.175528209Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 08:16:22.181324 env[1254]: time="2024-07-02T08:16:22.181306932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 08:16:22.535803 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 08:16:22.536063 systemd[1]: Stopped kubelet.service. Jul 2 08:16:22.537883 systemd[1]: Starting kubelet.service... Jul 2 08:16:22.591887 systemd[1]: Started kubelet.service. Jul 2 08:16:22.616987 kubelet[1599]: E0702 08:16:22.616953 1599 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:16:22.618320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:16:22.618414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:16:23.291334 update_engine[1243]: I0702 08:16:23.291088 1243 update_attempter.cc:509] Updating boot flags... 
Jul 2 08:16:25.011446 env[1254]: time="2024-07-02T08:16:25.011405829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:25.023105 env[1254]: time="2024-07-02T08:16:25.023087600Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:25.029940 env[1254]: time="2024-07-02T08:16:25.029924156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:25.040505 env[1254]: time="2024-07-02T08:16:25.040488360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:25.041272 env[1254]: time="2024-07-02T08:16:25.041252607Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 08:16:25.049066 env[1254]: time="2024-07-02T08:16:25.049047019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 08:16:26.425960 env[1254]: time="2024-07-02T08:16:26.425867359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:26.437061 env[1254]: time="2024-07-02T08:16:26.437037886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 08:16:26.446867 env[1254]: time="2024-07-02T08:16:26.446848556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:26.451590 env[1254]: time="2024-07-02T08:16:26.451572954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:26.452114 env[1254]: time="2024-07-02T08:16:26.452087175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 08:16:26.459383 env[1254]: time="2024-07-02T08:16:26.459355429Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 08:16:27.958458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951233539.mount: Deactivated successfully. 
Jul 2 08:16:28.660541 env[1254]: time="2024-07-02T08:16:28.660507865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:28.690596 env[1254]: time="2024-07-02T08:16:28.690574037Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:28.698312 env[1254]: time="2024-07-02T08:16:28.698295740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:28.711403 env[1254]: time="2024-07-02T08:16:28.711384414Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:16:28.711953 env[1254]: time="2024-07-02T08:16:28.711934060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 08:16:28.719840 env[1254]: time="2024-07-02T08:16:28.719810049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 08:16:29.791513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2951100343.mount: Deactivated successfully. 
Jul 2 08:16:31.076576 env[1254]: time="2024-07-02T08:16:31.076543054Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.084042 env[1254]: time="2024-07-02T08:16:31.084027326Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.086284 env[1254]: time="2024-07-02T08:16:31.086265574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.091138 env[1254]: time="2024-07-02T08:16:31.091125323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.091527 env[1254]: time="2024-07-02T08:16:31.091512499Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 08:16:31.102536 env[1254]: time="2024-07-02T08:16:31.102508133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 08:16:31.692904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577313070.mount: Deactivated successfully.
Jul 2 08:16:31.695541 env[1254]: time="2024-07-02T08:16:31.695513441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.699733 env[1254]: time="2024-07-02T08:16:31.699704494Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.700740 env[1254]: time="2024-07-02T08:16:31.700724180Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.701380 env[1254]: time="2024-07-02T08:16:31.701361613Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:31.702199 env[1254]: time="2024-07-02T08:16:31.702177878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 08:16:31.708579 env[1254]: time="2024-07-02T08:16:31.708540802Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 08:16:32.387079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947460619.mount: Deactivated successfully.
Jul 2 08:16:32.785778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 2 08:16:32.785909 systemd[1]: Stopped kubelet.service.
Jul 2 08:16:32.787141 systemd[1]: Starting kubelet.service...
Jul 2 08:16:35.129158 systemd[1]: Started kubelet.service.
Jul 2 08:16:35.193559 kubelet[1654]: E0702 08:16:35.193516 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 08:16:35.194812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 08:16:35.194911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 08:16:36.562937 env[1254]: time="2024-07-02T08:16:36.562905212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:36.564406 env[1254]: time="2024-07-02T08:16:36.564392095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:36.565931 env[1254]: time="2024-07-02T08:16:36.565917173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:36.567348 env[1254]: time="2024-07-02T08:16:36.567329544Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:36.568373 env[1254]: time="2024-07-02T08:16:36.567888493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jul 2 08:16:38.494115 systemd[1]: Stopped kubelet.service.
Jul 2 08:16:38.495491 systemd[1]: Starting kubelet.service...
Jul 2 08:16:38.513457 systemd[1]: Reloading.
Jul 2 08:16:38.584233 /usr/lib/systemd/system-generators/torcx-generator[1746]: time="2024-07-02T08:16:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 08:16:38.584260 /usr/lib/systemd/system-generators/torcx-generator[1746]: time="2024-07-02T08:16:38Z" level=info msg="torcx already run"
Jul 2 08:16:38.643801 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 08:16:38.643935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 08:16:38.657301 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:16:38.726954 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 08:16:38.727109 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 08:16:38.727348 systemd[1]: Stopped kubelet.service.
Jul 2 08:16:38.728986 systemd[1]: Starting kubelet.service...
Jul 2 08:16:39.617778 systemd[1]: Started kubelet.service.
Jul 2 08:16:39.840993 kubelet[1809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:16:39.841258 kubelet[1809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 08:16:39.841309 kubelet[1809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 08:16:39.842542 kubelet[1809]: I0702 08:16:39.842511 1809 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 08:16:40.128762 kubelet[1809]: I0702 08:16:40.128743 1809 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 08:16:40.128873 kubelet[1809]: I0702 08:16:40.128865 1809 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 08:16:40.129056 kubelet[1809]: I0702 08:16:40.129048 1809 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 08:16:40.294626 kubelet[1809]: I0702 08:16:40.294605 1809 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 08:16:40.329238 kubelet[1809]: E0702 08:16:40.329214 1809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.367443 kubelet[1809]: I0702 08:16:40.367424 1809 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 08:16:40.394667 kubelet[1809]: I0702 08:16:40.394386 1809 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 08:16:40.394667 kubelet[1809]: I0702 08:16:40.394414 1809 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 08:16:40.394667 kubelet[1809]: I0702 08:16:40.394559 1809 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 08:16:40.394667 kubelet[1809]: I0702 08:16:40.394569 1809 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 08:16:40.394667 kubelet[1809]: I0702 08:16:40.394654 1809 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:16:40.406366 kubelet[1809]: I0702 08:16:40.406344 1809 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 08:16:40.406366 kubelet[1809]: I0702 08:16:40.406359 1809 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 08:16:40.406457 kubelet[1809]: I0702 08:16:40.406375 1809 kubelet.go:312] "Adding apiserver pod source"
Jul 2 08:16:40.406457 kubelet[1809]: I0702 08:16:40.406388 1809 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 08:16:40.423387 kubelet[1809]: W0702 08:16:40.423347 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.423435 kubelet[1809]: E0702 08:16:40.423391 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.492590 kubelet[1809]: W0702 08:16:40.492556 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.492744 kubelet[1809]: E0702 08:16:40.492732 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.493210 kubelet[1809]: I0702 08:16:40.493197 1809 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 08:16:40.508848 kubelet[1809]: I0702 08:16:40.508836 1809 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 08:16:40.508981 kubelet[1809]: W0702 08:16:40.508971 1809 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 08:16:40.509671 kubelet[1809]: I0702 08:16:40.509661 1809 server.go:1264] "Started kubelet"
Jul 2 08:16:40.521973 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 08:16:40.522560 kubelet[1809]: I0702 08:16:40.522547 1809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 08:16:40.522889 kubelet[1809]: I0702 08:16:40.522870 1809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 08:16:40.523704 kubelet[1809]: I0702 08:16:40.523692 1809 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 08:16:40.524358 kubelet[1809]: I0702 08:16:40.524327 1809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 08:16:40.524534 kubelet[1809]: I0702 08:16:40.524524 1809 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 08:16:40.526553 kubelet[1809]: I0702 08:16:40.526544 1809 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 08:16:40.526806 kubelet[1809]: I0702 08:16:40.526798 1809 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 08:16:40.526890 kubelet[1809]: I0702 08:16:40.526883 1809 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 08:16:40.543303 kubelet[1809]: W0702 08:16:40.543263 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.543303 kubelet[1809]: E0702 08:16:40.543307 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.552285 kubelet[1809]: E0702 08:16:40.552260 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="200ms"
Jul 2 08:16:40.552915 kubelet[1809]: I0702 08:16:40.552890 1809 factory.go:221] Registration of the containerd container factory successfully
Jul 2 08:16:40.552915 kubelet[1809]: I0702 08:16:40.552914 1809 factory.go:221] Registration of the systemd container factory successfully
Jul 2 08:16:40.552977 kubelet[1809]: I0702 08:16:40.552957 1809 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 08:16:40.563318 kubelet[1809]: E0702 08:16:40.563245 1809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.99:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.99:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de576559a877b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 08:16:40.509634484 +0000 UTC m=+0.888756220,LastTimestamp:2024-07-02 08:16:40.509634484 +0000 UTC m=+0.888756220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 2 08:16:40.576631 kubelet[1809]: E0702 08:16:40.576612 1809 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 08:16:40.576711 kubelet[1809]: I0702 08:16:40.576637 1809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 08:16:40.580938 kubelet[1809]: I0702 08:16:40.580830 1809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 08:16:40.580938 kubelet[1809]: I0702 08:16:40.580853 1809 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 08:16:40.580938 kubelet[1809]: I0702 08:16:40.580865 1809 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 08:16:40.580938 kubelet[1809]: E0702 08:16:40.580913 1809 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 08:16:40.584658 kubelet[1809]: W0702 08:16:40.584615 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.584721 kubelet[1809]: E0702 08:16:40.584662 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:40.584952 kubelet[1809]: I0702 08:16:40.584940 1809 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 08:16:40.584952 kubelet[1809]: I0702 08:16:40.584949 1809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 08:16:40.585016 kubelet[1809]: I0702 08:16:40.584962 1809 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 08:16:40.590608 kubelet[1809]: I0702 08:16:40.590582 1809 policy_none.go:49] "None policy: Start"
Jul 2 08:16:40.592661 kubelet[1809]: I0702 08:16:40.592651 1809 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 08:16:40.592830 kubelet[1809]: I0702 08:16:40.592824 1809 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 08:16:40.598388 systemd[1]: Created slice kubepods.slice.
Jul 2 08:16:40.601502 systemd[1]: Created slice kubepods-burstable.slice.
Jul 2 08:16:40.603772 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 08:16:40.608669 kubelet[1809]: I0702 08:16:40.608644 1809 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 08:16:40.608798 kubelet[1809]: I0702 08:16:40.608771 1809 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 2 08:16:40.608865 kubelet[1809]: I0702 08:16:40.608854 1809 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 08:16:40.611985 kubelet[1809]: E0702 08:16:40.611967 1809 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 08:16:40.627676 kubelet[1809]: I0702 08:16:40.627653 1809 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 08:16:40.627873 kubelet[1809]: E0702 08:16:40.627856 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Jul 2 08:16:40.682044 kubelet[1809]: I0702 08:16:40.681271 1809 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 08:16:40.682253 kubelet[1809]: I0702 08:16:40.682243 1809 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 08:16:40.682829 kubelet[1809]: I0702 08:16:40.682821 1809 topology_manager.go:215] "Topology Admit Handler" podUID="95a1f460b21cfe16d84d8091532e629f" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 08:16:40.686114 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice.
Jul 2 08:16:40.693735 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice.
Jul 2 08:16:40.696369 systemd[1]: Created slice kubepods-burstable-pod95a1f460b21cfe16d84d8091532e629f.slice.
Jul 2 08:16:40.728052 kubelet[1809]: I0702 08:16:40.728031 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:16:40.728189 kubelet[1809]: I0702 08:16:40.728177 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:16:40.728247 kubelet[1809]: I0702 08:16:40.728237 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 08:16:40.728310 kubelet[1809]: I0702 08:16:40.728300 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95a1f460b21cfe16d84d8091532e629f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95a1f460b21cfe16d84d8091532e629f\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 08:16:40.728369 kubelet[1809]: I0702 08:16:40.728358 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95a1f460b21cfe16d84d8091532e629f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95a1f460b21cfe16d84d8091532e629f\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 08:16:40.728436 kubelet[1809]: I0702 08:16:40.728428 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:16:40.728493 kubelet[1809]: I0702 08:16:40.728485 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:16:40.728548 kubelet[1809]: I0702 08:16:40.728539 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 08:16:40.728604 kubelet[1809]: I0702 08:16:40.728596 1809 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95a1f460b21cfe16d84d8091532e629f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95a1f460b21cfe16d84d8091532e629f\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 08:16:40.753378 kubelet[1809]: E0702 08:16:40.753346 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="400ms"
Jul 2 08:16:40.829503 kubelet[1809]: I0702 08:16:40.829487 1809 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 08:16:40.829819 kubelet[1809]: E0702 08:16:40.829808 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Jul 2 08:16:40.993693 env[1254]: time="2024-07-02T08:16:40.993448498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}"
Jul 2 08:16:40.995769 env[1254]: time="2024-07-02T08:16:40.995749320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}"
Jul 2 08:16:40.998578 env[1254]: time="2024-07-02T08:16:40.998409698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95a1f460b21cfe16d84d8091532e629f,Namespace:kube-system,Attempt:0,}"
Jul 2 08:16:41.153858 kubelet[1809]: E0702 08:16:41.153835 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="800ms"
Jul 2 08:16:41.230737 kubelet[1809]: I0702 08:16:41.230716 1809 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jul 2 08:16:41.230932 kubelet[1809]: E0702 08:16:41.230914 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost"
Jul 2 08:16:41.267416 kubelet[1809]: W0702 08:16:41.267311 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:41.267416 kubelet[1809]: E0702 08:16:41.267350 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:41.404220 kubelet[1809]: W0702 08:16:41.404176 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:41.404220 kubelet[1809]: E0702 08:16:41.404219 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:41.509122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878465409.mount: Deactivated successfully.
Jul 2 08:16:41.511934 env[1254]: time="2024-07-02T08:16:41.511889969Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.513668 env[1254]: time="2024-07-02T08:16:41.513652736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.514910 env[1254]: time="2024-07-02T08:16:41.514877776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.516450 env[1254]: time="2024-07-02T08:16:41.516423669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.517096 env[1254]: time="2024-07-02T08:16:41.517084135Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.519424 env[1254]: time="2024-07-02T08:16:41.519348727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.521800 env[1254]: time="2024-07-02T08:16:41.521773196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.525401 env[1254]: time="2024-07-02T08:16:41.525374893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.525783 env[1254]: time="2024-07-02T08:16:41.525768278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.528265 env[1254]: time="2024-07-02T08:16:41.527566770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.528355 env[1254]: time="2024-07-02T08:16:41.528336290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.529382 env[1254]: time="2024-07-02T08:16:41.529352048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 08:16:41.555458 env[1254]: time="2024-07-02T08:16:41.546805336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:16:41.555458 env[1254]: time="2024-07-02T08:16:41.546823486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:16:41.555458 env[1254]: time="2024-07-02T08:16:41.546830358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:16:41.555458 env[1254]: time="2024-07-02T08:16:41.549817710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/835a2c24677f6c7e0a4b9064a5d6e766b60fc058eea5d4d20ccd2d12a3560d21 pid=1866 runtime=io.containerd.runc.v2
Jul 2 08:16:41.556795 env[1254]: time="2024-07-02T08:16:41.553501875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:16:41.556795 env[1254]: time="2024-07-02T08:16:41.553525061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:16:41.556795 env[1254]: time="2024-07-02T08:16:41.553544681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:16:41.556795 env[1254]: time="2024-07-02T08:16:41.553790773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1969f9666f769a68581787974937232b5e08699637826dac3a21d70f8b660bd3 pid=1884 runtime=io.containerd.runc.v2
Jul 2 08:16:41.556932 env[1254]: time="2024-07-02T08:16:41.541333750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:16:41.556932 env[1254]: time="2024-07-02T08:16:41.541370225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:16:41.556932 env[1254]: time="2024-07-02T08:16:41.541379011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:16:41.556932 env[1254]: time="2024-07-02T08:16:41.541480354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c628d98a2c7733dd3aca1163c2acfdcdeb61239f12cbdced21ab47e6a2d7ec7d pid=1848 runtime=io.containerd.runc.v2
Jul 2 08:16:41.565425 systemd[1]: Started cri-containerd-835a2c24677f6c7e0a4b9064a5d6e766b60fc058eea5d4d20ccd2d12a3560d21.scope.
Jul 2 08:16:41.573209 systemd[1]: Started cri-containerd-c628d98a2c7733dd3aca1163c2acfdcdeb61239f12cbdced21ab47e6a2d7ec7d.scope.
Jul 2 08:16:41.590554 systemd[1]: Started cri-containerd-1969f9666f769a68581787974937232b5e08699637826dac3a21d70f8b660bd3.scope.
Jul 2 08:16:41.620134 env[1254]: time="2024-07-02T08:16:41.620084215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:95a1f460b21cfe16d84d8091532e629f,Namespace:kube-system,Attempt:0,} returns sandbox id \"835a2c24677f6c7e0a4b9064a5d6e766b60fc058eea5d4d20ccd2d12a3560d21\""
Jul 2 08:16:41.622328 env[1254]: time="2024-07-02T08:16:41.622308146Z" level=info msg="CreateContainer within sandbox \"835a2c24677f6c7e0a4b9064a5d6e766b60fc058eea5d4d20ccd2d12a3560d21\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 2 08:16:41.637091 env[1254]: time="2024-07-02T08:16:41.637067506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"c628d98a2c7733dd3aca1163c2acfdcdeb61239f12cbdced21ab47e6a2d7ec7d\""
Jul 2 08:16:41.639426 env[1254]: time="2024-07-02T08:16:41.639404120Z" level=info msg="CreateContainer within sandbox \"c628d98a2c7733dd3aca1163c2acfdcdeb61239f12cbdced21ab47e6a2d7ec7d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 2 08:16:41.643301 env[1254]: time="2024-07-02T08:16:41.643286396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"1969f9666f769a68581787974937232b5e08699637826dac3a21d70f8b660bd3\""
Jul 2 08:16:41.665975 env[1254]: time="2024-07-02T08:16:41.665953701Z" level=info msg="CreateContainer within sandbox \"1969f9666f769a68581787974937232b5e08699637826dac3a21d70f8b660bd3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 2 08:16:41.700714 kubelet[1809]: W0702 08:16:41.700645 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:41.700714 kubelet[1809]: E0702 08:16:41.700698 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused
Jul 2 08:16:41.768234 env[1254]: time="2024-07-02T08:16:41.768205219Z" level=info msg="CreateContainer within sandbox \"835a2c24677f6c7e0a4b9064a5d6e766b60fc058eea5d4d20ccd2d12a3560d21\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1c01dd7de367aca11f03f18afc740004977e192fe952838eccdf9ab26dd58bf\""
Jul 2 08:16:41.768940 env[1254]: time="2024-07-02T08:16:41.768885385Z" level=info msg="StartContainer for \"e1c01dd7de367aca11f03f18afc740004977e192fe952838eccdf9ab26dd58bf\""
Jul 2 08:16:41.769780 env[1254]: time="2024-07-02T08:16:41.769739465Z" level=info msg="CreateContainer within sandbox \"1969f9666f769a68581787974937232b5e08699637826dac3a21d70f8b660bd3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c413bdf40076ca75600c3168925629dbc3229fef198bc2a65861f9590e6f6e9\""
Jul 2 08:16:41.770371 env[1254]: time="2024-07-02T08:16:41.770359698Z" level=info msg="StartContainer for \"0c413bdf40076ca75600c3168925629dbc3229fef198bc2a65861f9590e6f6e9\""
Jul 2 08:16:41.771241 env[1254]: time="2024-07-02T08:16:41.771221596Z" level=info msg="CreateContainer within sandbox \"c628d98a2c7733dd3aca1163c2acfdcdeb61239f12cbdced21ab47e6a2d7ec7d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a3fd798d2a01058ea8825065e16058699722393c93d7c1e9ffa29b3b06b948e\""
Jul 2 08:16:41.771472 env[1254]: time="2024-07-02T08:16:41.771461494Z" level=info msg="StartContainer for \"4a3fd798d2a01058ea8825065e16058699722393c93d7c1e9ffa29b3b06b948e\""
Jul 2 08:16:41.782190 systemd[1]: Started cri-containerd-4a3fd798d2a01058ea8825065e16058699722393c93d7c1e9ffa29b3b06b948e.scope.
Jul 2 08:16:41.786545 systemd[1]: Started cri-containerd-e1c01dd7de367aca11f03f18afc740004977e192fe952838eccdf9ab26dd58bf.scope.
Jul 2 08:16:41.798264 systemd[1]: Started cri-containerd-0c413bdf40076ca75600c3168925629dbc3229fef198bc2a65861f9590e6f6e9.scope.
Jul 2 08:16:41.840349 env[1254]: time="2024-07-02T08:16:41.840327054Z" level=info msg="StartContainer for \"4a3fd798d2a01058ea8825065e16058699722393c93d7c1e9ffa29b3b06b948e\" returns successfully" Jul 2 08:16:41.847692 env[1254]: time="2024-07-02T08:16:41.847537570Z" level=info msg="StartContainer for \"0c413bdf40076ca75600c3168925629dbc3229fef198bc2a65861f9590e6f6e9\" returns successfully" Jul 2 08:16:41.851960 env[1254]: time="2024-07-02T08:16:41.851930534Z" level=info msg="StartContainer for \"e1c01dd7de367aca11f03f18afc740004977e192fe952838eccdf9ab26dd58bf\" returns successfully" Jul 2 08:16:41.917262 kubelet[1809]: W0702 08:16:41.917189 1809 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jul 2 08:16:41.917262 kubelet[1809]: E0702 08:16:41.917243 1809 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.99:6443: connect: connection refused Jul 2 08:16:41.954799 kubelet[1809]: E0702 08:16:41.954762 1809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.99:6443: connect: connection refused" interval="1.6s" Jul 2 08:16:42.032005 kubelet[1809]: I0702 08:16:42.031937 1809 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:16:42.032172 kubelet[1809]: E0702 08:16:42.032140 1809 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.99:6443/api/v1/nodes\": dial tcp 139.178.70.99:6443: connect: connection refused" node="localhost" Jul 2 08:16:42.463473 
kubelet[1809]: E0702 08:16:42.463455 1809 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.99:6443: connect: connection refused Jul 2 08:16:43.462514 kubelet[1809]: I0702 08:16:43.462489 1809 apiserver.go:52] "Watching apiserver" Jul 2 08:16:43.527485 kubelet[1809]: I0702 08:16:43.527460 1809 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:16:43.557197 kubelet[1809]: E0702 08:16:43.557175 1809 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 08:16:43.632877 kubelet[1809]: I0702 08:16:43.632852 1809 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:16:43.638459 kubelet[1809]: I0702 08:16:43.638446 1809 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 08:16:45.256017 systemd[1]: Reloading. Jul 2 08:16:45.330513 /usr/lib/systemd/system-generators/torcx-generator[2099]: time="2024-07-02T08:16:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:16:45.334946 /usr/lib/systemd/system-generators/torcx-generator[2099]: time="2024-07-02T08:16:45Z" level=info msg="torcx already run" Jul 2 08:16:45.418506 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:16:45.418633 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Jul 2 08:16:45.430536 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:16:45.502287 systemd[1]: Stopping kubelet.service... Jul 2 08:16:45.513152 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:16:45.513256 systemd[1]: Stopped kubelet.service. Jul 2 08:16:45.514417 systemd[1]: Starting kubelet.service... Jul 2 08:16:46.138737 systemd[1]: Started kubelet.service. Jul 2 08:16:46.196847 kubelet[2163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:16:46.197078 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:16:46.197122 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 08:16:46.197342 kubelet[2163]: I0702 08:16:46.197322 2163 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:16:46.201400 kubelet[2163]: I0702 08:16:46.201384 2163 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 08:16:46.201496 kubelet[2163]: I0702 08:16:46.201489 2163 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:16:46.201679 kubelet[2163]: I0702 08:16:46.201671 2163 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 08:16:46.202640 kubelet[2163]: I0702 08:16:46.202631 2163 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:16:46.203462 kubelet[2163]: I0702 08:16:46.203447 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:16:46.206836 kubelet[2163]: I0702 08:16:46.206822 2163 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:16:46.206957 kubelet[2163]: I0702 08:16:46.206937 2163 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:16:46.207056 kubelet[2163]: I0702 08:16:46.206959 2163 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:16:46.207126 kubelet[2163]: I0702 08:16:46.207065 2163 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:16:46.207126 
kubelet[2163]: I0702 08:16:46.207073 2163 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:16:46.207126 kubelet[2163]: I0702 08:16:46.207098 2163 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:16:46.207193 kubelet[2163]: I0702 08:16:46.207140 2163 kubelet.go:400] "Attempting to sync node with API server" Jul 2 08:16:46.207193 kubelet[2163]: I0702 08:16:46.207147 2163 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:16:46.207193 kubelet[2163]: I0702 08:16:46.207159 2163 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:16:46.207193 kubelet[2163]: I0702 08:16:46.207167 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:16:46.210552 kubelet[2163]: I0702 08:16:46.209962 2163 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 08:16:46.210552 kubelet[2163]: I0702 08:16:46.210053 2163 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:16:46.210552 kubelet[2163]: I0702 08:16:46.210276 2163 server.go:1264] "Started kubelet" Jul 2 08:16:46.213030 kubelet[2163]: I0702 08:16:46.213021 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:16:46.216442 kubelet[2163]: I0702 08:16:46.216424 2163 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:16:46.221791 kubelet[2163]: I0702 08:16:46.221775 2163 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:16:46.224346 kubelet[2163]: I0702 08:16:46.224335 2163 server.go:455] "Adding debug handlers to kubelet server" Jul 2 08:16:46.226326 kubelet[2163]: I0702 08:16:46.226294 2163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:16:46.226478 kubelet[2163]: I0702 08:16:46.226471 2163 server.go:227] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:16:46.231153 kubelet[2163]: I0702 08:16:46.231135 2163 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:16:46.231242 kubelet[2163]: I0702 08:16:46.231232 2163 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:16:46.232115 kubelet[2163]: I0702 08:16:46.231625 2163 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:16:46.232115 kubelet[2163]: I0702 08:16:46.231682 2163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:16:46.232917 kubelet[2163]: E0702 08:16:46.232905 2163 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:16:46.233586 kubelet[2163]: I0702 08:16:46.233102 2163 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:16:46.237530 kubelet[2163]: I0702 08:16:46.237499 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:16:46.239199 kubelet[2163]: I0702 08:16:46.239188 2163 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:16:46.239289 kubelet[2163]: I0702 08:16:46.239281 2163 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:16:46.239399 kubelet[2163]: I0702 08:16:46.239392 2163 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:16:46.239492 kubelet[2163]: E0702 08:16:46.239479 2163 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:16:46.258846 sudo[2191]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 08:16:46.259054 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 08:16:46.286950 kubelet[2163]: I0702 08:16:46.286936 2163 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:16:46.287086 kubelet[2163]: I0702 08:16:46.287078 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:16:46.287140 kubelet[2163]: I0702 08:16:46.287133 2163 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:16:46.287317 kubelet[2163]: I0702 08:16:46.287309 2163 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:16:46.287382 kubelet[2163]: I0702 08:16:46.287366 2163 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:16:46.287460 kubelet[2163]: I0702 08:16:46.287454 2163 policy_none.go:49] "None policy: Start" Jul 2 08:16:46.287887 kubelet[2163]: I0702 08:16:46.287879 2163 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:16:46.287991 kubelet[2163]: I0702 08:16:46.287956 2163 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:16:46.288124 kubelet[2163]: I0702 08:16:46.288118 2163 state_mem.go:75] "Updated machine memory state" Jul 2 08:16:46.290577 kubelet[2163]: I0702 08:16:46.290568 2163 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 
08:16:46.290937 kubelet[2163]: I0702 08:16:46.290886 2163 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:16:46.295174 kubelet[2163]: I0702 08:16:46.295156 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:16:46.331001 kubelet[2163]: I0702 08:16:46.330975 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:16:46.334985 kubelet[2163]: I0702 08:16:46.334964 2163 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 08:16:46.335089 kubelet[2163]: I0702 08:16:46.335035 2163 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 08:16:46.340081 kubelet[2163]: I0702 08:16:46.339965 2163 topology_manager.go:215] "Topology Admit Handler" podUID="95a1f460b21cfe16d84d8091532e629f" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:16:46.340457 kubelet[2163]: I0702 08:16:46.340119 2163 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:16:46.342168 kubelet[2163]: I0702 08:16:46.342153 2163 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:16:46.433964 kubelet[2163]: I0702 08:16:46.433863 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:16:46.433964 kubelet[2163]: I0702 08:16:46.433907 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 08:16:46.433964 kubelet[2163]: I0702 08:16:46.433921 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95a1f460b21cfe16d84d8091532e629f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"95a1f460b21cfe16d84d8091532e629f\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:16:46.434257 kubelet[2163]: I0702 08:16:46.433936 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:16:46.434367 kubelet[2163]: I0702 08:16:46.434337 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:16:46.434367 kubelet[2163]: I0702 08:16:46.434351 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:16:46.434367 kubelet[2163]: I0702 08:16:46.434362 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:16:46.434449 kubelet[2163]: I0702 08:16:46.434371 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95a1f460b21cfe16d84d8091532e629f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"95a1f460b21cfe16d84d8091532e629f\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:16:46.434449 kubelet[2163]: I0702 08:16:46.434381 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95a1f460b21cfe16d84d8091532e629f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"95a1f460b21cfe16d84d8091532e629f\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:16:46.714292 sudo[2191]: pam_unix(sudo:session): session closed for user root Jul 2 08:16:47.213968 kubelet[2163]: I0702 08:16:47.213942 2163 apiserver.go:52] "Watching apiserver" Jul 2 08:16:47.232245 kubelet[2163]: I0702 08:16:47.232200 2163 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:16:47.277386 kubelet[2163]: E0702 08:16:47.277351 2163 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 08:16:47.282228 kubelet[2163]: I0702 08:16:47.282189 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.282177714 podStartE2EDuration="1.282177714s" podCreationTimestamp="2024-07-02 08:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 
08:16:47.281968938 +0000 UTC m=+1.127376893" watchObservedRunningTime="2024-07-02 08:16:47.282177714 +0000 UTC m=+1.127585672" Jul 2 08:16:47.294473 kubelet[2163]: I0702 08:16:47.294438 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.294423241 podStartE2EDuration="1.294423241s" podCreationTimestamp="2024-07-02 08:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:16:47.294066085 +0000 UTC m=+1.139474040" watchObservedRunningTime="2024-07-02 08:16:47.294423241 +0000 UTC m=+1.139831189" Jul 2 08:16:47.345320 kubelet[2163]: I0702 08:16:47.345288 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.345277196 podStartE2EDuration="1.345277196s" podCreationTimestamp="2024-07-02 08:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:16:47.320205343 +0000 UTC m=+1.165613298" watchObservedRunningTime="2024-07-02 08:16:47.345277196 +0000 UTC m=+1.190685150" Jul 2 08:16:48.326045 sudo[1451]: pam_unix(sudo:session): session closed for user root Jul 2 08:16:48.327918 sshd[1448]: pam_unix(sshd:session): session closed for user core Jul 2 08:16:48.329731 systemd[1]: sshd@4-139.178.70.99:22-139.178.68.195:55146.service: Deactivated successfully. Jul 2 08:16:48.330359 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:16:48.330489 systemd[1]: session-7.scope: Consumed 3.121s CPU time. Jul 2 08:16:48.331277 systemd-logind[1241]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:16:48.331985 systemd-logind[1241]: Removed session 7. 
Jul 2 08:16:59.390972 kubelet[2163]: I0702 08:16:59.390935 2163 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:16:59.391285 env[1254]: time="2024-07-02T08:16:59.391179771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 08:16:59.391436 kubelet[2163]: I0702 08:16:59.391293 2163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:16:59.669612 kubelet[2163]: I0702 08:16:59.669535 2163 topology_manager.go:215] "Topology Admit Handler" podUID="ad39400b-0703-4273-8622-7c767315e41c" podNamespace="kube-system" podName="kube-proxy-prx2h" Jul 2 08:16:59.669809 kubelet[2163]: I0702 08:16:59.669794 2163 topology_manager.go:215] "Topology Admit Handler" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" podNamespace="kube-system" podName="cilium-x78rm" Jul 2 08:16:59.677104 systemd[1]: Created slice kubepods-besteffort-podad39400b_0703_4273_8622_7c767315e41c.slice. Jul 2 08:16:59.684687 systemd[1]: Created slice kubepods-burstable-podf1336714_ffb8_4bc0_8af6_2c89f00a6e70.slice. 
Jul 2 08:16:59.718999 kubelet[2163]: I0702 08:16:59.718968 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-run\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.718999 kubelet[2163]: I0702 08:16:59.719000 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-etc-cni-netd\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719147 kubelet[2163]: I0702 08:16:59.719014 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hubble-tls\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719147 kubelet[2163]: I0702 08:16:59.719028 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-bpf-maps\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719147 kubelet[2163]: I0702 08:16:59.719039 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-cgroup\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719147 kubelet[2163]: I0702 08:16:59.719056 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cni-path\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719147 kubelet[2163]: I0702 08:16:59.719072 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad39400b-0703-4273-8622-7c767315e41c-lib-modules\") pod \"kube-proxy-prx2h\" (UID: \"ad39400b-0703-4273-8622-7c767315e41c\") " pod="kube-system/kube-proxy-prx2h" Jul 2 08:16:59.719147 kubelet[2163]: I0702 08:16:59.719086 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hostproc\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719268 kubelet[2163]: I0702 08:16:59.719099 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-clustermesh-secrets\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719268 kubelet[2163]: I0702 08:16:59.719137 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-net\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719268 kubelet[2163]: I0702 08:16:59.719151 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-config-path\") pod 
\"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719268 kubelet[2163]: I0702 08:16:59.719161 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-kernel\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719268 kubelet[2163]: I0702 08:16:59.719171 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad39400b-0703-4273-8622-7c767315e41c-kube-proxy\") pod \"kube-proxy-prx2h\" (UID: \"ad39400b-0703-4273-8622-7c767315e41c\") " pod="kube-system/kube-proxy-prx2h" Jul 2 08:16:59.719361 kubelet[2163]: I0702 08:16:59.719180 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78bpw\" (UniqueName: \"kubernetes.io/projected/ad39400b-0703-4273-8622-7c767315e41c-kube-api-access-78bpw\") pod \"kube-proxy-prx2h\" (UID: \"ad39400b-0703-4273-8622-7c767315e41c\") " pod="kube-system/kube-proxy-prx2h" Jul 2 08:16:59.719361 kubelet[2163]: I0702 08:16:59.719189 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm9n5\" (UniqueName: \"kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-kube-api-access-nm9n5\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719361 kubelet[2163]: I0702 08:16:59.719199 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-xtables-lock\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " 
pod="kube-system/cilium-x78rm" Jul 2 08:16:59.719361 kubelet[2163]: I0702 08:16:59.719208 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad39400b-0703-4273-8622-7c767315e41c-xtables-lock\") pod \"kube-proxy-prx2h\" (UID: \"ad39400b-0703-4273-8622-7c767315e41c\") " pod="kube-system/kube-proxy-prx2h" Jul 2 08:16:59.719361 kubelet[2163]: I0702 08:16:59.719219 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-lib-modules\") pod \"cilium-x78rm\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") " pod="kube-system/cilium-x78rm" Jul 2 08:16:59.860055 kubelet[2163]: E0702 08:16:59.860013 2163 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 08:16:59.860224 kubelet[2163]: E0702 08:16:59.860213 2163 projected.go:200] Error preparing data for projected volume kube-api-access-78bpw for pod kube-system/kube-proxy-prx2h: configmap "kube-root-ca.crt" not found Jul 2 08:16:59.860391 kubelet[2163]: E0702 08:16:59.860014 2163 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 08:16:59.860457 kubelet[2163]: E0702 08:16:59.860448 2163 projected.go:200] Error preparing data for projected volume kube-api-access-nm9n5 for pod kube-system/cilium-x78rm: configmap "kube-root-ca.crt" not found Jul 2 08:16:59.860550 kubelet[2163]: E0702 08:16:59.860541 2163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad39400b-0703-4273-8622-7c767315e41c-kube-api-access-78bpw podName:ad39400b-0703-4273-8622-7c767315e41c nodeName:}" failed. No retries permitted until 2024-07-02 08:17:00.360366331 +0000 UTC m=+14.205774283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-78bpw" (UniqueName: "kubernetes.io/projected/ad39400b-0703-4273-8622-7c767315e41c-kube-api-access-78bpw") pod "kube-proxy-prx2h" (UID: "ad39400b-0703-4273-8622-7c767315e41c") : configmap "kube-root-ca.crt" not found Jul 2 08:16:59.860834 kubelet[2163]: E0702 08:16:59.860825 2163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-kube-api-access-nm9n5 podName:f1336714-ffb8-4bc0-8af6-2c89f00a6e70 nodeName:}" failed. No retries permitted until 2024-07-02 08:17:00.360814983 +0000 UTC m=+14.206222935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nm9n5" (UniqueName: "kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-kube-api-access-nm9n5") pod "cilium-x78rm" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70") : configmap "kube-root-ca.crt" not found Jul 2 08:17:00.262695 kubelet[2163]: I0702 08:17:00.262663 2163 topology_manager.go:215] "Topology Admit Handler" podUID="cc77f06c-d063-4cc9-a222-d7a12166038e" podNamespace="kube-system" podName="cilium-operator-599987898-dx759" Jul 2 08:17:00.270258 systemd[1]: Created slice kubepods-besteffort-podcc77f06c_d063_4cc9_a222_d7a12166038e.slice. 
Jul 2 08:17:00.324095 kubelet[2163]: I0702 08:17:00.324072 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm794\" (UniqueName: \"kubernetes.io/projected/cc77f06c-d063-4cc9-a222-d7a12166038e-kube-api-access-dm794\") pod \"cilium-operator-599987898-dx759\" (UID: \"cc77f06c-d063-4cc9-a222-d7a12166038e\") " pod="kube-system/cilium-operator-599987898-dx759" Jul 2 08:17:00.324283 kubelet[2163]: I0702 08:17:00.324269 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc77f06c-d063-4cc9-a222-d7a12166038e-cilium-config-path\") pod \"cilium-operator-599987898-dx759\" (UID: \"cc77f06c-d063-4cc9-a222-d7a12166038e\") " pod="kube-system/cilium-operator-599987898-dx759" Jul 2 08:17:00.573240 env[1254]: time="2024-07-02T08:17:00.572824984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dx759,Uid:cc77f06c-d063-4cc9-a222-d7a12166038e,Namespace:kube-system,Attempt:0,}" Jul 2 08:17:00.583228 env[1254]: time="2024-07-02T08:17:00.583199828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prx2h,Uid:ad39400b-0703-4273-8622-7c767315e41c,Namespace:kube-system,Attempt:0,}" Jul 2 08:17:00.585712 env[1254]: time="2024-07-02T08:17:00.585650482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:17:00.585877 env[1254]: time="2024-07-02T08:17:00.585715011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:17:00.585877 env[1254]: time="2024-07-02T08:17:00.585732761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:17:00.585877 env[1254]: time="2024-07-02T08:17:00.585853945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec pid=2244 runtime=io.containerd.runc.v2 Jul 2 08:17:00.587590 env[1254]: time="2024-07-02T08:17:00.587543131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x78rm,Uid:f1336714-ffb8-4bc0-8af6-2c89f00a6e70,Namespace:kube-system,Attempt:0,}" Jul 2 08:17:00.604141 env[1254]: time="2024-07-02T08:17:00.604081002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:17:00.604141 env[1254]: time="2024-07-02T08:17:00.604107385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:17:00.604141 env[1254]: time="2024-07-02T08:17:00.604118357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:17:00.611499 env[1254]: time="2024-07-02T08:17:00.611457026Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f688070c16ea820a2e7b69fec60ef244dd81c0c2b7f8cb16082b443efb162ab8 pid=2267 runtime=io.containerd.runc.v2 Jul 2 08:17:00.615249 env[1254]: time="2024-07-02T08:17:00.615187716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:17:00.615550 env[1254]: time="2024-07-02T08:17:00.615525162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:17:00.615585 env[1254]: time="2024-07-02T08:17:00.615564119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:17:00.615682 env[1254]: time="2024-07-02T08:17:00.615662576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2 pid=2268 runtime=io.containerd.runc.v2 Jul 2 08:17:00.623067 systemd[1]: Started cri-containerd-49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec.scope. Jul 2 08:17:00.640300 systemd[1]: Started cri-containerd-4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2.scope. Jul 2 08:17:00.641136 systemd[1]: Started cri-containerd-f688070c16ea820a2e7b69fec60ef244dd81c0c2b7f8cb16082b443efb162ab8.scope. Jul 2 08:17:00.657427 env[1254]: time="2024-07-02T08:17:00.657395451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prx2h,Uid:ad39400b-0703-4273-8622-7c767315e41c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f688070c16ea820a2e7b69fec60ef244dd81c0c2b7f8cb16082b443efb162ab8\"" Jul 2 08:17:00.659513 env[1254]: time="2024-07-02T08:17:00.659494359Z" level=info msg="CreateContainer within sandbox \"f688070c16ea820a2e7b69fec60ef244dd81c0c2b7f8cb16082b443efb162ab8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:17:00.682820 env[1254]: time="2024-07-02T08:17:00.682791586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x78rm,Uid:f1336714-ffb8-4bc0-8af6-2c89f00a6e70,Namespace:kube-system,Attempt:0,} returns sandbox id \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\"" Jul 2 08:17:00.695573 env[1254]: time="2024-07-02T08:17:00.695544702Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-599987898-dx759,Uid:cc77f06c-d063-4cc9-a222-d7a12166038e,Namespace:kube-system,Attempt:0,} returns sandbox id \"49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec\"" Jul 2 08:17:00.699307 env[1254]: time="2024-07-02T08:17:00.699280051Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:17:00.865954 env[1254]: time="2024-07-02T08:17:00.865316739Z" level=info msg="CreateContainer within sandbox \"f688070c16ea820a2e7b69fec60ef244dd81c0c2b7f8cb16082b443efb162ab8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c2bb10471853dbaf1f74fe41adf9296ad7a72b8af2f981acdef40d7b9cb9f3e\"" Jul 2 08:17:00.867270 env[1254]: time="2024-07-02T08:17:00.867239227Z" level=info msg="StartContainer for \"1c2bb10471853dbaf1f74fe41adf9296ad7a72b8af2f981acdef40d7b9cb9f3e\"" Jul 2 08:17:00.882011 systemd[1]: Started cri-containerd-1c2bb10471853dbaf1f74fe41adf9296ad7a72b8af2f981acdef40d7b9cb9f3e.scope. Jul 2 08:17:00.915424 env[1254]: time="2024-07-02T08:17:00.915387110Z" level=info msg="StartContainer for \"1c2bb10471853dbaf1f74fe41adf9296ad7a72b8af2f981acdef40d7b9cb9f3e\" returns successfully" Jul 2 08:17:05.608568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648373248.mount: Deactivated successfully. 
Jul 2 08:17:06.347459 kubelet[2163]: I0702 08:17:06.347428 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prx2h" podStartSLOduration=7.347414528 podStartE2EDuration="7.347414528s" podCreationTimestamp="2024-07-02 08:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:17:01.286753674 +0000 UTC m=+15.132161635" watchObservedRunningTime="2024-07-02 08:17:06.347414528 +0000 UTC m=+20.192822478" Jul 2 08:17:08.094500 env[1254]: time="2024-07-02T08:17:08.094468813Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:17:08.096490 env[1254]: time="2024-07-02T08:17:08.096473055Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:17:08.098012 env[1254]: time="2024-07-02T08:17:08.097995395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:17:08.098647 env[1254]: time="2024-07-02T08:17:08.098623783Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 08:17:08.100742 env[1254]: time="2024-07-02T08:17:08.100710539Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:17:08.102067 env[1254]: 
time="2024-07-02T08:17:08.102040452Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:17:08.113816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320533620.mount: Deactivated successfully. Jul 2 08:17:08.118507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4027144206.mount: Deactivated successfully. Jul 2 08:17:08.134974 env[1254]: time="2024-07-02T08:17:08.134947994Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\"" Jul 2 08:17:08.135613 env[1254]: time="2024-07-02T08:17:08.135246991Z" level=info msg="StartContainer for \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\"" Jul 2 08:17:08.151308 systemd[1]: Started cri-containerd-e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2.scope. Jul 2 08:17:08.197745 systemd[1]: cri-containerd-e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2.scope: Deactivated successfully. 
Jul 2 08:17:08.204118 env[1254]: time="2024-07-02T08:17:08.199141554Z" level=info msg="StartContainer for \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\" returns successfully" Jul 2 08:17:08.509408 env[1254]: time="2024-07-02T08:17:08.509332909Z" level=info msg="shim disconnected" id=e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2 Jul 2 08:17:08.509408 env[1254]: time="2024-07-02T08:17:08.509365626Z" level=warning msg="cleaning up after shim disconnected" id=e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2 namespace=k8s.io Jul 2 08:17:08.509408 env[1254]: time="2024-07-02T08:17:08.509374114Z" level=info msg="cleaning up dead shim" Jul 2 08:17:08.516103 env[1254]: time="2024-07-02T08:17:08.516072596Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:17:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2555 runtime=io.containerd.runc.v2\n" Jul 2 08:17:09.110879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2-rootfs.mount: Deactivated successfully. Jul 2 08:17:09.343476 env[1254]: time="2024-07-02T08:17:09.343447560Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:17:09.477725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076099678.mount: Deactivated successfully. Jul 2 08:17:09.479939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801450538.mount: Deactivated successfully. 
Jul 2 08:17:09.482543 env[1254]: time="2024-07-02T08:17:09.482519329Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\"" Jul 2 08:17:09.483133 env[1254]: time="2024-07-02T08:17:09.483059765Z" level=info msg="StartContainer for \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\"" Jul 2 08:17:09.508599 systemd[1]: Started cri-containerd-7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f.scope. Jul 2 08:17:09.535845 env[1254]: time="2024-07-02T08:17:09.535820225Z" level=info msg="StartContainer for \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\" returns successfully" Jul 2 08:17:09.543196 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:17:09.543353 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:17:09.543529 systemd[1]: Stopping systemd-sysctl.service... Jul 2 08:17:09.544657 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:17:09.548345 systemd[1]: cri-containerd-7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f.scope: Deactivated successfully. Jul 2 08:17:09.552020 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 08:17:09.613018 env[1254]: time="2024-07-02T08:17:09.612991525Z" level=info msg="shim disconnected" id=7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f Jul 2 08:17:09.613164 env[1254]: time="2024-07-02T08:17:09.613152573Z" level=warning msg="cleaning up after shim disconnected" id=7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f namespace=k8s.io Jul 2 08:17:09.613456 env[1254]: time="2024-07-02T08:17:09.613251193Z" level=info msg="cleaning up dead shim" Jul 2 08:17:09.618059 env[1254]: time="2024-07-02T08:17:09.618042898Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:17:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2616 runtime=io.containerd.runc.v2\n" Jul 2 08:17:09.932639 env[1254]: time="2024-07-02T08:17:09.932603744Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:17:09.933686 env[1254]: time="2024-07-02T08:17:09.933659244Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:17:09.934176 env[1254]: time="2024-07-02T08:17:09.934160492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:17:09.934870 env[1254]: time="2024-07-02T08:17:09.934849415Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:17:09.936825 env[1254]: 
time="2024-07-02T08:17:09.936809284Z" level=info msg="CreateContainer within sandbox \"49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:17:09.943875 env[1254]: time="2024-07-02T08:17:09.943856025Z" level=info msg="CreateContainer within sandbox \"49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\"" Jul 2 08:17:09.951292 env[1254]: time="2024-07-02T08:17:09.946074470Z" level=info msg="StartContainer for \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\"" Jul 2 08:17:09.956869 systemd[1]: Started cri-containerd-c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f.scope. Jul 2 08:17:09.983725 env[1254]: time="2024-07-02T08:17:09.983700291Z" level=info msg="StartContainer for \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\" returns successfully" Jul 2 08:17:10.344665 env[1254]: time="2024-07-02T08:17:10.344634133Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:17:10.355926 env[1254]: time="2024-07-02T08:17:10.355886307Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\"" Jul 2 08:17:10.356244 env[1254]: time="2024-07-02T08:17:10.356227888Z" level=info msg="StartContainer for \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\"" Jul 2 08:17:10.376318 systemd[1]: Started cri-containerd-5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1.scope. 
Jul 2 08:17:10.418464 env[1254]: time="2024-07-02T08:17:10.418429039Z" level=info msg="StartContainer for \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\" returns successfully" Jul 2 08:17:10.459400 systemd[1]: cri-containerd-5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1.scope: Deactivated successfully. Jul 2 08:17:10.471027 env[1254]: time="2024-07-02T08:17:10.470988155Z" level=info msg="shim disconnected" id=5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1 Jul 2 08:17:10.471027 env[1254]: time="2024-07-02T08:17:10.471025359Z" level=warning msg="cleaning up after shim disconnected" id=5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1 namespace=k8s.io Jul 2 08:17:10.471156 env[1254]: time="2024-07-02T08:17:10.471031333Z" level=info msg="cleaning up dead shim" Jul 2 08:17:10.475748 env[1254]: time="2024-07-02T08:17:10.475723854Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:17:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2712 runtime=io.containerd.runc.v2\n" Jul 2 08:17:11.110805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1-rootfs.mount: Deactivated successfully. Jul 2 08:17:11.348373 env[1254]: time="2024-07-02T08:17:11.348340756Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:17:11.355483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343399425.mount: Deactivated successfully. Jul 2 08:17:11.359235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771211618.mount: Deactivated successfully. 
Jul 2 08:17:11.361076 env[1254]: time="2024-07-02T08:17:11.361020557Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\"" Jul 2 08:17:11.361662 env[1254]: time="2024-07-02T08:17:11.361642877Z" level=info msg="StartContainer for \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\"" Jul 2 08:17:11.369189 kubelet[2163]: I0702 08:17:11.369156 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-dx759" podStartSLOduration=2.130029916 podStartE2EDuration="11.369134674s" podCreationTimestamp="2024-07-02 08:17:00 +0000 UTC" firstStartedPulling="2024-07-02 08:17:00.696139014 +0000 UTC m=+14.541546962" lastFinishedPulling="2024-07-02 08:17:09.935243765 +0000 UTC m=+23.780651720" observedRunningTime="2024-07-02 08:17:10.420242375 +0000 UTC m=+24.265650330" watchObservedRunningTime="2024-07-02 08:17:11.369134674 +0000 UTC m=+25.214542629" Jul 2 08:17:11.375000 systemd[1]: Started cri-containerd-7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38.scope. Jul 2 08:17:11.393964 env[1254]: time="2024-07-02T08:17:11.393941101Z" level=info msg="StartContainer for \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\" returns successfully" Jul 2 08:17:11.398517 systemd[1]: cri-containerd-7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38.scope: Deactivated successfully. 
Jul 2 08:17:11.412737 env[1254]: time="2024-07-02T08:17:11.412706789Z" level=info msg="shim disconnected" id=7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38 Jul 2 08:17:11.412737 env[1254]: time="2024-07-02T08:17:11.412735333Z" level=warning msg="cleaning up after shim disconnected" id=7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38 namespace=k8s.io Jul 2 08:17:11.414702 env[1254]: time="2024-07-02T08:17:11.412741723Z" level=info msg="cleaning up dead shim" Jul 2 08:17:11.417923 env[1254]: time="2024-07-02T08:17:11.417887791Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:17:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2767 runtime=io.containerd.runc.v2\n" Jul 2 08:17:12.350748 env[1254]: time="2024-07-02T08:17:12.350726114Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:17:12.359486 env[1254]: time="2024-07-02T08:17:12.359459678Z" level=info msg="CreateContainer within sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\"" Jul 2 08:17:12.359913 env[1254]: time="2024-07-02T08:17:12.359891099Z" level=info msg="StartContainer for \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\"" Jul 2 08:17:12.375296 systemd[1]: Started cri-containerd-8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16.scope. 
Jul 2 08:17:12.397643 env[1254]: time="2024-07-02T08:17:12.397617494Z" level=info msg="StartContainer for \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\" returns successfully" Jul 2 08:17:12.542170 kubelet[2163]: I0702 08:17:12.542149 2163 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:17:12.564670 kubelet[2163]: I0702 08:17:12.564642 2163 topology_manager.go:215] "Topology Admit Handler" podUID="8c3506b8-8da5-49bd-b630-fcc8ce56f947" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pmxgz" Jul 2 08:17:12.568488 kubelet[2163]: I0702 08:17:12.568313 2163 topology_manager.go:215] "Topology Admit Handler" podUID="11b955a5-83cd-47ac-a802-15a238db9ca0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-92qvf" Jul 2 08:17:12.575840 systemd[1]: Created slice kubepods-burstable-pod8c3506b8_8da5_49bd_b630_fcc8ce56f947.slice. Jul 2 08:17:12.579588 systemd[1]: Created slice kubepods-burstable-pod11b955a5_83cd_47ac_a802_15a238db9ca0.slice. 
Jul 2 08:17:12.605359 kubelet[2163]: I0702 08:17:12.605294 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfms9\" (UniqueName: \"kubernetes.io/projected/11b955a5-83cd-47ac-a802-15a238db9ca0-kube-api-access-qfms9\") pod \"coredns-7db6d8ff4d-92qvf\" (UID: \"11b955a5-83cd-47ac-a802-15a238db9ca0\") " pod="kube-system/coredns-7db6d8ff4d-92qvf" Jul 2 08:17:12.605480 kubelet[2163]: I0702 08:17:12.605471 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11b955a5-83cd-47ac-a802-15a238db9ca0-config-volume\") pod \"coredns-7db6d8ff4d-92qvf\" (UID: \"11b955a5-83cd-47ac-a802-15a238db9ca0\") " pod="kube-system/coredns-7db6d8ff4d-92qvf" Jul 2 08:17:12.605569 kubelet[2163]: I0702 08:17:12.605558 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c3506b8-8da5-49bd-b630-fcc8ce56f947-config-volume\") pod \"coredns-7db6d8ff4d-pmxgz\" (UID: \"8c3506b8-8da5-49bd-b630-fcc8ce56f947\") " pod="kube-system/coredns-7db6d8ff4d-pmxgz" Jul 2 08:17:12.605642 kubelet[2163]: I0702 08:17:12.605633 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvn56\" (UniqueName: \"kubernetes.io/projected/8c3506b8-8da5-49bd-b630-fcc8ce56f947-kube-api-access-hvn56\") pod \"coredns-7db6d8ff4d-pmxgz\" (UID: \"8c3506b8-8da5-49bd-b630-fcc8ce56f947\") " pod="kube-system/coredns-7db6d8ff4d-pmxgz" Jul 2 08:17:12.719908 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 08:17:12.878941 env[1254]: time="2024-07-02T08:17:12.878866074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmxgz,Uid:8c3506b8-8da5-49bd-b630-fcc8ce56f947,Namespace:kube-system,Attempt:0,}" Jul 2 08:17:12.881706 env[1254]: time="2024-07-02T08:17:12.881687967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-92qvf,Uid:11b955a5-83cd-47ac-a802-15a238db9ca0,Namespace:kube-system,Attempt:0,}" Jul 2 08:17:12.964912 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 2 08:17:14.576035 systemd-networkd[1061]: cilium_host: Link UP Jul 2 08:17:14.576597 systemd-networkd[1061]: cilium_net: Link UP Jul 2 08:17:14.578620 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 08:17:14.578663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 08:17:14.578725 systemd-networkd[1061]: cilium_net: Gained carrier Jul 2 08:17:14.578832 systemd-networkd[1061]: cilium_host: Gained carrier Jul 2 08:17:14.683599 systemd-networkd[1061]: cilium_vxlan: Link UP Jul 2 08:17:14.683603 systemd-networkd[1061]: cilium_vxlan: Gained carrier Jul 2 08:17:14.703020 systemd-networkd[1061]: cilium_net: Gained IPv6LL Jul 2 08:17:15.127044 systemd-networkd[1061]: cilium_host: Gained IPv6LL Jul 2 08:17:15.211912 kernel: NET: Registered PF_ALG protocol family Jul 2 08:17:15.675530 systemd-networkd[1061]: lxc_health: Link UP Jul 2 08:17:15.689048 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:17:15.687989 systemd-networkd[1061]: lxc_health: Gained carrier Jul 2 08:17:15.923005 systemd-networkd[1061]: lxc0fd46123f189: Link UP Jul 2 08:17:15.926912 kernel: eth0: renamed from tmp441ff Jul 2 08:17:15.933968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0fd46123f189: link becomes ready Jul 2 08:17:15.933956 systemd-networkd[1061]: lxc0fd46123f189: Gained carrier Jul 2 08:17:15.934412 
systemd-networkd[1061]: lxcb1bed41432bc: Link UP Jul 2 08:17:15.939976 kernel: eth0: renamed from tmp42326 Jul 2 08:17:15.949537 systemd-networkd[1061]: lxcb1bed41432bc: Gained carrier Jul 2 08:17:15.949911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb1bed41432bc: link becomes ready Jul 2 08:17:16.535032 systemd-networkd[1061]: cilium_vxlan: Gained IPv6LL Jul 2 08:17:16.604035 kubelet[2163]: I0702 08:17:16.604001 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x78rm" podStartSLOduration=10.187364428 podStartE2EDuration="17.603984468s" podCreationTimestamp="2024-07-02 08:16:59 +0000 UTC" firstStartedPulling="2024-07-02 08:17:00.683475002 +0000 UTC m=+14.528882949" lastFinishedPulling="2024-07-02 08:17:08.100095032 +0000 UTC m=+21.945502989" observedRunningTime="2024-07-02 08:17:13.360611474 +0000 UTC m=+27.206019428" watchObservedRunningTime="2024-07-02 08:17:16.603984468 +0000 UTC m=+30.449392420" Jul 2 08:17:17.110981 systemd-networkd[1061]: lxc0fd46123f189: Gained IPv6LL Jul 2 08:17:17.687019 systemd-networkd[1061]: lxc_health: Gained IPv6LL Jul 2 08:17:17.879016 systemd-networkd[1061]: lxcb1bed41432bc: Gained IPv6LL Jul 2 08:17:18.569598 env[1254]: time="2024-07-02T08:17:18.569560867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:17:18.569875 env[1254]: time="2024-07-02T08:17:18.569859701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:17:18.569946 env[1254]: time="2024-07-02T08:17:18.569932581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:17:18.570085 env[1254]: time="2024-07-02T08:17:18.570069053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/441ff92d73732beaf563d6557d9a3304fe496b24b84ab16a26f62798df80bfe8 pid=3326 runtime=io.containerd.runc.v2 Jul 2 08:17:18.577934 env[1254]: time="2024-07-02T08:17:18.577685821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:17:18.577934 env[1254]: time="2024-07-02T08:17:18.577709315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:17:18.577934 env[1254]: time="2024-07-02T08:17:18.577716327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:17:18.577934 env[1254]: time="2024-07-02T08:17:18.577784566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42326fcad55e51884277d7d5384c0dc74d38edabaf2d56ca4efa1789f8d2ba89 pid=3336 runtime=io.containerd.runc.v2 Jul 2 08:17:18.586906 systemd[1]: Started cri-containerd-42326fcad55e51884277d7d5384c0dc74d38edabaf2d56ca4efa1789f8d2ba89.scope. Jul 2 08:17:18.596056 systemd[1]: run-containerd-runc-k8s.io-441ff92d73732beaf563d6557d9a3304fe496b24b84ab16a26f62798df80bfe8-runc.hAMdRF.mount: Deactivated successfully. Jul 2 08:17:18.603677 systemd[1]: Started cri-containerd-441ff92d73732beaf563d6557d9a3304fe496b24b84ab16a26f62798df80bfe8.scope. 
Jul 2 08:17:18.620013 systemd-resolved[1206]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 08:17:18.637572 env[1254]: time="2024-07-02T08:17:18.637546643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-92qvf,Uid:11b955a5-83cd-47ac-a802-15a238db9ca0,Namespace:kube-system,Attempt:0,} returns sandbox id \"42326fcad55e51884277d7d5384c0dc74d38edabaf2d56ca4efa1789f8d2ba89\""
Jul 2 08:17:18.646396 systemd-resolved[1206]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 08:17:18.653024 env[1254]: time="2024-07-02T08:17:18.652944161Z" level=info msg="CreateContainer within sandbox \"42326fcad55e51884277d7d5384c0dc74d38edabaf2d56ca4efa1789f8d2ba89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:17:18.673299 env[1254]: time="2024-07-02T08:17:18.673268550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pmxgz,Uid:8c3506b8-8da5-49bd-b630-fcc8ce56f947,Namespace:kube-system,Attempt:0,} returns sandbox id \"441ff92d73732beaf563d6557d9a3304fe496b24b84ab16a26f62798df80bfe8\""
Jul 2 08:17:18.676268 env[1254]: time="2024-07-02T08:17:18.676251275Z" level=info msg="CreateContainer within sandbox \"441ff92d73732beaf563d6557d9a3304fe496b24b84ab16a26f62798df80bfe8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 08:17:18.696757 env[1254]: time="2024-07-02T08:17:18.696685783Z" level=info msg="CreateContainer within sandbox \"441ff92d73732beaf563d6557d9a3304fe496b24b84ab16a26f62798df80bfe8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dedab59691564766bb00bb780ff71dc9bf0cbed87188c3786e00057b28aab607\""
Jul 2 08:17:18.698185 env[1254]: time="2024-07-02T08:17:18.698160773Z" level=info msg="StartContainer for \"dedab59691564766bb00bb780ff71dc9bf0cbed87188c3786e00057b28aab607\""
Jul 2 08:17:18.699874 env[1254]: time="2024-07-02T08:17:18.699846620Z" level=info msg="CreateContainer within sandbox \"42326fcad55e51884277d7d5384c0dc74d38edabaf2d56ca4efa1789f8d2ba89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c51f80b213b29b4dd29b2cd3fd071c4700bd9e51221ba53b64e4bcf4996b9454\""
Jul 2 08:17:18.700249 env[1254]: time="2024-07-02T08:17:18.700231764Z" level=info msg="StartContainer for \"c51f80b213b29b4dd29b2cd3fd071c4700bd9e51221ba53b64e4bcf4996b9454\""
Jul 2 08:17:18.713115 systemd[1]: Started cri-containerd-c51f80b213b29b4dd29b2cd3fd071c4700bd9e51221ba53b64e4bcf4996b9454.scope.
Jul 2 08:17:18.720125 systemd[1]: Started cri-containerd-dedab59691564766bb00bb780ff71dc9bf0cbed87188c3786e00057b28aab607.scope.
Jul 2 08:17:18.749126 env[1254]: time="2024-07-02T08:17:18.749066472Z" level=info msg="StartContainer for \"dedab59691564766bb00bb780ff71dc9bf0cbed87188c3786e00057b28aab607\" returns successfully"
Jul 2 08:17:18.752389 env[1254]: time="2024-07-02T08:17:18.752355935Z" level=info msg="StartContainer for \"c51f80b213b29b4dd29b2cd3fd071c4700bd9e51221ba53b64e4bcf4996b9454\" returns successfully"
Jul 2 08:17:19.202187 kubelet[2163]: I0702 08:17:19.202149 2163 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 08:17:19.382299 kubelet[2163]: I0702 08:17:19.375890 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-92qvf" podStartSLOduration=19.375877233 podStartE2EDuration="19.375877233s" podCreationTimestamp="2024-07-02 08:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:17:19.366596404 +0000 UTC m=+33.212004347" watchObservedRunningTime="2024-07-02 08:17:19.375877233 +0000 UTC m=+33.221285182"
Jul 2 08:17:19.395819 kubelet[2163]: I0702 08:17:19.395780 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pmxgz" podStartSLOduration=19.395762971 podStartE2EDuration="19.395762971s" podCreationTimestamp="2024-07-02 08:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:17:19.388267598 +0000 UTC m=+33.233675548" watchObservedRunningTime="2024-07-02 08:17:19.395762971 +0000 UTC m=+33.241170928"
Jul 2 08:18:05.906025 systemd[1]: Started sshd@5-139.178.70.99:22-139.178.68.195:53220.service.
Jul 2 08:18:05.971557 sshd[3485]: Accepted publickey for core from 139.178.68.195 port 53220 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:05.973962 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:05.978333 systemd[1]: Started session-8.scope.
Jul 2 08:18:05.978584 systemd-logind[1241]: New session 8 of user core.
Jul 2 08:18:06.204447 sshd[3485]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:06.206761 systemd[1]: sshd@5-139.178.70.99:22-139.178.68.195:53220.service: Deactivated successfully.
Jul 2 08:18:06.207303 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 08:18:06.207586 systemd-logind[1241]: Session 8 logged out. Waiting for processes to exit.
Jul 2 08:18:06.209467 systemd-logind[1241]: Removed session 8.
Jul 2 08:18:11.208320 systemd[1]: Started sshd@6-139.178.70.99:22-139.178.68.195:53236.service.
Jul 2 08:18:11.236732 sshd[3497]: Accepted publickey for core from 139.178.68.195 port 53236 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:11.237901 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:11.241313 systemd[1]: Started session-9.scope.
Jul 2 08:18:11.241734 systemd-logind[1241]: New session 9 of user core.
Jul 2 08:18:11.461461 sshd[3497]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:11.463875 systemd[1]: sshd@6-139.178.70.99:22-139.178.68.195:53236.service: Deactivated successfully.
Jul 2 08:18:11.464380 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 08:18:11.465363 systemd-logind[1241]: Session 9 logged out. Waiting for processes to exit.
Jul 2 08:18:11.466142 systemd-logind[1241]: Removed session 9.
Jul 2 08:18:16.464955 systemd[1]: Started sshd@7-139.178.70.99:22-139.178.68.195:51868.service.
Jul 2 08:18:16.509359 sshd[3511]: Accepted publickey for core from 139.178.68.195 port 51868 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:16.511090 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:16.514841 systemd-logind[1241]: New session 10 of user core.
Jul 2 08:18:16.515569 systemd[1]: Started session-10.scope.
Jul 2 08:18:16.699732 sshd[3511]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:16.701177 systemd-logind[1241]: Session 10 logged out. Waiting for processes to exit.
Jul 2 08:18:16.701329 systemd[1]: sshd@7-139.178.70.99:22-139.178.68.195:51868.service: Deactivated successfully.
Jul 2 08:18:16.701756 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 08:18:16.702299 systemd-logind[1241]: Removed session 10.
Jul 2 08:18:21.703160 systemd[1]: Started sshd@8-139.178.70.99:22-139.178.68.195:51874.service.
Jul 2 08:18:21.732212 sshd[3525]: Accepted publickey for core from 139.178.68.195 port 51874 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:21.733292 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:21.737246 systemd[1]: Started session-11.scope.
Jul 2 08:18:21.737536 systemd-logind[1241]: New session 11 of user core.
Jul 2 08:18:21.881152 sshd[3525]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:21.883726 systemd[1]: Started sshd@9-139.178.70.99:22-139.178.68.195:51880.service.
Jul 2 08:18:21.886333 systemd[1]: sshd@8-139.178.70.99:22-139.178.68.195:51874.service: Deactivated successfully.
Jul 2 08:18:21.886775 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 08:18:21.887454 systemd-logind[1241]: Session 11 logged out. Waiting for processes to exit.
Jul 2 08:18:21.887998 systemd-logind[1241]: Removed session 11.
Jul 2 08:18:21.968729 sshd[3536]: Accepted publickey for core from 139.178.68.195 port 51880 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:21.970777 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:21.974569 systemd[1]: Started session-12.scope.
Jul 2 08:18:21.975454 systemd-logind[1241]: New session 12 of user core.
Jul 2 08:18:22.145476 sshd[3536]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:22.147144 systemd[1]: Started sshd@10-139.178.70.99:22-139.178.68.195:51896.service.
Jul 2 08:18:22.150992 systemd[1]: sshd@9-139.178.70.99:22-139.178.68.195:51880.service: Deactivated successfully.
Jul 2 08:18:22.151487 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 08:18:22.151957 systemd-logind[1241]: Session 12 logged out. Waiting for processes to exit.
Jul 2 08:18:22.153247 systemd-logind[1241]: Removed session 12.
Jul 2 08:18:22.182316 sshd[3546]: Accepted publickey for core from 139.178.68.195 port 51896 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:22.183185 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:22.186389 systemd[1]: Started session-13.scope.
Jul 2 08:18:22.186755 systemd-logind[1241]: New session 13 of user core.
Jul 2 08:18:22.282802 sshd[3546]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:22.284702 systemd-logind[1241]: Session 13 logged out. Waiting for processes to exit.
Jul 2 08:18:22.284856 systemd[1]: sshd@10-139.178.70.99:22-139.178.68.195:51896.service: Deactivated successfully.
Jul 2 08:18:22.285281 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 08:18:22.285805 systemd-logind[1241]: Removed session 13.
Jul 2 08:18:27.286805 systemd[1]: Started sshd@11-139.178.70.99:22-139.178.68.195:56880.service.
Jul 2 08:18:27.320127 sshd[3558]: Accepted publickey for core from 139.178.68.195 port 56880 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:27.321041 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:27.325451 systemd[1]: Started session-14.scope.
Jul 2 08:18:27.326090 systemd-logind[1241]: New session 14 of user core.
Jul 2 08:18:27.412270 sshd[3558]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:27.414051 systemd[1]: sshd@11-139.178.70.99:22-139.178.68.195:56880.service: Deactivated successfully.
Jul 2 08:18:27.414532 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 08:18:27.414905 systemd-logind[1241]: Session 14 logged out. Waiting for processes to exit.
Jul 2 08:18:27.415382 systemd-logind[1241]: Removed session 14.
Jul 2 08:18:32.416628 systemd[1]: Started sshd@12-139.178.70.99:22-139.178.68.195:56882.service.
Jul 2 08:18:32.448710 sshd[3571]: Accepted publickey for core from 139.178.68.195 port 56882 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:32.449634 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:32.453219 systemd[1]: Started session-15.scope.
Jul 2 08:18:32.453461 systemd-logind[1241]: New session 15 of user core.
Jul 2 08:18:32.541556 sshd[3571]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:32.543480 systemd[1]: Started sshd@13-139.178.70.99:22-139.178.68.195:52332.service.
Jul 2 08:18:32.549421 systemd[1]: sshd@12-139.178.70.99:22-139.178.68.195:56882.service: Deactivated successfully.
Jul 2 08:18:32.549889 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 08:18:32.550630 systemd-logind[1241]: Session 15 logged out. Waiting for processes to exit.
Jul 2 08:18:32.551140 systemd-logind[1241]: Removed session 15.
Jul 2 08:18:32.573359 sshd[3582]: Accepted publickey for core from 139.178.68.195 port 52332 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:32.574564 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:32.577432 systemd-logind[1241]: New session 16 of user core.
Jul 2 08:18:32.578201 systemd[1]: Started session-16.scope.
Jul 2 08:18:33.035802 sshd[3582]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:33.038074 systemd[1]: Started sshd@14-139.178.70.99:22-139.178.68.195:52342.service.
Jul 2 08:18:33.048242 systemd[1]: sshd@13-139.178.70.99:22-139.178.68.195:52332.service: Deactivated successfully.
Jul 2 08:18:33.048694 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 08:18:33.049274 systemd-logind[1241]: Session 16 logged out. Waiting for processes to exit.
Jul 2 08:18:33.049759 systemd-logind[1241]: Removed session 16.
Jul 2 08:18:33.079091 sshd[3592]: Accepted publickey for core from 139.178.68.195 port 52342 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:33.080086 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:33.082738 systemd-logind[1241]: New session 17 of user core.
Jul 2 08:18:33.083245 systemd[1]: Started session-17.scope.
Jul 2 08:18:34.309791 sshd[3592]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:34.312635 systemd[1]: sshd@14-139.178.70.99:22-139.178.68.195:52342.service: Deactivated successfully.
Jul 2 08:18:34.313006 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 08:18:34.313546 systemd-logind[1241]: Session 17 logged out. Waiting for processes to exit.
Jul 2 08:18:34.314372 systemd[1]: Started sshd@15-139.178.70.99:22-139.178.68.195:52352.service.
Jul 2 08:18:34.315355 systemd-logind[1241]: Removed session 17.
Jul 2 08:18:34.347222 sshd[3612]: Accepted publickey for core from 139.178.68.195 port 52352 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:34.348785 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:34.352109 systemd-logind[1241]: New session 18 of user core.
Jul 2 08:18:34.352654 systemd[1]: Started session-18.scope.
Jul 2 08:18:34.600002 sshd[3612]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:34.602731 systemd[1]: Started sshd@16-139.178.70.99:22-139.178.68.195:52368.service.
Jul 2 08:18:34.607329 systemd-logind[1241]: Session 18 logged out. Waiting for processes to exit.
Jul 2 08:18:34.607465 systemd[1]: sshd@15-139.178.70.99:22-139.178.68.195:52352.service: Deactivated successfully.
Jul 2 08:18:34.607849 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 08:18:34.608583 systemd-logind[1241]: Removed session 18.
Jul 2 08:18:34.641670 sshd[3621]: Accepted publickey for core from 139.178.68.195 port 52368 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:34.642508 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:34.645258 systemd-logind[1241]: New session 19 of user core.
Jul 2 08:18:34.645751 systemd[1]: Started session-19.scope.
Jul 2 08:18:34.775187 sshd[3621]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:34.776813 systemd[1]: sshd@16-139.178.70.99:22-139.178.68.195:52368.service: Deactivated successfully.
Jul 2 08:18:34.777279 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 08:18:34.777772 systemd-logind[1241]: Session 19 logged out. Waiting for processes to exit.
Jul 2 08:18:34.778313 systemd-logind[1241]: Removed session 19.
Jul 2 08:18:39.779553 systemd[1]: Started sshd@17-139.178.70.99:22-139.178.68.195:52372.service.
Jul 2 08:18:39.810122 sshd[3633]: Accepted publickey for core from 139.178.68.195 port 52372 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:39.811224 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:39.817441 systemd[1]: Started session-20.scope.
Jul 2 08:18:39.817939 systemd-logind[1241]: New session 20 of user core.
Jul 2 08:18:39.907670 sshd[3633]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:39.909290 systemd[1]: sshd@17-139.178.70.99:22-139.178.68.195:52372.service: Deactivated successfully.
Jul 2 08:18:39.909733 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 08:18:39.910184 systemd-logind[1241]: Session 20 logged out. Waiting for processes to exit.
Jul 2 08:18:39.910644 systemd-logind[1241]: Removed session 20.
Jul 2 08:18:44.911842 systemd[1]: Started sshd@18-139.178.70.99:22-139.178.68.195:37158.service.
Jul 2 08:18:44.941222 sshd[3648]: Accepted publickey for core from 139.178.68.195 port 37158 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:44.942091 sshd[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:44.945396 systemd[1]: Started session-21.scope.
Jul 2 08:18:44.945954 systemd-logind[1241]: New session 21 of user core.
Jul 2 08:18:45.036306 sshd[3648]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:45.037824 systemd[1]: sshd@18-139.178.70.99:22-139.178.68.195:37158.service: Deactivated successfully.
Jul 2 08:18:45.038282 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 08:18:45.038957 systemd-logind[1241]: Session 21 logged out. Waiting for processes to exit.
Jul 2 08:18:45.039582 systemd-logind[1241]: Removed session 21.
Jul 2 08:18:50.039479 systemd[1]: Started sshd@19-139.178.70.99:22-139.178.68.195:37164.service.
Jul 2 08:18:50.072757 sshd[3662]: Accepted publickey for core from 139.178.68.195 port 37164 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:50.073586 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:50.076165 systemd-logind[1241]: New session 22 of user core.
Jul 2 08:18:50.076694 systemd[1]: Started session-22.scope.
Jul 2 08:18:50.161405 sshd[3662]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:50.163034 systemd-logind[1241]: Session 22 logged out. Waiting for processes to exit.
Jul 2 08:18:50.163136 systemd[1]: sshd@19-139.178.70.99:22-139.178.68.195:37164.service: Deactivated successfully.
Jul 2 08:18:50.163552 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 08:18:50.164025 systemd-logind[1241]: Removed session 22.
Jul 2 08:18:55.166167 systemd[1]: Started sshd@20-139.178.70.99:22-139.178.68.195:43526.service.
Jul 2 08:18:55.195573 sshd[3674]: Accepted publickey for core from 139.178.68.195 port 43526 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:55.196964 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:55.199850 systemd-logind[1241]: New session 23 of user core.
Jul 2 08:18:55.200198 systemd[1]: Started session-23.scope.
Jul 2 08:18:55.302098 systemd[1]: Started sshd@21-139.178.70.99:22-139.178.68.195:43540.service.
Jul 2 08:18:55.302494 sshd[3674]: pam_unix(sshd:session): session closed for user core
Jul 2 08:18:55.306585 systemd-logind[1241]: Session 23 logged out. Waiting for processes to exit.
Jul 2 08:18:55.306864 systemd[1]: sshd@20-139.178.70.99:22-139.178.68.195:43526.service: Deactivated successfully.
Jul 2 08:18:55.307469 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 08:18:55.308349 systemd-logind[1241]: Removed session 23.
Jul 2 08:18:55.335270 sshd[3685]: Accepted publickey for core from 139.178.68.195 port 43540 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:18:55.336278 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:18:55.339606 systemd[1]: Started session-24.scope.
Jul 2 08:18:55.339982 systemd-logind[1241]: New session 24 of user core.
Jul 2 08:18:57.108523 env[1254]: time="2024-07-02T08:18:57.108479999Z" level=info msg="StopContainer for \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\" with timeout 30 (s)"
Jul 2 08:18:57.109084 env[1254]: time="2024-07-02T08:18:57.109061800Z" level=info msg="Stop container \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\" with signal terminated"
Jul 2 08:18:57.119151 systemd[1]: run-containerd-runc-k8s.io-8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16-runc.IH1Eia.mount: Deactivated successfully.
Jul 2 08:18:57.119796 systemd[1]: cri-containerd-c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f.scope: Deactivated successfully.
Jul 2 08:18:57.137379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f-rootfs.mount: Deactivated successfully.
Jul 2 08:18:57.144499 env[1254]: time="2024-07-02T08:18:57.144458671Z" level=info msg="shim disconnected" id=c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f
Jul 2 08:18:57.144499 env[1254]: time="2024-07-02T08:18:57.144493724Z" level=warning msg="cleaning up after shim disconnected" id=c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f namespace=k8s.io
Jul 2 08:18:57.144499 env[1254]: time="2024-07-02T08:18:57.144503801Z" level=info msg="cleaning up dead shim"
Jul 2 08:18:57.149614 env[1254]: time="2024-07-02T08:18:57.149567955Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 08:18:57.151154 env[1254]: time="2024-07-02T08:18:57.151132492Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:18:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3730 runtime=io.containerd.runc.v2\n"
Jul 2 08:18:57.156734 env[1254]: time="2024-07-02T08:18:57.156708916Z" level=info msg="StopContainer for \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\" returns successfully"
Jul 2 08:18:57.157259 env[1254]: time="2024-07-02T08:18:57.157240677Z" level=info msg="StopPodSandbox for \"49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec\""
Jul 2 08:18:57.157311 env[1254]: time="2024-07-02T08:18:57.157283894Z" level=info msg="Container to stop \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:18:57.159021 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec-shm.mount: Deactivated successfully.
Jul 2 08:18:57.165056 systemd[1]: cri-containerd-49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec.scope: Deactivated successfully.
Jul 2 08:18:57.170541 env[1254]: time="2024-07-02T08:18:57.167279979Z" level=info msg="StopContainer for \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\" with timeout 2 (s)"
Jul 2 08:18:57.170541 env[1254]: time="2024-07-02T08:18:57.167426333Z" level=info msg="Stop container \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\" with signal terminated"
Jul 2 08:18:57.181183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec-rootfs.mount: Deactivated successfully.
Jul 2 08:18:57.188297 systemd-networkd[1061]: lxc_health: Link DOWN
Jul 2 08:18:57.188301 systemd-networkd[1061]: lxc_health: Lost carrier
Jul 2 08:18:57.219637 env[1254]: time="2024-07-02T08:18:57.219584165Z" level=info msg="shim disconnected" id=49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec
Jul 2 08:18:57.219637 env[1254]: time="2024-07-02T08:18:57.219632049Z" level=warning msg="cleaning up after shim disconnected" id=49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec namespace=k8s.io
Jul 2 08:18:57.219637 env[1254]: time="2024-07-02T08:18:57.219638760Z" level=info msg="cleaning up dead shim"
Jul 2 08:18:57.224659 env[1254]: time="2024-07-02T08:18:57.224631344Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:18:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3773 runtime=io.containerd.runc.v2\n"
Jul 2 08:18:57.229750 env[1254]: time="2024-07-02T08:18:57.229726921Z" level=info msg="TearDown network for sandbox \"49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec\" successfully"
Jul 2 08:18:57.229750 env[1254]: time="2024-07-02T08:18:57.229746228Z" level=info msg="StopPodSandbox for \"49c0b38f18dda45565e8387f4255130cbfce9a478dd439ca7b45708e7bd866ec\" returns successfully"
Jul 2 08:18:57.249135 systemd[1]: cri-containerd-8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16.scope: Deactivated successfully.
Jul 2 08:18:57.249626 systemd[1]: cri-containerd-8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16.scope: Consumed 4.486s CPU time.
Jul 2 08:18:57.278388 env[1254]: time="2024-07-02T08:18:57.278358720Z" level=info msg="shim disconnected" id=8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16
Jul 2 08:18:57.278552 env[1254]: time="2024-07-02T08:18:57.278539764Z" level=warning msg="cleaning up after shim disconnected" id=8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16 namespace=k8s.io
Jul 2 08:18:57.278612 env[1254]: time="2024-07-02T08:18:57.278597333Z" level=info msg="cleaning up dead shim"
Jul 2 08:18:57.283345 env[1254]: time="2024-07-02T08:18:57.283326012Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:18:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3799 runtime=io.containerd.runc.v2\n"
Jul 2 08:18:57.288190 env[1254]: time="2024-07-02T08:18:57.288148162Z" level=info msg="StopContainer for \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\" returns successfully"
Jul 2 08:18:57.288614 env[1254]: time="2024-07-02T08:18:57.288595796Z" level=info msg="StopPodSandbox for \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\""
Jul 2 08:18:57.288655 env[1254]: time="2024-07-02T08:18:57.288640455Z" level=info msg="Container to stop \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:18:57.288655 env[1254]: time="2024-07-02T08:18:57.288649951Z" level=info msg="Container to stop \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:18:57.290249 env[1254]: time="2024-07-02T08:18:57.288656532Z" level=info msg="Container to stop \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:18:57.290249 env[1254]: time="2024-07-02T08:18:57.288663247Z" level=info msg="Container to stop \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:18:57.290249 env[1254]: time="2024-07-02T08:18:57.288668631Z" level=info msg="Container to stop \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 08:18:57.292793 systemd[1]: cri-containerd-4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2.scope: Deactivated successfully.
Jul 2 08:18:57.295177 kubelet[2163]: I0702 08:18:57.295072 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm794\" (UniqueName: \"kubernetes.io/projected/cc77f06c-d063-4cc9-a222-d7a12166038e-kube-api-access-dm794\") pod \"cc77f06c-d063-4cc9-a222-d7a12166038e\" (UID: \"cc77f06c-d063-4cc9-a222-d7a12166038e\") "
Jul 2 08:18:57.295177 kubelet[2163]: I0702 08:18:57.295129 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc77f06c-d063-4cc9-a222-d7a12166038e-cilium-config-path\") pod \"cc77f06c-d063-4cc9-a222-d7a12166038e\" (UID: \"cc77f06c-d063-4cc9-a222-d7a12166038e\") "
Jul 2 08:18:57.302439 kubelet[2163]: I0702 08:18:57.300423 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc77f06c-d063-4cc9-a222-d7a12166038e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cc77f06c-d063-4cc9-a222-d7a12166038e" (UID: "cc77f06c-d063-4cc9-a222-d7a12166038e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:18:57.309357 kubelet[2163]: I0702 08:18:57.309319 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc77f06c-d063-4cc9-a222-d7a12166038e-kube-api-access-dm794" (OuterVolumeSpecName: "kube-api-access-dm794") pod "cc77f06c-d063-4cc9-a222-d7a12166038e" (UID: "cc77f06c-d063-4cc9-a222-d7a12166038e"). InnerVolumeSpecName "kube-api-access-dm794". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:18:57.316400 env[1254]: time="2024-07-02T08:18:57.316348479Z" level=info msg="shim disconnected" id=4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2
Jul 2 08:18:57.316400 env[1254]: time="2024-07-02T08:18:57.316393380Z" level=warning msg="cleaning up after shim disconnected" id=4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2 namespace=k8s.io
Jul 2 08:18:57.316400 env[1254]: time="2024-07-02T08:18:57.316400001Z" level=info msg="cleaning up dead shim"
Jul 2 08:18:57.321373 env[1254]: time="2024-07-02T08:18:57.321343528Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:18:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3830 runtime=io.containerd.runc.v2\n"
Jul 2 08:18:57.321875 env[1254]: time="2024-07-02T08:18:57.321858697Z" level=info msg="TearDown network for sandbox \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" successfully"
Jul 2 08:18:57.321976 env[1254]: time="2024-07-02T08:18:57.321963854Z" level=info msg="StopPodSandbox for \"4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2\" returns successfully"
Jul 2 08:18:57.396537 kubelet[2163]: I0702 08:18:57.396187 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cni-path\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396537 kubelet[2163]: I0702 08:18:57.396234 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-clustermesh-secrets\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396537 kubelet[2163]: I0702 08:18:57.396244 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm9n5\" (UniqueName: \"kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-kube-api-access-nm9n5\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396537 kubelet[2163]: I0702 08:18:57.396351 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-etc-cni-netd\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396537 kubelet[2163]: I0702 08:18:57.396367 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hostproc\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396537 kubelet[2163]: I0702 08:18:57.396377 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hubble-tls\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396810 kubelet[2163]: I0702 08:18:57.396385 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-cgroup\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396810 kubelet[2163]: I0702 08:18:57.396394 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-config-path\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396810 kubelet[2163]: I0702 08:18:57.396402 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-bpf-maps\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396810 kubelet[2163]: I0702 08:18:57.396410 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-kernel\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396810 kubelet[2163]: I0702 08:18:57.396433 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-lib-modules\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.396810 kubelet[2163]: I0702 08:18:57.396445 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-run\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.397010 kubelet[2163]: I0702 08:18:57.396453 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-net\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.397010 kubelet[2163]: I0702 08:18:57.396462 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-xtables-lock\") pod \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\" (UID: \"f1336714-ffb8-4bc0-8af6-2c89f00a6e70\") "
Jul 2 08:18:57.397363 kubelet[2163]: I0702 08:18:57.397247 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:18:57.397819 kubelet[2163]: I0702 08:18:57.397699 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:18:57.397819 kubelet[2163]: I0702 08:18:57.397743 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:18:57.398652 kubelet[2163]: I0702 08:18:57.398640 2163 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dm794\" (UniqueName: \"kubernetes.io/projected/cc77f06c-d063-4cc9-a222-d7a12166038e-kube-api-access-dm794\") on node \"localhost\" DevicePath \"\""
Jul 2 08:18:57.398743 kubelet[2163]: I0702 08:18:57.398736 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cc77f06c-d063-4cc9-a222-d7a12166038e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 08:18:57.398813 kubelet[2163]: I0702 08:18:57.398711 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:18:57.398877 kubelet[2163]: I0702 08:18:57.398863 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:18:57.398963 kubelet[2163]: I0702 08:18:57.398954 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:18:57.400344 kubelet[2163]: I0702 08:18:57.399162 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:18:57.400427 kubelet[2163]: I0702 08:18:57.399172 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:18:57.400496 kubelet[2163]: I0702 08:18:57.399178 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:18:57.400759 kubelet[2163]: I0702 08:18:57.399183 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:18:57.400811 kubelet[2163]: I0702 08:18:57.400323 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:18:57.402648 kubelet[2163]: I0702 08:18:57.402634 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-kube-api-access-nm9n5" (OuterVolumeSpecName: "kube-api-access-nm9n5") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "kube-api-access-nm9n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:18:57.406107 kubelet[2163]: I0702 08:18:57.406095 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:18:57.409221 kubelet[2163]: I0702 08:18:57.409209 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1336714-ffb8-4bc0-8af6-2c89f00a6e70" (UID: "f1336714-ffb8-4bc0-8af6-2c89f00a6e70"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:18:57.498930 kubelet[2163]: I0702 08:18:57.498907 2163 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499068 kubelet[2163]: I0702 08:18:57.499059 2163 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nm9n5\" (UniqueName: \"kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-kube-api-access-nm9n5\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499124 kubelet[2163]: I0702 08:18:57.499116 2163 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499172 kubelet[2163]: I0702 08:18:57.499165 2163 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499223 kubelet[2163]: I0702 08:18:57.499216 2163 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499284 kubelet[2163]: I0702 08:18:57.499277 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499336 kubelet[2163]: I0702 08:18:57.499327 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499382 kubelet[2163]: I0702 
08:18:57.499375 2163 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499428 kubelet[2163]: I0702 08:18:57.499421 2163 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499476 kubelet[2163]: I0702 08:18:57.499469 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499520 kubelet[2163]: I0702 08:18:57.499512 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499566 kubelet[2163]: I0702 08:18:57.499558 2163 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499611 kubelet[2163]: I0702 08:18:57.499604 2163 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.499678 kubelet[2163]: I0702 08:18:57.499671 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1336714-ffb8-4bc0-8af6-2c89f00a6e70-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 08:18:57.507769 systemd[1]: Removed slice kubepods-besteffort-podcc77f06c_d063_4cc9_a222_d7a12166038e.slice. 
Jul 2 08:18:57.515399 systemd[1]: Removed slice kubepods-burstable-podf1336714_ffb8_4bc0_8af6_2c89f00a6e70.slice. Jul 2 08:18:57.515449 systemd[1]: kubepods-burstable-podf1336714_ffb8_4bc0_8af6_2c89f00a6e70.slice: Consumed 4.549s CPU time. Jul 2 08:18:57.527419 kubelet[2163]: I0702 08:18:57.527401 2163 scope.go:117] "RemoveContainer" containerID="c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f" Jul 2 08:18:57.531214 env[1254]: time="2024-07-02T08:18:57.530424635Z" level=info msg="RemoveContainer for \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\"" Jul 2 08:18:57.533302 env[1254]: time="2024-07-02T08:18:57.533215038Z" level=info msg="RemoveContainer for \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\" returns successfully" Jul 2 08:18:57.533387 kubelet[2163]: I0702 08:18:57.533373 2163 scope.go:117] "RemoveContainer" containerID="c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f" Jul 2 08:18:57.533653 env[1254]: time="2024-07-02T08:18:57.533564046Z" level=error msg="ContainerStatus for \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\": not found" Jul 2 08:18:57.538755 kubelet[2163]: E0702 08:18:57.538719 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\": not found" containerID="c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f" Jul 2 08:18:57.561236 kubelet[2163]: I0702 08:18:57.546132 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f"} err="failed to get container status 
\"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c73c3dd161c8855e355a529e1d220228c1f66c985fc38527213d210c5a83445f\": not found" Jul 2 08:18:57.561236 kubelet[2163]: I0702 08:18:57.561235 2163 scope.go:117] "RemoveContainer" containerID="8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16" Jul 2 08:18:57.562120 env[1254]: time="2024-07-02T08:18:57.562086884Z" level=info msg="RemoveContainer for \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\"" Jul 2 08:18:57.569823 env[1254]: time="2024-07-02T08:18:57.569797443Z" level=info msg="RemoveContainer for \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\" returns successfully" Jul 2 08:18:57.569954 kubelet[2163]: I0702 08:18:57.569932 2163 scope.go:117] "RemoveContainer" containerID="7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38" Jul 2 08:18:57.570515 env[1254]: time="2024-07-02T08:18:57.570492910Z" level=info msg="RemoveContainer for \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\"" Jul 2 08:18:57.585361 env[1254]: time="2024-07-02T08:18:57.585333796Z" level=info msg="RemoveContainer for \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\" returns successfully" Jul 2 08:18:57.585500 kubelet[2163]: I0702 08:18:57.585484 2163 scope.go:117] "RemoveContainer" containerID="5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1" Jul 2 08:18:57.595759 env[1254]: time="2024-07-02T08:18:57.595462740Z" level=info msg="RemoveContainer for \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\"" Jul 2 08:18:57.609284 env[1254]: time="2024-07-02T08:18:57.609216526Z" level=info msg="RemoveContainer for \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\" returns successfully" Jul 2 08:18:57.609511 kubelet[2163]: I0702 08:18:57.609491 2163 scope.go:117] "RemoveContainer" 
containerID="7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f" Jul 2 08:18:57.613475 env[1254]: time="2024-07-02T08:18:57.613460121Z" level=info msg="RemoveContainer for \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\"" Jul 2 08:18:57.661155 env[1254]: time="2024-07-02T08:18:57.659740645Z" level=info msg="RemoveContainer for \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\" returns successfully" Jul 2 08:18:57.661390 kubelet[2163]: I0702 08:18:57.661378 2163 scope.go:117] "RemoveContainer" containerID="e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2" Jul 2 08:18:57.669401 env[1254]: time="2024-07-02T08:18:57.669383845Z" level=info msg="RemoveContainer for \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\"" Jul 2 08:18:57.687619 env[1254]: time="2024-07-02T08:18:57.687601401Z" level=info msg="RemoveContainer for \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\" returns successfully" Jul 2 08:18:57.687703 kubelet[2163]: I0702 08:18:57.687690 2163 scope.go:117] "RemoveContainer" containerID="8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16" Jul 2 08:18:57.687881 env[1254]: time="2024-07-02T08:18:57.687850481Z" level=error msg="ContainerStatus for \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\": not found" Jul 2 08:18:57.688011 kubelet[2163]: E0702 08:18:57.687996 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\": not found" containerID="8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16" Jul 2 08:18:57.688056 kubelet[2163]: I0702 08:18:57.688013 2163 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16"} err="failed to get container status \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16\": not found" Jul 2 08:18:57.688056 kubelet[2163]: I0702 08:18:57.688025 2163 scope.go:117] "RemoveContainer" containerID="7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38" Jul 2 08:18:57.688233 env[1254]: time="2024-07-02T08:18:57.688207836Z" level=error msg="ContainerStatus for \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\": not found" Jul 2 08:18:57.688329 kubelet[2163]: E0702 08:18:57.688319 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\": not found" containerID="7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38" Jul 2 08:18:57.688362 kubelet[2163]: I0702 08:18:57.688329 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38"} err="failed to get container status \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b9e1d53a6c5624452e1659795704bceb89be11510df0333782ceb03ac38ae38\": not found" Jul 2 08:18:57.688362 kubelet[2163]: I0702 08:18:57.688336 2163 scope.go:117] "RemoveContainer" 
containerID="5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1" Jul 2 08:18:57.694277 kubelet[2163]: E0702 08:18:57.688538 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\": not found" containerID="5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1" Jul 2 08:18:57.694277 kubelet[2163]: I0702 08:18:57.688548 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1"} err="failed to get container status \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\": not found" Jul 2 08:18:57.694277 kubelet[2163]: I0702 08:18:57.688557 2163 scope.go:117] "RemoveContainer" containerID="7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f" Jul 2 08:18:57.694277 kubelet[2163]: E0702 08:18:57.688736 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\": not found" containerID="7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f" Jul 2 08:18:57.694277 kubelet[2163]: I0702 08:18:57.688762 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f"} err="failed to get container status \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\": not 
found" Jul 2 08:18:57.694277 kubelet[2163]: I0702 08:18:57.688770 2163 scope.go:117] "RemoveContainer" containerID="e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2" Jul 2 08:18:57.694393 env[1254]: time="2024-07-02T08:18:57.688473322Z" level=error msg="ContainerStatus for \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bea1e1721e477ca775fecf574e794b8b319b2ade48c00bacddc9291b79354c1\": not found" Jul 2 08:18:57.694393 env[1254]: time="2024-07-02T08:18:57.688651928Z" level=error msg="ContainerStatus for \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f6879000c2ab87bdc1bd4ab23b204c8ed0b521a913513dde5eab3bc880f899f\": not found" Jul 2 08:18:57.694393 env[1254]: time="2024-07-02T08:18:57.688877997Z" level=error msg="ContainerStatus for \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\": not found" Jul 2 08:18:57.694456 kubelet[2163]: E0702 08:18:57.688962 2163 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\": not found" containerID="e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2" Jul 2 08:18:57.694456 kubelet[2163]: I0702 08:18:57.688972 2163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2"} err="failed to get container status \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"e1e53c7fea4bdbaf1d9de87cf0419898e080e8168cc6d885462aa120e4e90aa2\": not found" Jul 2 08:18:58.112779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ecd95ae7dc6f3fb03e5d49100d255e5f3d5a0bd37937384d9011c5afce47c16-rootfs.mount: Deactivated successfully. Jul 2 08:18:58.112842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2-rootfs.mount: Deactivated successfully. Jul 2 08:18:58.112877 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4faa07b8afa9ba5bd9a181cf7d3753c157f3136e9e9a9fa915f2eec77f25f3a2-shm.mount: Deactivated successfully. Jul 2 08:18:58.112926 systemd[1]: var-lib-kubelet-pods-cc77f06c\x2dd063\x2d4cc9\x2da222\x2dd7a12166038e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddm794.mount: Deactivated successfully. Jul 2 08:18:58.112965 systemd[1]: var-lib-kubelet-pods-f1336714\x2dffb8\x2d4bc0\x2d8af6\x2d2c89f00a6e70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnm9n5.mount: Deactivated successfully. Jul 2 08:18:58.113006 systemd[1]: var-lib-kubelet-pods-f1336714\x2dffb8\x2d4bc0\x2d8af6\x2d2c89f00a6e70-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:18:58.113045 systemd[1]: var-lib-kubelet-pods-f1336714\x2dffb8\x2d4bc0\x2d8af6\x2d2c89f00a6e70-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 08:18:58.241769 kubelet[2163]: I0702 08:18:58.241742 2163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc77f06c-d063-4cc9-a222-d7a12166038e" path="/var/lib/kubelet/pods/cc77f06c-d063-4cc9-a222-d7a12166038e/volumes" Jul 2 08:18:58.257087 kubelet[2163]: I0702 08:18:58.257064 2163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" path="/var/lib/kubelet/pods/f1336714-ffb8-4bc0-8af6-2c89f00a6e70/volumes" Jul 2 08:18:59.066227 sshd[3685]: pam_unix(sshd:session): session closed for user core Jul 2 08:18:59.066551 systemd[1]: Started sshd@22-139.178.70.99:22-139.178.68.195:43552.service. Jul 2 08:18:59.068745 systemd[1]: sshd@21-139.178.70.99:22-139.178.68.195:43540.service: Deactivated successfully. Jul 2 08:18:59.069163 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:18:59.069953 systemd-logind[1241]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:18:59.070555 systemd-logind[1241]: Removed session 24. Jul 2 08:18:59.104620 sshd[3848]: Accepted publickey for core from 139.178.68.195 port 43552 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:18:59.105545 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:18:59.108448 systemd-logind[1241]: New session 25 of user core. Jul 2 08:18:59.109065 systemd[1]: Started session-25.scope. Jul 2 08:18:59.801354 sshd[3848]: pam_unix(sshd:session): session closed for user core Jul 2 08:18:59.803955 systemd[1]: Started sshd@23-139.178.70.99:22-139.178.68.195:43556.service. Jul 2 08:18:59.806596 systemd[1]: sshd@22-139.178.70.99:22-139.178.68.195:43552.service: Deactivated successfully. Jul 2 08:18:59.807170 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 08:18:59.807600 systemd-logind[1241]: Session 25 logged out. Waiting for processes to exit. Jul 2 08:18:59.808366 systemd-logind[1241]: Removed session 25. 
Jul 2 08:18:59.834842 kubelet[2163]: I0702 08:18:59.834807 2163 topology_manager.go:215] "Topology Admit Handler" podUID="aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" podNamespace="kube-system" podName="cilium-sqfmt" Jul 2 08:18:59.836239 sshd[3859]: Accepted publickey for core from 139.178.68.195 port 43556 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:18:59.837111 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:18:59.839924 systemd-logind[1241]: New session 26 of user core. Jul 2 08:18:59.840391 systemd[1]: Started session-26.scope. Jul 2 08:18:59.864083 kubelet[2163]: E0702 08:18:59.863861 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" containerName="apply-sysctl-overwrites" Jul 2 08:18:59.864083 kubelet[2163]: E0702 08:18:59.863890 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cc77f06c-d063-4cc9-a222-d7a12166038e" containerName="cilium-operator" Jul 2 08:18:59.864083 kubelet[2163]: E0702 08:18:59.863902 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" containerName="mount-bpf-fs" Jul 2 08:18:59.864083 kubelet[2163]: E0702 08:18:59.863908 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" containerName="mount-cgroup" Jul 2 08:18:59.864083 kubelet[2163]: E0702 08:18:59.863912 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" containerName="clean-cilium-state" Jul 2 08:18:59.864083 kubelet[2163]: E0702 08:18:59.863916 2163 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" containerName="cilium-agent" Jul 2 08:18:59.874420 kubelet[2163]: I0702 08:18:59.874399 2163 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc77f06c-d063-4cc9-a222-d7a12166038e" 
containerName="cilium-operator"
Jul 2 08:18:59.874520 kubelet[2163]: I0702 08:18:59.874511 2163 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1336714-ffb8-4bc0-8af6-2c89f00a6e70" containerName="cilium-agent"
Jul 2 08:18:59.899579 systemd[1]: Created slice kubepods-burstable-podaead17d9_2ce1_4ca3_97e8_d4b55dae1e0d.slice.
Jul 2 08:18:59.924738 kubelet[2163]: I0702 08:18:59.924706 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-run\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.924738 kubelet[2163]: I0702 08:18:59.924741 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-config-path\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.924883 kubelet[2163]: I0702 08:18:59.924758 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-bpf-maps\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.924883 kubelet[2163]: I0702 08:18:59.924770 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hostproc\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.924883 kubelet[2163]: I0702 08:18:59.924779 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-etc-cni-netd\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.924883 kubelet[2163]: I0702 08:18:59.924787 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-xtables-lock\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.924883 kubelet[2163]: I0702 08:18:59.924797 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-net\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.924883 kubelet[2163]: I0702 08:18:59.924807 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-lib-modules\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.925029 kubelet[2163]: I0702 08:18:59.924816 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6mj\" (UniqueName: \"kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-kube-api-access-7g6mj\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.925029 kubelet[2163]: I0702 08:18:59.924827 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-cgroup\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.925029 kubelet[2163]: I0702 08:18:59.924836 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-kernel\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.925029 kubelet[2163]: I0702 08:18:59.924844 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hubble-tls\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.925029 kubelet[2163]: I0702 08:18:59.924854 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-ipsec-secrets\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.925029 kubelet[2163]: I0702 08:18:59.924868 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cni-path\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:18:59.925152 kubelet[2163]: I0702 08:18:59.924879 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-clustermesh-secrets\") pod \"cilium-sqfmt\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") " pod="kube-system/cilium-sqfmt"
Jul 2 08:19:00.011067 systemd[1]: Started sshd@24-139.178.70.99:22-139.178.68.195:43562.service.
Jul 2 08:19:00.012069 sshd[3859]: pam_unix(sshd:session): session closed for user core
Jul 2 08:19:00.015442 systemd-logind[1241]: Session 26 logged out. Waiting for processes to exit.
Jul 2 08:19:00.015538 systemd[1]: sshd@23-139.178.70.99:22-139.178.68.195:43556.service: Deactivated successfully.
Jul 2 08:19:00.015993 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 08:19:00.016535 systemd-logind[1241]: Removed session 26.
Jul 2 08:19:00.045185 kubelet[2163]: E0702 08:19:00.045160 2163 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-cgroup cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path host-proc-sys-kernel hubble-tls kube-api-access-7g6mj], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-sqfmt" podUID="aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"
Jul 2 08:19:00.060452 sshd[3871]: Accepted publickey for core from 139.178.68.195 port 43562 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk
Jul 2 08:19:00.062239 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:19:00.064730 systemd-logind[1241]: New session 27 of user core.
Jul 2 08:19:00.065284 systemd[1]: Started session-27.scope.
Jul 2 08:19:00.628077 kubelet[2163]: I0702 08:19:00.628050 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hubble-tls\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.628254 kubelet[2163]: I0702 08:19:00.628240 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hostproc\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.628327 kubelet[2163]: I0702 08:19:00.628315 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-lib-modules\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.628402 kubelet[2163]: I0702 08:19:00.628390 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-xtables-lock\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.628482 kubelet[2163]: I0702 08:19:00.628467 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-config-path\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.628559 kubelet[2163]: I0702 08:19:00.628547 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-cgroup\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.628633 kubelet[2163]: I0702 08:19:00.628620 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-ipsec-secrets\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.629951 kubelet[2163]: I0702 08:19:00.629940 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cni-path\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630039 kubelet[2163]: I0702 08:19:00.630027 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-net\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630118 kubelet[2163]: I0702 08:19:00.630106 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-etc-cni-netd\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630192 kubelet[2163]: I0702 08:19:00.630180 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-kernel\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630264 kubelet[2163]: I0702 08:19:00.630253 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-run\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630339 kubelet[2163]: I0702 08:19:00.630327 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g6mj\" (UniqueName: \"kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-kube-api-access-7g6mj\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630416 kubelet[2163]: I0702 08:19:00.630405 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-clustermesh-secrets\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630489 kubelet[2163]: I0702 08:19:00.630477 2163 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-bpf-maps\") pod \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\" (UID: \"aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d\") "
Jul 2 08:19:00.630581 kubelet[2163]: I0702 08:19:00.628709 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.632045 systemd[1]: var-lib-kubelet-pods-aead17d9\x2d2ce1\x2d4ca3\x2d97e8\x2dd4b55dae1e0d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 08:19:00.632788 kubelet[2163]: I0702 08:19:00.628722 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hostproc" (OuterVolumeSpecName: "hostproc") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.632866 kubelet[2163]: I0702 08:19:00.628729 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.633035 kubelet[2163]: I0702 08:19:00.629900 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 08:19:00.634410 kubelet[2163]: I0702 08:19:00.629918 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.634486 kubelet[2163]: I0702 08:19:00.630567 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.634549 kubelet[2163]: I0702 08:19:00.632624 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cni-path" (OuterVolumeSpecName: "cni-path") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.634614 kubelet[2163]: I0702 08:19:00.632638 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.634671 kubelet[2163]: I0702 08:19:00.632646 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.634731 kubelet[2163]: I0702 08:19:00.632655 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.634791 kubelet[2163]: I0702 08:19:00.632662 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 08:19:00.634864 kubelet[2163]: I0702 08:19:00.634254 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:19:00.635336 kubelet[2163]: I0702 08:19:00.635317 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-kube-api-access-7g6mj" (OuterVolumeSpecName: "kube-api-access-7g6mj") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "kube-api-access-7g6mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 08:19:00.635977 kubelet[2163]: I0702 08:19:00.635955 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:19:00.637683 kubelet[2163]: I0702 08:19:00.637668 2163 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" (UID: "aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 08:19:00.731300 kubelet[2163]: I0702 08:19:00.731261 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.731435 kubelet[2163]: I0702 08:19:00.731427 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.731486 kubelet[2163]: I0702 08:19:00.731478 2163 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7g6mj\" (UniqueName: \"kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-kube-api-access-7g6mj\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.731536 kubelet[2163]: I0702 08:19:00.731529 2163 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.731591 kubelet[2163]: I0702 08:19:00.731584 2163 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.731641 kubelet[2163]: I0702 08:19:00.731634 2163 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731763 2163 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731771 2163 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731775 2163 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731780 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731785 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731789 2163 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731793 2163 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732059 kubelet[2163]: I0702 08:19:00.731798 2163 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:00.732215 kubelet[2163]: I0702 08:19:00.731803 2163 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 2 08:19:01.036235 systemd[1]: var-lib-kubelet-pods-aead17d9\x2d2ce1\x2d4ca3\x2d97e8\x2dd4b55dae1e0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7g6mj.mount: Deactivated successfully.
Jul 2 08:19:01.036298 systemd[1]: var-lib-kubelet-pods-aead17d9\x2d2ce1\x2d4ca3\x2d97e8\x2dd4b55dae1e0d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 08:19:01.036333 systemd[1]: var-lib-kubelet-pods-aead17d9\x2d2ce1\x2d4ca3\x2d97e8\x2dd4b55dae1e0d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Jul 2 08:19:01.390768 kubelet[2163]: E0702 08:19:01.390618 2163 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 08:19:01.517692 systemd[1]: Removed slice kubepods-burstable-podaead17d9_2ce1_4ca3_97e8_d4b55dae1e0d.slice.
Jul 2 08:19:01.569902 kubelet[2163]: I0702 08:19:01.569862 2163 topology_manager.go:215] "Topology Admit Handler" podUID="810c457f-6db6-4452-87b2-8825a43cbf04" podNamespace="kube-system" podName="cilium-tc6b8"
Jul 2 08:19:01.573849 systemd[1]: Created slice kubepods-burstable-pod810c457f_6db6_4452_87b2_8825a43cbf04.slice.
Jul 2 08:19:01.636039 kubelet[2163]: I0702 08:19:01.636008 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-xtables-lock\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636139 kubelet[2163]: I0702 08:19:01.636044 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnk5\" (UniqueName: \"kubernetes.io/projected/810c457f-6db6-4452-87b2-8825a43cbf04-kube-api-access-9wnk5\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636139 kubelet[2163]: I0702 08:19:01.636065 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-cni-path\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636139 kubelet[2163]: I0702 08:19:01.636080 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/810c457f-6db6-4452-87b2-8825a43cbf04-hubble-tls\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636139 kubelet[2163]: I0702 08:19:01.636097 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-cilium-run\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636139 kubelet[2163]: I0702 08:19:01.636116 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/810c457f-6db6-4452-87b2-8825a43cbf04-clustermesh-secrets\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636139 kubelet[2163]: I0702 08:19:01.636132 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-host-proc-sys-net\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636267 kubelet[2163]: I0702 08:19:01.636147 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-bpf-maps\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636267 kubelet[2163]: I0702 08:19:01.636162 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-lib-modules\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636267 kubelet[2163]: I0702 08:19:01.636177 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/810c457f-6db6-4452-87b2-8825a43cbf04-cilium-ipsec-secrets\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636267 kubelet[2163]: I0702 08:19:01.636192 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-hostproc\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636267 kubelet[2163]: I0702 08:19:01.636205 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-etc-cni-netd\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636267 kubelet[2163]: I0702 08:19:01.636219 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-cilium-cgroup\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636377 kubelet[2163]: I0702 08:19:01.636234 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/810c457f-6db6-4452-87b2-8825a43cbf04-cilium-config-path\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.636377 kubelet[2163]: I0702 08:19:01.636250 2163 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/810c457f-6db6-4452-87b2-8825a43cbf04-host-proc-sys-kernel\") pod \"cilium-tc6b8\" (UID: \"810c457f-6db6-4452-87b2-8825a43cbf04\") " pod="kube-system/cilium-tc6b8"
Jul 2 08:19:01.876309 env[1254]: time="2024-07-02T08:19:01.876262591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tc6b8,Uid:810c457f-6db6-4452-87b2-8825a43cbf04,Namespace:kube-system,Attempt:0,}"
Jul 2 08:19:01.883707 env[1254]: time="2024-07-02T08:19:01.883639553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 08:19:01.883801 env[1254]: time="2024-07-02T08:19:01.883706908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 08:19:01.883801 env[1254]: time="2024-07-02T08:19:01.883724344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 08:19:01.883990 env[1254]: time="2024-07-02T08:19:01.883958447Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e pid=3900 runtime=io.containerd.runc.v2
Jul 2 08:19:01.892536 systemd[1]: Started cri-containerd-21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e.scope.
Jul 2 08:19:01.914038 env[1254]: time="2024-07-02T08:19:01.914004240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tc6b8,Uid:810c457f-6db6-4452-87b2-8825a43cbf04,Namespace:kube-system,Attempt:0,} returns sandbox id \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\""
Jul 2 08:19:01.916693 env[1254]: time="2024-07-02T08:19:01.916671187Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 08:19:01.922585 env[1254]: time="2024-07-02T08:19:01.922561883Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f71f1bbc961f6f274d7159a60b20065252ef65b88fcd9715e1ebbe8b0d0b180a\""
Jul 2 08:19:01.923567 env[1254]: time="2024-07-02T08:19:01.923540715Z" level=info msg="StartContainer for \"f71f1bbc961f6f274d7159a60b20065252ef65b88fcd9715e1ebbe8b0d0b180a\""
Jul 2 08:19:01.936798 systemd[1]: Started cri-containerd-f71f1bbc961f6f274d7159a60b20065252ef65b88fcd9715e1ebbe8b0d0b180a.scope.
Jul 2 08:19:01.957031 env[1254]: time="2024-07-02T08:19:01.957005519Z" level=info msg="StartContainer for \"f71f1bbc961f6f274d7159a60b20065252ef65b88fcd9715e1ebbe8b0d0b180a\" returns successfully"
Jul 2 08:19:01.988436 systemd[1]: cri-containerd-f71f1bbc961f6f274d7159a60b20065252ef65b88fcd9715e1ebbe8b0d0b180a.scope: Deactivated successfully.
Jul 2 08:19:02.059493 env[1254]: time="2024-07-02T08:19:02.059450945Z" level=info msg="shim disconnected" id=f71f1bbc961f6f274d7159a60b20065252ef65b88fcd9715e1ebbe8b0d0b180a
Jul 2 08:19:02.059493 env[1254]: time="2024-07-02T08:19:02.059486995Z" level=warning msg="cleaning up after shim disconnected" id=f71f1bbc961f6f274d7159a60b20065252ef65b88fcd9715e1ebbe8b0d0b180a namespace=k8s.io
Jul 2 08:19:02.059493 env[1254]: time="2024-07-02T08:19:02.059493541Z" level=info msg="cleaning up dead shim"
Jul 2 08:19:02.067398 env[1254]: time="2024-07-02T08:19:02.067366440Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:19:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3984 runtime=io.containerd.runc.v2\n"
Jul 2 08:19:02.241357 kubelet[2163]: I0702 08:19:02.241206 2163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d" path="/var/lib/kubelet/pods/aead17d9-2ce1-4ca3-97e8-d4b55dae1e0d/volumes"
Jul 2 08:19:02.517424 env[1254]: time="2024-07-02T08:19:02.517357569Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 08:19:02.588164 env[1254]: time="2024-07-02T08:19:02.588130993Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667\""
Jul 2 08:19:02.588745 env[1254]: time="2024-07-02T08:19:02.588727892Z" level=info msg="StartContainer for \"9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667\""
Jul 2 08:19:02.605640 systemd[1]: Started cri-containerd-9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667.scope.
Jul 2 08:19:02.637346 env[1254]: time="2024-07-02T08:19:02.637319776Z" level=info msg="StartContainer for \"9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667\" returns successfully"
Jul 2 08:19:02.676625 systemd[1]: cri-containerd-9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667.scope: Deactivated successfully.
Jul 2 08:19:02.786845 env[1254]: time="2024-07-02T08:19:02.786819243Z" level=info msg="shim disconnected" id=9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667
Jul 2 08:19:02.787051 env[1254]: time="2024-07-02T08:19:02.787038282Z" level=warning msg="cleaning up after shim disconnected" id=9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667 namespace=k8s.io
Jul 2 08:19:02.787107 env[1254]: time="2024-07-02T08:19:02.787096765Z" level=info msg="cleaning up dead shim"
Jul 2 08:19:02.791659 env[1254]: time="2024-07-02T08:19:02.791638828Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:19:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4046 runtime=io.containerd.runc.v2\n"
Jul 2 08:19:03.037039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b19e6bbc2739dedc01e83f94473b90db3f1696ddc9888ac6935475157243667-rootfs.mount: Deactivated successfully.
Jul 2 08:19:03.519104 env[1254]: time="2024-07-02T08:19:03.519039767Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 08:19:03.544951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654202805.mount: Deactivated successfully.
Jul 2 08:19:03.548472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1957571965.mount: Deactivated successfully.
Jul 2 08:19:03.562200 env[1254]: time="2024-07-02T08:19:03.562166651Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"89d0f602cef16d77346072453e6b46363fa0138cbcf208f91eada479f188b3b4\"" Jul 2 08:19:03.562626 env[1254]: time="2024-07-02T08:19:03.562612270Z" level=info msg="StartContainer for \"89d0f602cef16d77346072453e6b46363fa0138cbcf208f91eada479f188b3b4\"" Jul 2 08:19:03.573633 systemd[1]: Started cri-containerd-89d0f602cef16d77346072453e6b46363fa0138cbcf208f91eada479f188b3b4.scope. Jul 2 08:19:03.599582 env[1254]: time="2024-07-02T08:19:03.599538858Z" level=info msg="StartContainer for \"89d0f602cef16d77346072453e6b46363fa0138cbcf208f91eada479f188b3b4\" returns successfully" Jul 2 08:19:03.652006 systemd[1]: cri-containerd-89d0f602cef16d77346072453e6b46363fa0138cbcf208f91eada479f188b3b4.scope: Deactivated successfully. Jul 2 08:19:03.668842 env[1254]: time="2024-07-02T08:19:03.668798893Z" level=info msg="shim disconnected" id=89d0f602cef16d77346072453e6b46363fa0138cbcf208f91eada479f188b3b4 Jul 2 08:19:03.668842 env[1254]: time="2024-07-02T08:19:03.668834985Z" level=warning msg="cleaning up after shim disconnected" id=89d0f602cef16d77346072453e6b46363fa0138cbcf208f91eada479f188b3b4 namespace=k8s.io Jul 2 08:19:03.668842 env[1254]: time="2024-07-02T08:19:03.668843957Z" level=info msg="cleaning up dead shim" Jul 2 08:19:03.674126 env[1254]: time="2024-07-02T08:19:03.674091716Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:19:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4107 runtime=io.containerd.runc.v2\n" Jul 2 08:19:04.521587 env[1254]: time="2024-07-02T08:19:04.521554451Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:19:04.560266 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2988132267.mount: Deactivated successfully. Jul 2 08:19:04.592609 env[1254]: time="2024-07-02T08:19:04.592579509Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a\"" Jul 2 08:19:04.593753 env[1254]: time="2024-07-02T08:19:04.593075643Z" level=info msg="StartContainer for \"b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a\"" Jul 2 08:19:04.607351 systemd[1]: Started cri-containerd-b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a.scope. Jul 2 08:19:04.623998 systemd[1]: cri-containerd-b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a.scope: Deactivated successfully. Jul 2 08:19:04.642220 env[1254]: time="2024-07-02T08:19:04.642195361Z" level=info msg="StartContainer for \"b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a\" returns successfully" Jul 2 08:19:04.692949 env[1254]: time="2024-07-02T08:19:04.692874330Z" level=info msg="shim disconnected" id=b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a Jul 2 08:19:04.693105 env[1254]: time="2024-07-02T08:19:04.693093717Z" level=warning msg="cleaning up after shim disconnected" id=b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a namespace=k8s.io Jul 2 08:19:04.693159 env[1254]: time="2024-07-02T08:19:04.693143937Z" level=info msg="cleaning up dead shim" Jul 2 08:19:04.697863 env[1254]: time="2024-07-02T08:19:04.697832037Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:19:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4162 runtime=io.containerd.runc.v2\n" Jul 2 08:19:05.036503 systemd[1]: run-containerd-runc-k8s.io-b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a-runc.DcOFqa.mount: Deactivated successfully. 
Jul 2 08:19:05.036565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b85d0e4fa71f036c4118f94d16161d553e9e0509cad2479d1b60f90a3a6b066a-rootfs.mount: Deactivated successfully. Jul 2 08:19:05.524074 env[1254]: time="2024-07-02T08:19:05.524014788Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:19:05.563064 env[1254]: time="2024-07-02T08:19:05.563031832Z" level=info msg="CreateContainer within sandbox \"21a8691eb760d7eb1dfd82a1a987ed52e753d6f7fe0f8c106dfef5574b00d57e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0085f6ac4abe44934fb523e57b537f57a404cc67bbbcef1df40550ad1456db49\"" Jul 2 08:19:05.563346 env[1254]: time="2024-07-02T08:19:05.563332207Z" level=info msg="StartContainer for \"0085f6ac4abe44934fb523e57b537f57a404cc67bbbcef1df40550ad1456db49\"" Jul 2 08:19:05.574541 systemd[1]: Started cri-containerd-0085f6ac4abe44934fb523e57b537f57a404cc67bbbcef1df40550ad1456db49.scope. Jul 2 08:19:05.599418 env[1254]: time="2024-07-02T08:19:05.599379838Z" level=info msg="StartContainer for \"0085f6ac4abe44934fb523e57b537f57a404cc67bbbcef1df40550ad1456db49\" returns successfully" Jul 2 08:19:06.657917 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 08:19:08.354079 systemd[1]: run-containerd-runc-k8s.io-0085f6ac4abe44934fb523e57b537f57a404cc67bbbcef1df40550ad1456db49-runc.bffgVw.mount: Deactivated successfully. 
Jul 2 08:19:08.429914 kubelet[2163]: E0702 08:19:08.429870 2163 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:37772->127.0.0.1:44183: read tcp 127.0.0.1:37772->127.0.0.1:44183: read: connection reset by peer Jul 2 08:19:08.430194 kubelet[2163]: E0702 08:19:08.430182 2163 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37772->127.0.0.1:44183: write tcp 127.0.0.1:37772->127.0.0.1:44183: write: broken pipe Jul 2 08:19:09.182951 systemd-networkd[1061]: lxc_health: Link UP Jul 2 08:19:09.218946 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:19:09.223054 systemd-networkd[1061]: lxc_health: Gained carrier Jul 2 08:19:09.893224 kubelet[2163]: I0702 08:19:09.893182 2163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tc6b8" podStartSLOduration=8.892080975 podStartE2EDuration="8.892080975s" podCreationTimestamp="2024-07-02 08:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:19:06.545501389 +0000 UTC m=+140.390909341" watchObservedRunningTime="2024-07-02 08:19:09.892080975 +0000 UTC m=+143.737488924" Jul 2 08:19:10.471376 systemd[1]: run-containerd-runc-k8s.io-0085f6ac4abe44934fb523e57b537f57a404cc67bbbcef1df40550ad1456db49-runc.UcsZl8.mount: Deactivated successfully. Jul 2 08:19:10.647017 systemd-networkd[1061]: lxc_health: Gained IPv6LL Jul 2 08:19:12.591275 systemd[1]: run-containerd-runc-k8s.io-0085f6ac4abe44934fb523e57b537f57a404cc67bbbcef1df40550ad1456db49-runc.bj0NDh.mount: Deactivated successfully. Jul 2 08:19:14.723141 sshd[3871]: pam_unix(sshd:session): session closed for user core Jul 2 08:19:14.725660 systemd[1]: sshd@24-139.178.70.99:22-139.178.68.195:43562.service: Deactivated successfully. Jul 2 08:19:14.726398 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 2 08:19:14.727328 systemd-logind[1241]: Session 27 logged out. Waiting for processes to exit. Jul 2 08:19:14.728131 systemd-logind[1241]: Removed session 27.