Jul 2 08:01:51.646776 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 08:01:51.646795 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:01:51.646803 kernel: Disabled fast string operations
Jul 2 08:01:51.646808 kernel: BIOS-provided physical RAM map:
Jul 2 08:01:51.646813 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 2 08:01:51.646819 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 2 08:01:51.646828 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 2 08:01:51.646835 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 2 08:01:51.647636 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 2 08:01:51.647641 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 2 08:01:51.647645 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 2 08:01:51.647649 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 2 08:01:51.647653 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 2 08:01:51.647658 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 2 08:01:51.647665 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 2 08:01:51.647670 kernel: NX (Execute Disable) protection: active
Jul 2 08:01:51.647675 kernel: SMBIOS 2.7 present.
Jul 2 08:01:51.647680 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 2 08:01:51.647684 kernel: vmware: hypercall mode: 0x00
Jul 2 08:01:51.647689 kernel: Hypervisor detected: VMware
Jul 2 08:01:51.647694 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 2 08:01:51.647699 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 2 08:01:51.647703 kernel: vmware: using clock offset of 3294053308 ns
Jul 2 08:01:51.647708 kernel: tsc: Detected 3408.000 MHz processor
Jul 2 08:01:51.647713 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 08:01:51.647718 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 08:01:51.647723 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 2 08:01:51.647727 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 08:01:51.647732 kernel: total RAM covered: 3072M
Jul 2 08:01:51.647737 kernel: Found optimal setting for mtrr clean up
Jul 2 08:01:51.647743 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 2 08:01:51.647747 kernel: Using GB pages for direct mapping
Jul 2 08:01:51.647752 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:01:51.647756 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 2 08:01:51.647761 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 2 08:01:51.647766 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 2 08:01:51.647770 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 2 08:01:51.647775 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 2 08:01:51.647780 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 2 08:01:51.647786 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 2 08:01:51.647793 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 2 08:01:51.647798 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 2 08:01:51.647803 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 2 08:01:51.647808 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 2 08:01:51.647817 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 2 08:01:51.647825 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 2 08:01:51.647833 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 2 08:01:51.647841 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 2 08:01:51.647849 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 2 08:01:51.647856 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 2 08:01:51.647864 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 2 08:01:51.647872 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 2 08:01:51.647880 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 2 08:01:51.647889 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 2 08:01:51.647897 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 2 08:01:51.647903 kernel: system APIC only can use physical flat
Jul 2 08:01:51.647908 kernel: Setting APIC routing to physical flat.
Jul 2 08:01:51.647914 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 08:01:51.647922 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 2 08:01:51.647928 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 2 08:01:51.647935 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 2 08:01:51.647943 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 2 08:01:51.647950 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 2 08:01:51.647954 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 2 08:01:51.647959 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 2 08:01:51.647964 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 2 08:01:51.647969 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 2 08:01:51.647974 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 2 08:01:51.647979 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 2 08:01:51.647985 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 2 08:01:51.647994 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 2 08:01:51.648003 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 2 08:01:51.648013 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 2 08:01:51.648020 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 2 08:01:51.648025 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 2 08:01:51.648030 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 2 08:01:51.648035 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 2 08:01:51.648040 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 2 08:01:51.648045 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 2 08:01:51.648050 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 2 08:01:51.648056 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 2 08:01:51.648066 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 2 08:01:51.648075 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 2 08:01:51.648082 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 2 08:01:51.648090 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 2 08:01:51.648098 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 2 08:01:51.648103 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 2 08:01:51.648108 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jul 2 08:01:51.648116 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 2 08:01:51.648125 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 2 08:01:51.648131 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 2 08:01:51.648140 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 2 08:01:51.648152 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jul 2 08:01:51.648159 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 2 08:01:51.648167 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 2 08:01:51.648174 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 2 08:01:51.648179 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 2 08:01:51.648184 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 2 08:01:51.648189 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 2 08:01:51.648194 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 2 08:01:51.648199 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 2 08:01:51.648207 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 2 08:01:51.648214 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 2 08:01:51.648221 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 2 08:01:51.648226 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 2 08:01:51.648231 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 2 08:01:51.648236 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 2 08:01:51.648241 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 2 08:01:51.648246 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 2 08:01:51.648251 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 2 08:01:51.648256 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jul 2 08:01:51.648261 kernel: SRAT: PXM 0 -> 
APIC 0x6c -> Node 0 Jul 2 08:01:51.648266 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jul 2 08:01:51.648271 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jul 2 08:01:51.648276 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jul 2 08:01:51.648281 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jul 2 08:01:51.648286 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jul 2 08:01:51.648292 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jul 2 08:01:51.648300 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jul 2 08:01:51.648306 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jul 2 08:01:51.648311 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jul 2 08:01:51.648316 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jul 2 08:01:51.648323 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jul 2 08:01:51.648328 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jul 2 08:01:51.648333 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jul 2 08:01:51.648339 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jul 2 08:01:51.648344 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jul 2 08:01:51.648349 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jul 2 08:01:51.648355 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jul 2 08:01:51.648360 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jul 2 08:01:51.648366 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jul 2 08:01:51.648371 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jul 2 08:01:51.648376 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jul 2 08:01:51.648384 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jul 2 08:01:51.648389 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jul 2 08:01:51.648395 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jul 2 08:01:51.648402 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jul 2 08:01:51.648409 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jul 2 08:01:51.648417 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jul 2 08:01:51.648423 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jul 2 08:01:51.648428 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jul 2 08:01:51.648433 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jul 2 08:01:51.648440 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jul 2 08:01:51.648449 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jul 2 08:01:51.648458 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jul 2 08:01:51.648466 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jul 2 08:01:51.648472 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jul 2 08:01:51.648479 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jul 2 08:01:51.648485 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jul 2 08:01:51.648494 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jul 2 08:01:51.648503 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jul 2 08:01:51.648511 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jul 2 08:01:51.648520 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jul 2 08:01:51.648537 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jul 2 08:01:51.648547 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jul 2 08:01:51.648553 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jul 2 08:01:51.648558 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jul 2 08:01:51.648565 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jul 2 08:01:51.648570 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jul 2 08:01:51.648575 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jul 2 08:01:51.648581 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jul 2 08:01:51.648589 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jul 2 08:01:51.648598 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jul 2 08:01:51.648604 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jul 2 08:01:51.648610 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jul 2 08:01:51.648618 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jul 2 08:01:51.648624 
kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jul 2 08:01:51.648631 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jul 2 08:01:51.648636 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jul 2 08:01:51.648641 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jul 2 08:01:51.648646 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jul 2 08:01:51.648652 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jul 2 08:01:51.648657 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jul 2 08:01:51.648662 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jul 2 08:01:51.648667 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jul 2 08:01:51.648673 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jul 2 08:01:51.648679 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jul 2 08:01:51.648684 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jul 2 08:01:51.648689 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jul 2 08:01:51.648694 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jul 2 08:01:51.648699 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jul 2 08:01:51.648705 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jul 2 08:01:51.648710 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jul 2 08:01:51.648715 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jul 2 08:01:51.648720 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jul 2 08:01:51.648725 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 08:01:51.648732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 2 08:01:51.648737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 2 08:01:51.648743 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jul 2 08:01:51.648748 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jul 2 08:01:51.648754 kernel: Zone ranges: Jul 2 08:01:51.648759 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 08:01:51.648764 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 2 08:01:51.648769 kernel: Normal empty Jul 2 08:01:51.648775 kernel: Movable zone start for each node Jul 2 08:01:51.648781 kernel: Early memory node ranges Jul 2 08:01:51.648786 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 2 08:01:51.648792 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 2 08:01:51.648797 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 2 08:01:51.648803 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 2 08:01:51.648810 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 08:01:51.648816 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 2 08:01:51.648821 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 2 08:01:51.648826 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 2 08:01:51.648833 kernel: system APIC only can use physical flat Jul 2 08:01:51.648838 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 2 08:01:51.648843 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 2 08:01:51.648849 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 2 08:01:51.648854 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 2 08:01:51.648859 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 2 08:01:51.648864 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 2 08:01:51.648869 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 2 08:01:51.648875 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 2 08:01:51.648880 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 2 08:01:51.648886 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x09] high edge lint[0x1]) Jul 2 08:01:51.648891 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 2 08:01:51.648896 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 2 08:01:51.648902 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 2 08:01:51.648907 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 2 08:01:51.648912 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 2 08:01:51.648918 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 2 08:01:51.648923 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 2 08:01:51.648928 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 2 08:01:51.648934 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 2 08:01:51.648939 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 2 08:01:51.648944 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 2 08:01:51.648950 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 2 08:01:51.648955 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 2 08:01:51.648960 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 2 08:01:51.648966 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 2 08:01:51.648971 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 2 08:01:51.648976 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 2 08:01:51.648982 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 2 08:01:51.648987 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 2 08:01:51.648993 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 2 08:01:51.648998 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 2 08:01:51.649003 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 2 08:01:51.649008 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 2 08:01:51.649013 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 2 08:01:51.649018 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 2 08:01:51.649024 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 2 08:01:51.649030 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 2 08:01:51.649035 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 2 08:01:51.649040 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 2 08:01:51.649045 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 2 08:01:51.649051 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 2 08:01:51.649056 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 2 08:01:51.649061 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 2 08:01:51.649067 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 2 08:01:51.649072 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 2 08:01:51.649077 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 2 08:01:51.649083 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 2 08:01:51.649089 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 2 08:01:51.649094 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 2 08:01:51.649099 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 2 08:01:51.649104 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jul 2 08:01:51.649109 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 2 08:01:51.649115 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge 
lint[0x1]) Jul 2 08:01:51.649120 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 2 08:01:51.649125 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 2 08:01:51.649131 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 2 08:01:51.649137 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 2 08:01:51.649142 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 2 08:01:51.649147 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 2 08:01:51.649152 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 2 08:01:51.649158 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 2 08:01:51.649163 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 2 08:01:51.649168 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 2 08:01:51.649173 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 2 08:01:51.649179 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 2 08:01:51.649185 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 2 08:01:51.649190 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 2 08:01:51.649195 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 2 08:01:51.649200 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 2 08:01:51.649206 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 2 08:01:51.649211 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 2 08:01:51.649216 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 2 08:01:51.649221 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 2 08:01:51.649228 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 2 08:01:51.649233 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 2 08:01:51.649239 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 2 08:01:51.649249 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 2 08:01:51.649254 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 2 08:01:51.649260 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 2 08:01:51.649265 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 2 08:01:51.649270 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 2 08:01:51.649275 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 2 08:01:51.649285 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 2 08:01:51.649293 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 2 08:01:51.649299 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 2 08:01:51.649306 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 2 08:01:51.649312 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 2 08:01:51.649317 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 2 08:01:51.649322 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 2 08:01:51.649328 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 2 08:01:51.649337 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 2 08:01:51.649345 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 2 08:01:51.649354 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 2 08:01:51.649359 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 2 08:01:51.649366 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 2 08:01:51.649373 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 2 
08:01:51.649382 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 2 08:01:51.649390 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 2 08:01:51.649399 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 2 08:01:51.649405 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 2 08:01:51.649412 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 2 08:01:51.649422 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 2 08:01:51.649429 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 2 08:01:51.649437 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 2 08:01:51.649443 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 2 08:01:51.649448 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 2 08:01:51.649453 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 2 08:01:51.649461 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 2 08:01:51.649469 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 2 08:01:51.649475 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 2 08:01:51.649485 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 2 08:01:51.649491 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 2 08:01:51.649496 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 2 08:01:51.649501 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 2 08:01:51.649507 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 2 08:01:51.649514 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 2 08:01:51.649521 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 2 08:01:51.649535 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 2 08:01:51.649543 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 2 08:01:51.649553 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 2 08:01:51.649558 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 2 08:01:51.649563 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 2 08:01:51.649569 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 2 08:01:51.649574 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 2 08:01:51.649579 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 2 08:01:51.649585 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 2 08:01:51.649590 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 2 08:01:51.649595 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 2 08:01:51.649601 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 2 08:01:51.649607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 2 08:01:51.649612 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 08:01:51.649617 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 2 08:01:51.649623 kernel: TSC deadline timer available Jul 2 08:01:51.649628 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 2 08:01:51.649633 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 2 08:01:51.649638 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 2 08:01:51.649644 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 08:01:51.649650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Jul 2 08:01:51.649656 kernel: percpu: Embedded 56 
pages/cpu s188696 r8192 d32488 u262144 Jul 2 08:01:51.649661 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Jul 2 08:01:51.649667 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 2 08:01:51.649675 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 2 08:01:51.649680 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 2 08:01:51.649685 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 2 08:01:51.649690 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 2 08:01:51.649695 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 2 08:01:51.649702 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 2 08:01:51.649715 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 2 08:01:51.649721 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 2 08:01:51.649727 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 2 08:01:51.649732 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 2 08:01:51.649738 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 2 08:01:51.649743 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 2 08:01:51.649749 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 2 08:01:51.649754 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 2 08:01:51.649761 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 2 08:01:51.649766 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jul 2 08:01:51.649772 kernel: Policy zone: DMA32 Jul 2 08:01:51.649778 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 08:01:51.649784 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 08:01:51.649790 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 2 08:01:51.649795 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 2 08:01:51.649801 kernel: printk: log_buf_len min size: 262144 bytes Jul 2 08:01:51.649808 kernel: printk: log_buf_len: 1048576 bytes Jul 2 08:01:51.649813 kernel: printk: early log buf free: 239728(91%) Jul 2 08:01:51.649821 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 08:01:51.649831 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 08:01:51.649837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 08:01:51.649842 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 155976K reserved, 0K cma-reserved) Jul 2 08:01:51.649848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 2 08:01:51.649854 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 08:01:51.649861 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 08:01:51.649868 kernel: rcu: Hierarchical RCU implementation. Jul 2 08:01:51.649874 kernel: rcu: RCU event tracing is enabled. Jul 2 08:01:51.649879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 2 08:01:51.649885 kernel: Rude variant of Tasks RCU enabled. Jul 2 08:01:51.649891 kernel: Tracing variant of Tasks RCU enabled. 
Jul 2 08:01:51.649898 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:01:51.649906 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 2 08:01:51.649911 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 2 08:01:51.649917 kernel: random: crng init done
Jul 2 08:01:51.649923 kernel: Console: colour VGA+ 80x25
Jul 2 08:01:51.649928 kernel: printk: console [tty0] enabled
Jul 2 08:01:51.649934 kernel: printk: console [ttyS0] enabled
Jul 2 08:01:51.649939 kernel: ACPI: Core revision 20210730
Jul 2 08:01:51.649949 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 2 08:01:51.649957 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 08:01:51.649968 kernel: x2apic enabled
Jul 2 08:01:51.649974 kernel: Switched APIC routing to physical x2apic.
Jul 2 08:01:51.649980 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 08:01:51.649986 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 2 08:01:51.649991 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 2 08:01:51.649998 kernel: Disabled fast string operations
Jul 2 08:01:51.650007 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 08:01:51.650013 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 08:01:51.650019 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 08:01:51.650029 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 2 08:01:51.650039 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 2 08:01:51.650045 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 2 08:01:51.650051 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jul 2 08:01:51.650057 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 08:01:51.650066 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jul 2 08:01:51.650072 kernel: RETBleed: Mitigation: Enhanced IBRS
Jul 2 08:01:51.650078 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 08:01:51.650086 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 08:01:51.650094 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 08:01:51.650102 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 2 08:01:51.650108 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 08:01:51.650115 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 08:01:51.650124 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 08:01:51.650133 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 08:01:51.650140 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 08:01:51.650147 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 2 08:01:51.650163 kernel: Freeing SMP alternatives memory: 32K
Jul 2 08:01:51.650169 kernel: pid_max: default: 131072 minimum: 1024
Jul 2 08:01:51.650175 kernel: LSM: Security Framework initializing
Jul 2 08:01:51.650180 kernel: SELinux: Initializing.
Jul 2 08:01:51.650186 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 08:01:51.650192 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 08:01:51.653575 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 2 08:01:51.653583 kernel: Performance Events: Skylake events, core PMU driver. Jul 2 08:01:51.653589 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 2 08:01:51.653598 kernel: core: CPUID marked event: 'instructions' unavailable Jul 2 08:01:51.653604 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 2 08:01:51.653610 kernel: core: CPUID marked event: 'cache references' unavailable Jul 2 08:01:51.653615 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 2 08:01:51.653621 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 2 08:01:51.653627 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 2 08:01:51.653633 kernel: ... version: 1 Jul 2 08:01:51.653639 kernel: ... bit width: 48 Jul 2 08:01:51.653645 kernel: ... generic registers: 4 Jul 2 08:01:51.653651 kernel: ... value mask: 0000ffffffffffff Jul 2 08:01:51.653657 kernel: ... max period: 000000007fffffff Jul 2 08:01:51.653663 kernel: ... fixed-purpose events: 0 Jul 2 08:01:51.653669 kernel: ... event mask: 000000000000000f Jul 2 08:01:51.653674 kernel: signal: max sigframe size: 1776 Jul 2 08:01:51.653680 kernel: rcu: Hierarchical SRCU implementation. Jul 2 08:01:51.653686 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 08:01:51.653692 kernel: smp: Bringing up secondary CPUs ... Jul 2 08:01:51.653698 kernel: x86: Booting SMP configuration: Jul 2 08:01:51.653704 kernel: .... node #0, CPUs: #1 Jul 2 08:01:51.653710 kernel: Disabled fast string operations Jul 2 08:01:51.653716 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 2 08:01:51.653722 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 2 08:01:51.653727 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 08:01:51.653733 kernel: smpboot: Max logical packages: 128 Jul 2 08:01:51.653739 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 2 08:01:51.653744 kernel: devtmpfs: initialized Jul 2 08:01:51.653750 kernel: x86/mm: Memory block size: 128MB Jul 2 08:01:51.653756 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 2 08:01:51.653763 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 08:01:51.653769 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 2 08:01:51.653774 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 08:01:51.653780 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 08:01:51.653786 kernel: audit: initializing netlink subsys (disabled) Jul 2 08:01:51.653793 kernel: audit: type=2000 audit(1719907310.061:1): state=initialized audit_enabled=0 res=1 Jul 2 08:01:51.653799 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 08:01:51.653804 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 08:01:51.653810 kernel: cpuidle: using governor menu Jul 2 08:01:51.653817 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 2 08:01:51.653823 kernel: ACPI: bus type PCI registered Jul 2 08:01:51.653829 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 08:01:51.653835 kernel: dca service started, version 1.12.1 Jul 2 
08:01:51.653841 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 2 08:01:51.653847 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Jul 2 08:01:51.653852 kernel: PCI: Using configuration type 1 for base access Jul 2 08:01:51.653858 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 08:01:51.653865 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 08:01:51.653871 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 08:01:51.653876 kernel: ACPI: Added _OSI(Module Device) Jul 2 08:01:51.653882 kernel: ACPI: Added _OSI(Processor Device) Jul 2 08:01:51.653888 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 08:01:51.653893 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 08:01:51.653899 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 08:01:51.653908 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 08:01:51.653914 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 08:01:51.653920 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 08:01:51.653930 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 2 08:01:51.653938 kernel: ACPI: Interpreter enabled Jul 2 08:01:51.653944 kernel: ACPI: PM: (supports S0 S1 S5) Jul 2 08:01:51.653951 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 08:01:51.653960 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 08:01:51.653969 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 2 08:01:51.653979 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 2 08:01:51.654082 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 08:01:51.654151 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 2 08:01:51.654218 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 2 08:01:51.654227 kernel: PCI host bridge to bus 0000:00 Jul 2 08:01:51.654283 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 08:01:51.654327 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Jul 2 08:01:51.654368 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Jul 2 08:01:51.654415 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Jul 2 08:01:51.654466 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Jul 2 08:01:51.654521 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 2 08:01:51.654590 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 08:01:51.654646 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 2 08:01:51.654707 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 2 08:01:51.654775 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 2 08:01:51.654835 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 2 08:01:51.654887 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 2 08:01:51.654940 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 2 08:01:51.654988 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 2 08:01:51.655042 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 08:01:51.655090 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 08:01:51.655136 kernel: pci 
0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 08:01:51.658750 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 08:01:51.658821 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 2 08:01:51.658876 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 2 08:01:51.658929 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 2 08:01:51.658985 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 2 08:01:51.659036 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 2 08:01:51.659090 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 2 08:01:51.659147 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 2 08:01:51.659208 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 2 08:01:51.659271 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 2 08:01:51.659347 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 2 08:01:51.659429 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 2 08:01:51.659499 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 08:01:51.659577 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 2 08:01:51.659638 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.659690 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.659752 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.659814 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.659873 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.659948 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660018 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660084 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660151 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660223 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660289 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660344 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660398 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660450 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660503 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660567 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660624 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660683 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660742 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660797 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.660853 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.660904 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663012 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.663101 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663185 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.663251 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663320 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 
2 08:01:51.663369 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663421 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.663472 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663577 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.663651 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663729 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.663801 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663873 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.663924 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.663978 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664026 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664080 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664127 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664189 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664237 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664296 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664344 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664394 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664442 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664498 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664559 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664619 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664671 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664730 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664802 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.664878 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.664958 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.665034 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.665102 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.665162 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.665210 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.665260 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.665312 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.665369 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.665424 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.665474 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 2 08:01:51.665522 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.665583 kernel: pci_bus 0000:01: extended config space not accessible Jul 2 08:01:51.665634 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 08:01:51.665682 kernel: pci_bus 0000:02: extended config space not accessible Jul 2 08:01:51.665694 kernel: acpiphp: Slot [32] registered Jul 2 08:01:51.665700 kernel: acpiphp: Slot [33] registered Jul 2 08:01:51.665920 kernel: acpiphp: Slot [34] registered Jul 2 08:01:51.665928 kernel: acpiphp: Slot [35] 
registered Jul 2 08:01:51.665934 kernel: acpiphp: Slot [36] registered Jul 2 08:01:51.665940 kernel: acpiphp: Slot [37] registered Jul 2 08:01:51.665945 kernel: acpiphp: Slot [38] registered Jul 2 08:01:51.665951 kernel: acpiphp: Slot [39] registered Jul 2 08:01:51.665956 kernel: acpiphp: Slot [40] registered Jul 2 08:01:51.665964 kernel: acpiphp: Slot [41] registered Jul 2 08:01:51.665970 kernel: acpiphp: Slot [42] registered Jul 2 08:01:51.665975 kernel: acpiphp: Slot [43] registered Jul 2 08:01:51.665981 kernel: acpiphp: Slot [44] registered Jul 2 08:01:51.665986 kernel: acpiphp: Slot [45] registered Jul 2 08:01:51.665992 kernel: acpiphp: Slot [46] registered Jul 2 08:01:51.665997 kernel: acpiphp: Slot [47] registered Jul 2 08:01:51.666003 kernel: acpiphp: Slot [48] registered Jul 2 08:01:51.666011 kernel: acpiphp: Slot [49] registered Jul 2 08:01:51.666021 kernel: acpiphp: Slot [50] registered Jul 2 08:01:51.666028 kernel: acpiphp: Slot [51] registered Jul 2 08:01:51.666034 kernel: acpiphp: Slot [52] registered Jul 2 08:01:51.666040 kernel: acpiphp: Slot [53] registered Jul 2 08:01:51.666045 kernel: acpiphp: Slot [54] registered Jul 2 08:01:51.666051 kernel: acpiphp: Slot [55] registered Jul 2 08:01:51.666057 kernel: acpiphp: Slot [56] registered Jul 2 08:01:51.666063 kernel: acpiphp: Slot [57] registered Jul 2 08:01:51.666068 kernel: acpiphp: Slot [58] registered Jul 2 08:01:51.666074 kernel: acpiphp: Slot [59] registered Jul 2 08:01:51.666081 kernel: acpiphp: Slot [60] registered Jul 2 08:01:51.666086 kernel: acpiphp: Slot [61] registered Jul 2 08:01:51.666092 kernel: acpiphp: Slot [62] registered Jul 2 08:01:51.666097 kernel: acpiphp: Slot [63] registered Jul 2 08:01:51.666156 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 2 08:01:51.666206 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 2 08:01:51.666252 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 2 08:01:51.666299 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 2 08:01:51.666357 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 2 08:01:51.666417 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Jul 2 08:01:51.666464 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Jul 2 08:01:51.666512 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Jul 2 08:01:51.666628 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Jul 2 08:01:51.666675 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 2 08:01:51.666721 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 2 08:01:51.666785 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 2 08:01:51.667066 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 2 08:01:51.667125 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 2 08:01:51.667192 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 2 08:01:51.667243 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 2 08:01:51.667291 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 2 08:01:51.667339 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 2 08:01:51.667387 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 2 08:01:51.667438 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 2 08:01:51.667484 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 2 08:01:51.667540 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 2 08:01:51.667588 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 2 08:01:51.667656 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 2 08:01:51.667988 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 2 08:01:51.668045 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 2 08:01:51.668104 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 2 08:01:51.668162 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 2 08:01:51.668221 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 2 08:01:51.668272 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 2 08:01:51.668318 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 2 08:01:51.668365 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 2 08:01:51.668414 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 2 08:01:51.668471 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 2 08:01:51.668520 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 2 08:01:51.668653 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 2 08:01:51.668700 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 2 08:01:51.668746 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 2 08:01:51.668818 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 2 08:01:51.671244 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 2 08:01:51.671299 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 2 08:01:51.671350 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 2 08:01:51.671398 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 2 08:01:51.671446 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 2 08:01:51.671499 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 2 08:01:51.671561 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 2 08:01:51.671614 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 2 08:01:51.671661 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 2 08:01:51.671709 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 2 08:01:51.671757 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 2 08:01:51.671805 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 2 08:01:51.671853 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 2 08:01:51.671902 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 2 08:01:51.671952 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 2 08:01:51.672000 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 2 08:01:51.672046 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 2 08:01:51.672093 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 2 08:01:51.672140 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 2 08:01:51.672186 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 2 08:01:51.672231 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 2 08:01:51.672279 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 2 08:01:51.672327 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 2 08:01:51.672373 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 2 08:01:51.672420 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 2 08:01:51.672467 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 2 08:01:51.672514 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 2 08:01:51.672570 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 2 08:01:51.672623 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 2 08:01:51.672670 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 2 08:01:51.672718 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 2 08:01:51.672765 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 2 08:01:51.672811 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 2 08:01:51.672856 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 2 08:01:51.672904 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 2 08:01:51.672950 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 2 08:01:51.672995 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 2 08:01:51.673042 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 2 08:01:51.673089 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 2 08:01:51.673135 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 2 08:01:51.673192 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 2 08:01:51.673240 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 2 08:01:51.673287 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 2 08:01:51.673333 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 2 08:01:51.673381 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 2 08:01:51.673427 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 2 08:01:51.673629 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 2 08:01:51.673684 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 2 08:01:51.673733 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 2 08:01:51.673780 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 2 08:01:51.673826 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 2 08:01:51.673871 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 2 08:01:51.673926 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 2 08:01:51.673983 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 2 08:01:51.674030 kernel: pci 0000:00:17.3: bridge window [mem 
0xe6e00000-0xe6efffff 64bit pref] Jul 2 08:01:51.674077 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 2 08:01:51.674123 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 2 08:01:51.674168 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 2 08:01:51.674219 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 2 08:01:51.674263 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 2 08:01:51.674309 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 2 08:01:51.674358 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 2 08:01:51.674404 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 2 08:01:51.674449 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 2 08:01:51.674495 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 2 08:01:51.674548 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 2 08:01:51.674594 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 2 08:01:51.674641 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 2 08:01:51.674687 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 2 08:01:51.674735 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 2 08:01:51.674783 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 2 08:01:51.674830 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 2 08:01:51.674876 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 2 08:01:51.674922 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 2 08:01:51.674972 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 2 08:01:51.675019 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 2 08:01:51.675064 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 2 08:01:51.675113 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 2 08:01:51.675159 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 2 08:01:51.675205 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 2 08:01:51.675250 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 2 08:01:51.675296 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 2 08:01:51.675342 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 2 08:01:51.675387 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 2 08:01:51.675433 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 2 08:01:51.675481 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 2 08:01:51.675526 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 2 08:01:51.675582 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 2 08:01:51.675627 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 2 08:01:51.675673 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 2 08:01:51.675719 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 2 08:01:51.675764 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 2 08:01:51.675809 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 2 08:01:51.675820 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 2 08:01:51.675826 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 2 08:01:51.675832 kernel: ACPI: PCI: Interrupt 
link LNKB disabled Jul 2 08:01:51.675837 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 08:01:51.675843 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 2 08:01:51.675849 kernel: iommu: Default domain type: Translated Jul 2 08:01:51.675855 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 08:01:51.675902 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 2 08:01:51.675958 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 08:01:51.676013 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 2 08:01:51.676022 kernel: vgaarb: loaded Jul 2 08:01:51.676028 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 08:01:51.676034 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 08:01:51.676040 kernel: PTP clock support registered Jul 2 08:01:51.676046 kernel: PCI: Using ACPI for IRQ routing Jul 2 08:01:51.676052 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 08:01:51.676058 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 2 08:01:51.676064 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 2 08:01:51.676071 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 2 08:01:51.676077 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 2 08:01:51.676083 kernel: clocksource: Switched to clocksource tsc-early Jul 2 08:01:51.676088 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 08:01:51.676094 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 08:01:51.676100 kernel: pnp: PnP ACPI init Jul 2 08:01:51.676149 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 2 08:01:51.676208 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 2 08:01:51.676253 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 2 08:01:51.676298 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 2 08:01:51.676345 kernel: pnp 00:06: [dma 2] Jul 2 08:01:51.676392 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 2 08:01:51.676436 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 2 08:01:51.676477 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 2 08:01:51.676487 kernel: pnp: PnP ACPI: found 8 devices Jul 2 08:01:51.676493 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 08:01:51.676499 kernel: NET: Registered PF_INET protocol family Jul 2 08:01:51.676505 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 08:01:51.676511 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 08:01:51.676517 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 08:01:51.676522 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 08:01:51.676536 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 2 08:01:51.676544 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 08:01:51.676552 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 08:01:51.676558 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 08:01:51.676564 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 08:01:51.676570 kernel: NET: Registered PF_XDP protocol family Jul 2 08:01:51.676623 kernel: pci 0000:00:15.0: bridge window [mem 
0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 2 08:01:51.676671 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 2 08:01:51.676719 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 2 08:01:51.676768 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 2 08:01:51.676815 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 2 08:01:51.676861 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 2 08:01:51.676908 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 2 08:01:51.676955 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 2 08:01:51.677001 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 2 08:01:51.677050 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 2 08:01:51.677097 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 2 08:01:51.677143 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 2 08:01:51.677201 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 2 08:01:51.677250 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 2 08:01:51.677297 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 2 08:01:51.677346 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 2 08:01:51.677392 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 2 08:01:51.677438 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 2 08:01:51.677484 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 2 08:01:51.677536 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 2 08:01:51.677585 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 2 08:01:51.677634 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 2 08:01:51.677681 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 2 08:01:51.677727 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 2 08:01:51.677773 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 2 08:01:51.677820 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.677866 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.677911 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.677974 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.678022 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678068 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.678123 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678176 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.678225 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678272 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 
0x1000] Jul 2 08:01:51.678318 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678373 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.678434 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678482 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.678659 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678709 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.678755 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678801 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.678847 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.678896 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.680359 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.680415 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.680469 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.680871 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.680925 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.681270 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.681333 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.681387 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.681434 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.681499 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.681565 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.681616 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.681662 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.681707 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.682038 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.682100 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.682160 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.682219 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.682267 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.682315 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.682362 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.682409 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.682455 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.682503 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.682623 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.682675 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.682722 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.682767 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.684049 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.684102 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.684407 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.684461 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.684511 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.684599 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.684647 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.684693 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.684739 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.684784 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.684829 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.684878 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.684930 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.684976 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685025 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685071 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685117 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685163 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685210 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685256 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685303 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685349 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685397 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685454 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685504 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685559 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685606 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685652 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685698 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685744 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685790 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685837 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685884 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.685931 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.685977 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 2 08:01:51.686022 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 2 08:01:51.686070 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 2 08:01:51.686117 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 2 08:01:51.686168 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 2 08:01:51.686215 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 2 08:01:51.686260 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 2 08:01:51.686312 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 2 08:01:51.686360 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] 
Jul 2 08:01:51.686406 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 2 08:01:51.686453 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 2 08:01:51.686499 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 2 08:01:51.686560 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 2 08:01:51.686609 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 2 08:01:51.686656 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 2 08:01:51.686702 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 2 08:01:51.686749 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 2 08:01:51.686799 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 2 08:01:51.686846 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 2 08:01:51.686893 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 2 08:01:51.686939 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 2 08:01:51.686986 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 2 08:01:51.687031 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 2 08:01:51.687080 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 2 08:01:51.687126 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 2 08:01:51.687187 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 2 08:01:51.687235 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 2 08:01:51.687281 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 2 08:01:51.687327 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 2 08:01:51.687373 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 2 08:01:51.687419 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 2 08:01:51.687465 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 2 08:01:51.687513 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 2 08:01:51.687566 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 2 08:01:51.687613 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 2 08:01:51.687664 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 2 08:01:51.687711 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 2 08:01:51.687758 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 2 08:01:51.687804 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 2 08:01:51.687850 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 2 08:01:51.687898 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 2 08:01:51.687946 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 2 08:01:51.688004 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 2 08:01:51.688052 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 2 08:01:51.688100 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 2 08:01:51.688146 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 2 08:01:51.688192 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 2 08:01:51.688243 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 2 08:01:51.688295 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 2 08:01:51.688341 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] 
Jul 2 08:01:51.688390 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 2 08:01:51.688436 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 2 08:01:51.688482 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 2 08:01:51.688557 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 2 08:01:51.688606 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 2 08:01:51.688652 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 2 08:01:51.688697 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 2 08:01:51.688742 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 2 08:01:51.688787 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 2 08:01:51.688852 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 2 08:01:51.689247 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 2 08:01:51.689307 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 2 08:01:51.689359 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 2 08:01:51.689777 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 2 08:01:51.689832 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 2 08:01:51.689883 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 2 08:01:51.690049 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 2 08:01:51.690102 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 2 08:01:51.690150 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 2 08:01:51.690220 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 2 08:01:51.690407 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 2 08:01:51.690462 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 2 08:01:51.690511 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 2 08:01:51.690567 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 2 08:01:51.690614 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 2 08:01:51.690665 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 2 08:01:51.691010 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 2 08:01:51.691077 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 2 08:01:51.691143 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 2 08:01:51.691196 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 2 08:01:51.691244 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 2 08:01:51.691291 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 2 08:01:51.691337 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 2 08:01:51.691382 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 2 08:01:51.691434 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 2 08:01:51.691486 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 2 08:01:51.691568 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 2 08:01:51.691617 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 2 08:01:51.691666 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 2 08:01:51.691712 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 2 08:01:51.692045 kernel: pci 0000:00:18.0: PCI bridge to [bus 
1b] Jul 2 08:01:51.692106 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 2 08:01:51.692157 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 2 08:01:51.692205 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 2 08:01:51.692264 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 2 08:01:51.692313 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 2 08:01:51.692359 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 2 08:01:51.692406 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 2 08:01:51.692455 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 2 08:01:51.692503 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 2 08:01:51.692562 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 2 08:01:51.692610 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 2 08:01:51.692657 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 2 08:01:51.692703 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 2 08:01:51.692757 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 2 08:01:51.692807 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 2 08:01:51.692853 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 2 08:01:51.692904 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 2 08:01:51.692950 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 2 08:01:51.692995 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 2 08:01:51.693057 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 2 08:01:51.693113 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 2 08:01:51.693165 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 2 08:01:51.693213 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 2 08:01:51.699296 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 2 08:01:51.699361 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 2 08:01:51.699409 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 2 08:01:51.699455 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Jul 2 08:01:51.699495 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Jul 2 08:01:51.699545 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Jul 2 08:01:51.699586 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Jul 2 08:01:51.699628 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Jul 2 08:01:51.699668 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Jul 2 08:01:51.699710 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Jul 2 08:01:51.699762 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 2 08:01:51.699805 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 2 08:01:51.699847 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 2 08:01:51.699890 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 2 08:01:51.699932 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Jul 2 08:01:51.699974 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Jul 2 08:01:51.700016 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Jul 
2 08:01:51.700060 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Jul 2 08:01:51.700102 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Jul 2 08:01:51.700159 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Jul 2 08:01:51.700204 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Jul 2 08:01:51.700251 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 2 08:01:51.700294 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 2 08:01:51.700337 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 2 08:01:51.700387 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 2 08:01:51.700441 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 2 08:01:51.700499 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 2 08:01:51.700556 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 2 08:01:51.703943 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 2 08:01:51.703996 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 2 08:01:51.704047 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 2 08:01:51.704094 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 2 08:01:51.704144 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 2 08:01:51.704195 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 2 08:01:51.704248 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 2 08:01:51.704292 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 2 08:01:51.704361 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 2 08:01:51.704421 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 2 08:01:51.704470 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 2 08:01:51.704512 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 2 08:01:51.704571 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 2 08:01:51.704616 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 2 08:01:51.704660 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 2 08:01:51.704711 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 2 08:01:51.705059 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 2 08:01:51.705118 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 2 08:01:51.705176 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 2 08:01:51.705223 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 2 08:01:51.705270 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 2 08:01:51.705320 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 2 08:01:51.705366 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 2 08:01:51.705413 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 2 08:01:51.705468 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 2 08:01:51.705525 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 2 08:01:51.705645 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 2 08:01:51.705692 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 2 08:01:51.705760 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit 
pref] Jul 2 08:01:51.705926 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 2 08:01:51.705974 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 2 08:01:51.706021 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 2 08:01:51.706065 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 2 08:01:51.706106 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 2 08:01:51.706162 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 2 08:01:51.706206 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 2 08:01:51.706248 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 2 08:01:51.706294 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 2 08:01:51.706337 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 2 08:01:51.706379 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 2 08:01:51.706426 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 2 08:01:51.706470 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 2 08:01:51.706516 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 2 08:01:51.706566 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 2 08:01:51.706615 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 2 08:01:51.706658 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 2 08:01:51.706707 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 2 08:01:51.706753 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 2 08:01:51.706810 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 2 08:01:51.706855 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 2 08:01:51.706901 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 2 08:01:51.706944 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 2 08:01:51.707283 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 2 08:01:51.707341 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 2 08:01:51.707391 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 2 08:01:51.707459 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 2 08:01:51.707521 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 2 08:01:51.707600 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 2 08:01:51.707649 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 2 08:01:51.707696 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 2 08:01:51.707743 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 2 08:01:51.707920 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 2 08:01:51.707991 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 2 08:01:51.708038 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 2 08:01:51.708372 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 2 08:01:51.708426 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 2 08:01:51.708481 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 2 08:01:51.708526 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 2 08:01:51.708588 kernel: pci 0000:00:00.0: 
Limiting direct PCI/PCI transfers Jul 2 08:01:51.708597 kernel: PCI: CLS 32 bytes, default 64 Jul 2 08:01:51.708604 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 08:01:51.708611 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 2 08:01:51.708617 kernel: clocksource: Switched to clocksource tsc Jul 2 08:01:51.708625 kernel: Initialise system trusted keyrings Jul 2 08:01:51.708631 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 08:01:51.708638 kernel: Key type asymmetric registered Jul 2 08:01:51.708644 kernel: Asymmetric key parser 'x509' registered Jul 2 08:01:51.708649 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 08:01:51.708656 kernel: io scheduler mq-deadline registered Jul 2 08:01:51.708662 kernel: io scheduler kyber registered Jul 2 08:01:51.708668 kernel: io scheduler bfq registered Jul 2 08:01:51.708723 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 2 08:01:51.708775 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.708826 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 2 08:01:51.709013 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.709067 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 2 08:01:51.709116 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.709456 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 2 08:01:51.709516 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.709600 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 2 08:01:51.709667 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.709718 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 2 08:01:51.709975 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.710051 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 2 08:01:51.710111 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.710167 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 2 08:01:51.710239 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.710525 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 2 08:01:51.710657 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.710715 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 2 08:01:51.710768 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.710826 kernel: pcieport 0000:00:16.2: PME: Signaling with 
IRQ 34 Jul 2 08:01:51.710898 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.711825 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 2 08:01:51.711878 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.711928 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 2 08:01:51.712300 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.712364 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 2 08:01:51.712418 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.712469 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 2 08:01:51.712518 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.712606 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 2 08:01:51.712660 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.712710 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 2 08:01:51.712756 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.712808 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 2 08:01:51.713167 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713220 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 2 08:01:51.713273 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713326 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 2 08:01:51.713374 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713423 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 2 08:01:51.713472 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713520 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 2 08:01:51.713584 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713637 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 2 08:01:51.713691 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713739 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 2 08:01:51.713793 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713858 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 2 08:01:51.713907 kernel: 
pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.713954 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 2 08:01:51.714002 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.714049 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 2 08:01:51.714097 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.714151 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 2 08:01:51.714203 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.714254 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 2 08:01:51.714302 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.714351 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 2 08:01:51.714398 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.714448 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 2 08:01:51.714499 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.714575 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 2 08:01:51.714628 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 2 08:01:51.714638 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 08:01:51.714645 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 08:01:51.714657 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 08:01:51.714664 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 2 08:01:51.714670 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 08:01:51.714676 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 08:01:51.714727 kernel: rtc_cmos 00:01: registered as rtc0 Jul 2 08:01:51.714773 kernel: rtc_cmos 00:01: setting system clock to 2024-07-02T08:01:51 UTC (1719907311) Jul 2 08:01:51.714781 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 08:01:51.714822 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 2 08:01:51.714832 kernel: fail to initialize ptp_kvm Jul 2 08:01:51.714838 kernel: intel_pstate: CPU model not supported Jul 2 08:01:51.714845 kernel: NET: Registered PF_INET6 protocol family Jul 2 08:01:51.714851 kernel: Segment Routing with IPv6 Jul 2 08:01:51.714857 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 08:01:51.714863 kernel: NET: Registered PF_PACKET protocol family Jul 2 08:01:51.714869 kernel: Key type dns_resolver registered Jul 2 08:01:51.714875 kernel: IPI shorthand broadcast: enabled Jul 2 08:01:51.714882 kernel: sched_clock: Marking stable (901136960, 223707234)->(1190857407, -66013213) Jul 2 08:01:51.714889 kernel: registered taskstats version 1 Jul 2 08:01:51.714895 kernel: Loading compiled-in X.509 certificates Jul 2 
08:01:51.714901 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 08:01:51.714907 kernel: Key type .fscrypt registered Jul 2 08:01:51.714913 kernel: Key type fscrypt-provisioning registered Jul 2 08:01:51.714920 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 08:01:51.714926 kernel: ima: Allocated hash algorithm: sha1 Jul 2 08:01:51.714933 kernel: ima: No architecture policies found Jul 2 08:01:51.714940 kernel: clk: Disabling unused clocks Jul 2 08:01:51.714946 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 08:01:51.714954 kernel: Write protecting the kernel read-only data: 28672k Jul 2 08:01:51.714963 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 08:01:51.714973 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 08:01:51.714982 kernel: Run /init as init process Jul 2 08:01:51.714992 kernel: with arguments: Jul 2 08:01:51.714998 kernel: /init Jul 2 08:01:51.715004 kernel: with environment: Jul 2 08:01:51.715013 kernel: HOME=/ Jul 2 08:01:51.715021 kernel: TERM=linux Jul 2 08:01:51.715027 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 08:01:51.715035 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 08:01:51.715043 systemd[1]: Detected virtualization vmware. Jul 2 08:01:51.715050 systemd[1]: Detected architecture x86-64. Jul 2 08:01:51.715056 systemd[1]: Running in initrd. Jul 2 08:01:51.715062 systemd[1]: No hostname configured, using default hostname. Jul 2 08:01:51.715068 systemd[1]: Hostname set to . Jul 2 08:01:51.715079 systemd[1]: Initializing machine ID from random generator. Jul 2 08:01:51.715087 systemd[1]: Queued start job for default target initrd.target. Jul 2 08:01:51.715094 systemd[1]: Started systemd-ask-password-console.path. Jul 2 08:01:51.715100 systemd[1]: Reached target cryptsetup.target. Jul 2 08:01:51.715106 systemd[1]: Reached target paths.target. Jul 2 08:01:51.715112 systemd[1]: Reached target slices.target. Jul 2 08:01:51.715118 systemd[1]: Reached target swap.target. Jul 2 08:01:51.715125 systemd[1]: Reached target timers.target. Jul 2 08:01:51.715133 systemd[1]: Listening on iscsid.socket. Jul 2 08:01:51.715139 systemd[1]: Listening on iscsiuio.socket. Jul 2 08:01:51.715148 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 08:01:51.715159 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 08:01:51.715168 systemd[1]: Listening on systemd-journald.socket. Jul 2 08:01:51.715175 systemd[1]: Listening on systemd-networkd.socket. Jul 2 08:01:51.715183 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 08:01:51.715191 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 08:01:51.715197 systemd[1]: Reached target sockets.target. Jul 2 08:01:51.715207 systemd[1]: Starting kmod-static-nodes.service... Jul 2 08:01:51.715217 systemd[1]: Finished network-cleanup.service. Jul 2 08:01:51.715228 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 08:01:51.715238 systemd[1]: Starting systemd-journald.service... Jul 2 08:01:51.715244 systemd[1]: Starting systemd-modules-load.service... Jul 2 08:01:51.715250 systemd[1]: Starting systemd-resolved.service... 
Jul 2 08:01:51.715257 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 08:01:51.715265 systemd[1]: Finished kmod-static-nodes.service. Jul 2 08:01:51.715272 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 08:01:51.715279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 08:01:51.715287 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 08:01:51.715296 kernel: audit: type=1130 audit(1719907311.646:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.715306 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 08:01:51.715317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 08:01:51.715325 kernel: audit: type=1130 audit(1719907311.656:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.715331 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 08:01:51.715339 systemd[1]: Starting dracut-cmdline.service... Jul 2 08:01:51.715345 kernel: audit: type=1130 audit(1719907311.668:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.715351 systemd[1]: Started systemd-resolved.service. Jul 2 08:01:51.715360 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 08:01:51.715371 kernel: audit: type=1130 audit(1719907311.699:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.715381 systemd[1]: Reached target nss-lookup.target. Jul 2 08:01:51.715394 kernel: Bridge firewalling registered Jul 2 08:01:51.715408 systemd-journald[216]: Journal started Jul 2 08:01:51.715455 systemd-journald[216]: Runtime Journal (/run/log/journal/ec947a38928e4881bd17980bf81a2bb3) is 4.8M, max 38.8M, 34.0M free. Jul 2 08:01:51.716666 systemd[1]: Started systemd-journald.service. Jul 2 08:01:51.716682 kernel: audit: type=1130 audit(1719907311.714:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:01:51.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.650233 systemd-modules-load[217]: Inserted module 'overlay' Jul 2 08:01:51.695968 systemd-resolved[218]: Positive Trust Anchors: Jul 2 08:01:51.695975 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:01:51.695995 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:01:51.697895 systemd-resolved[218]: Defaulting to hostname 'linux'. Jul 2 08:01:51.708888 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 2 08:01:51.720949 dracut-cmdline[233]: dracut-dracut-053 Jul 2 08:01:51.720949 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 2 08:01:51.720949 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 08:01:51.725546 kernel: SCSI subsystem initialized Jul 2 08:01:51.734552 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 08:01:51.736521 kernel: device-mapper: uevent: version 1.0.3 Jul 2 08:01:51.736551 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 08:01:51.739465 systemd-modules-load[217]: Inserted module 'dm_multipath' Jul 2 08:01:51.739906 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:01:51.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.740460 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:01:51.743092 kernel: audit: type=1130 audit(1719907311.738:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.746611 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:01:51.749564 kernel: audit: type=1130 audit(1719907311.745:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.756544 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 08:01:51.768547 kernel: iscsi: registered transport (tcp) Jul 2 08:01:51.783545 kernel: iscsi: registered transport (qla4xxx) Jul 2 08:01:51.783583 kernel: QLogic iSCSI HBA Driver Jul 2 08:01:51.804677 kernel: audit: type=1130 audit(1719907311.799:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:51.800786 systemd[1]: Finished dracut-cmdline.service. Jul 2 08:01:51.801612 systemd[1]: Starting dracut-pre-udev.service... Jul 2 08:01:51.840554 kernel: raid6: avx2x4 gen() 46500 MB/s Jul 2 08:01:51.857547 kernel: raid6: avx2x4 xor() 21262 MB/s Jul 2 08:01:51.874550 kernel: raid6: avx2x2 gen() 50021 MB/s Jul 2 08:01:51.891555 kernel: raid6: avx2x2 xor() 28987 MB/s Jul 2 08:01:51.908556 kernel: raid6: avx2x1 gen() 42810 MB/s Jul 2 08:01:51.925548 kernel: raid6: avx2x1 xor() 27090 MB/s Jul 2 08:01:51.942548 kernel: raid6: sse2x4 gen() 20579 MB/s Jul 2 08:01:51.959640 kernel: raid6: sse2x4 xor() 11302 MB/s Jul 2 08:01:51.976549 kernel: raid6: sse2x2 gen() 21263 MB/s Jul 2 08:01:51.993547 kernel: raid6: sse2x2 xor() 13327 MB/s Jul 2 08:01:52.010543 kernel: raid6: sse2x1 gen() 18156 MB/s Jul 2 08:01:52.027732 kernel: raid6: sse2x1 xor() 8889 MB/s Jul 2 08:01:52.027759 kernel: raid6: using algorithm avx2x2 gen() 50021 MB/s Jul 2 08:01:52.027768 kernel: raid6: .... xor() 28987 MB/s, rmw enabled Jul 2 08:01:52.028910 kernel: raid6: using avx2x2 recovery algorithm Jul 2 08:01:52.037544 kernel: xor: automatically using best checksumming function avx Jul 2 08:01:52.099552 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 08:01:52.104075 systemd[1]: Finished dracut-pre-udev.service. Jul 2 08:01:52.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:52.104908 systemd[1]: Starting systemd-udevd.service... Jul 2 08:01:52.103000 audit: BPF prog-id=7 op=LOAD Jul 2 08:01:52.103000 audit: BPF prog-id=8 op=LOAD Jul 2 08:01:52.108545 kernel: audit: type=1130 audit(1719907312.102:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:52.116266 systemd-udevd[416]: Using default interface naming scheme 'v252'. Jul 2 08:01:52.119749 systemd[1]: Started systemd-udevd.service. Jul 2 08:01:52.120287 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 08:01:52.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:52.129077 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Jul 2 08:01:52.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:52.146195 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 08:01:52.146801 systemd[1]: Starting systemd-udev-trigger.service... 
Jul 2 08:01:52.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:52.213728 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:01:52.275106 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 2 08:01:52.275144 kernel: vmw_pvscsi: using 64bit dma Jul 2 08:01:52.279135 kernel: vmw_pvscsi: max_id: 16 Jul 2 08:01:52.279171 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 2 08:01:52.289404 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Jul 2 08:01:52.289438 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 2 08:01:52.289538 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 2 08:01:52.291544 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 2 08:01:52.291568 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 2 08:01:52.291585 kernel: vmw_pvscsi: using MSI-X Jul 2 08:01:52.295540 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 2 08:01:52.305611 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 2 08:01:52.307599 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 2 08:01:52.307634 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 08:01:52.316930 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 2 08:01:52.317549 kernel: libata version 3.00 loaded. Jul 2 08:01:52.320568 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 08:01:52.320596 kernel: AES CTR mode by8 optimization enabled Jul 2 08:01:52.321540 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 2 08:01:52.326044 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 2 08:01:52.326138 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 08:01:52.326200 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 2 08:01:52.326258 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 2 08:01:52.326314 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 2 08:01:52.329543 kernel: scsi host1: ata_piix Jul 2 08:01:52.329728 kernel: scsi host2: ata_piix Jul 2 08:01:52.329931 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 2 08:01:52.329946 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 2 08:01:52.342572 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:01:52.344546 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 08:01:52.504548 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 2 08:01:52.511209 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 2 08:01:52.535991 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 2 08:01:52.536127 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 08:01:52.548115 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 08:01:52.552126 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 08:01:52.553467 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 2 08:01:52.553580 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (467) Jul 2 08:01:52.555193 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 08:01:52.555538 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 08:01:52.556234 systemd[1]: Starting disk-uuid.service... 
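The sd driver line above reports the disk as 17805312 512-byte logical blocks and gives both a decimal and a binary size. A quick check of that arithmetic:

```python
# Worked check of the sd driver's size line above:
# 17805312 blocks x 512 bytes, reported in decimal GB and binary GiB.
blocks, block_size = 17805312, 512
size_bytes = blocks * block_size

print(f"{size_bytes / 10**9:.2f} GB")   # ~9.12 GB
print(f"{size_bytes / 2**30:.2f} GiB")  # ~8.49 GiB
```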
Jul 2 08:01:52.560613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:01:52.613559 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:01:52.619549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:01:53.630191 disk-uuid[549]: The operation has completed successfully. Jul 2 08:01:53.630543 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 08:01:53.670968 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:01:53.671046 systemd[1]: Finished disk-uuid.service. Jul 2 08:01:53.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.671906 systemd[1]: Starting verity-setup.service... Jul 2 08:01:53.700544 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 08:01:53.848587 systemd[1]: Found device dev-mapper-usr.device. Jul 2 08:01:53.849304 systemd[1]: Mounting sysusr-usr.mount... Jul 2 08:01:53.849624 systemd[1]: Finished verity-setup.service. Jul 2 08:01:53.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.911552 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 08:01:53.911874 systemd[1]: Mounted sysusr-usr.mount. Jul 2 08:01:53.912458 systemd[1]: Starting afterburn-network-kargs.service... Jul 2 08:01:53.913007 systemd[1]: Starting ignition-setup.service... Jul 2 08:01:53.932783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:01:53.932820 kernel: BTRFS info (device sda6): using free space tree Jul 2 08:01:53.932829 kernel: BTRFS info (device sda6): has skinny extents Jul 2 08:01:53.939542 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 08:01:53.946902 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:01:53.963639 systemd[1]: Finished ignition-setup.service. Jul 2 08:01:53.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:53.964415 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 08:01:54.065788 systemd[1]: Finished afterburn-network-kargs.service. Jul 2 08:01:54.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.066581 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 08:01:54.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.121000 audit: BPF prog-id=9 op=LOAD Jul 2 08:01:54.122564 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 08:01:54.123453 systemd[1]: Starting systemd-networkd.service... 
Jul 2 08:01:54.137115 systemd-networkd[736]: lo: Link UP Jul 2 08:01:54.137121 systemd-networkd[736]: lo: Gained carrier Jul 2 08:01:54.137386 systemd-networkd[736]: Enumeration completed Jul 2 08:01:54.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.137575 systemd-networkd[736]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 2 08:01:54.137616 systemd[1]: Started systemd-networkd.service. Jul 2 08:01:54.137761 systemd[1]: Reached target network.target. Jul 2 08:01:54.138248 systemd[1]: Starting iscsiuio.service... Jul 2 08:01:54.141384 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 2 08:01:54.141511 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 2 08:01:54.142065 systemd-networkd[736]: ens192: Link UP Jul 2 08:01:54.142069 systemd-networkd[736]: ens192: Gained carrier Jul 2 08:01:54.144290 systemd[1]: Started iscsiuio.service. Jul 2 08:01:54.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.144888 systemd[1]: Starting iscsid.service... Jul 2 08:01:54.146881 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:01:54.146881 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 08:01:54.146881 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 08:01:54.146881 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 08:01:54.146881 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 08:01:54.147920 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 08:01:54.147695 systemd[1]: Started iscsid.service. Jul 2 08:01:54.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.148218 systemd[1]: Starting dracut-initqueue.service... Jul 2 08:01:54.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.154489 systemd[1]: Finished dracut-initqueue.service. Jul 2 08:01:54.154635 systemd[1]: Reached target remote-fs-pre.target. Jul 2 08:01:54.154721 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:01:54.154806 systemd[1]: Reached target remote-fs.target. Jul 2 08:01:54.155345 systemd[1]: Starting dracut-pre-mount.service... Jul 2 08:01:54.160341 systemd[1]: Finished dracut-pre-mount.service. Jul 2 08:01:54.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 08:01:54.299707 ignition[607]: Ignition 2.14.0 Jul 2 08:01:54.299721 ignition[607]: Stage: fetch-offline Jul 2 08:01:54.299764 ignition[607]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:01:54.299787 ignition[607]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 2 08:01:54.303301 ignition[607]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 2 08:01:54.303415 ignition[607]: parsed url from cmdline: "" Jul 2 08:01:54.303417 ignition[607]: no config URL provided Jul 2 08:01:54.303421 ignition[607]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:01:54.303426 ignition[607]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:01:54.310058 ignition[607]: config successfully fetched Jul 2 08:01:54.310081 ignition[607]: parsing config with SHA512: 603074e5e8446c4c033b5c6872021bafe73776cbf82a096d92d98c563630280059abae859dd9a6ddbd87c6f9e5a23258ed25c9dfae4416a0caa6bf32bf02a461 Jul 2 08:01:54.313751 unknown[607]: fetched base config from "system" Jul 2 08:01:54.313954 unknown[607]: fetched user config from "vmware" Jul 2 08:01:54.314490 ignition[607]: fetch-offline: fetch-offline passed Jul 2 08:01:54.314675 ignition[607]: Ignition finished successfully Jul 2 08:01:54.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.315308 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 08:01:54.315456 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 08:01:54.315919 systemd[1]: Starting ignition-kargs.service... Jul 2 08:01:54.322072 ignition[755]: Ignition 2.14.0 Jul 2 08:01:54.322079 ignition[755]: Stage: kargs Jul 2 08:01:54.322141 ignition[755]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:01:54.322153 ignition[755]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 2 08:01:54.323489 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 2 08:01:54.325000 ignition[755]: kargs: kargs passed Jul 2 08:01:54.325028 ignition[755]: Ignition finished successfully Jul 2 08:01:54.325997 systemd[1]: Finished ignition-kargs.service. Jul 2 08:01:54.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.326622 systemd[1]: Starting ignition-disks.service... Jul 2 08:01:54.330980 ignition[761]: Ignition 2.14.0 Jul 2 08:01:54.331235 ignition[761]: Stage: disks Jul 2 08:01:54.331597 ignition[761]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:01:54.331752 ignition[761]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 2 08:01:54.333060 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 2 08:01:54.334747 ignition[761]: disks: disks passed Jul 2 08:01:54.334895 ignition[761]: Ignition finished successfully Jul 2 08:01:54.335490 systemd[1]: Finished ignition-disks.service. 
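The iscsid warning a few entries back asks for an /etc/iscsi/initiatorname.iscsi file containing a line of the form InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]; it only matters when software iSCSI is actually in use. A hypothetical sketch of creating such a file follows; the IQN value is an invented placeholder, not anything taken from this system.

```python
# Hypothetical sketch addressing the iscsid warning above: create the missing
# /etc/iscsi/initiatorname.iscsi (needs root). The IQN is an illustrative
# placeholder in the format iqn.yyyy-mm.<reversed domain name>[:identifier],
# not a value taken from this machine.
from pathlib import Path

iqn = "iqn.2024-07.com.example:node1"  # assumed example identifier
conf = Path("/etc/iscsi/initiatorname.iscsi")
conf.parent.mkdir(parents=True, exist_ok=True)
conf.write_text(f"InitiatorName={iqn}\n")
```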
Jul 2 08:01:54.335676 systemd[1]: Reached target initrd-root-device.target. Jul 2 08:01:54.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.335789 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:01:54.335933 systemd[1]: Reached target local-fs.target. Jul 2 08:01:54.336097 systemd[1]: Reached target sysinit.target. Jul 2 08:01:54.336272 systemd[1]: Reached target basic.target. Jul 2 08:01:54.336925 systemd[1]: Starting systemd-fsck-root.service... Jul 2 08:01:54.353832 systemd-fsck[769]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 08:01:54.355550 systemd[1]: Finished systemd-fsck-root.service. Jul 2 08:01:54.356191 systemd[1]: Mounting sysroot.mount... Jul 2 08:01:54.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.362361 systemd[1]: Mounted sysroot.mount. Jul 2 08:01:54.362596 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 08:01:54.362523 systemd[1]: Reached target initrd-root-fs.target. Jul 2 08:01:54.363860 systemd[1]: Mounting sysroot-usr.mount... Jul 2 08:01:54.364240 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 08:01:54.364269 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:01:54.364287 systemd[1]: Reached target ignition-diskful.target. Jul 2 08:01:54.366443 systemd[1]: Mounted sysroot-usr.mount. Jul 2 08:01:54.367083 systemd[1]: Starting initrd-setup-root.service... Jul 2 08:01:54.370272 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:01:54.374048 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:01:54.377607 initrd-setup-root[795]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:01:54.381922 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:01:54.441676 systemd[1]: Finished initrd-setup-root.service. Jul 2 08:01:54.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.442260 systemd[1]: Starting ignition-mount.service... Jul 2 08:01:54.442754 systemd[1]: Starting sysroot-boot.service... Jul 2 08:01:54.446794 bash[820]: umount: /sysroot/usr/share/oem: not mounted. Jul 2 08:01:54.452275 ignition[821]: INFO : Ignition 2.14.0 Jul 2 08:01:54.452561 ignition[821]: INFO : Stage: mount Jul 2 08:01:54.452972 ignition[821]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:01:54.453193 ignition[821]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 2 08:01:54.454599 ignition[821]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 2 08:01:54.456016 ignition[821]: INFO : mount: mount passed Jul 2 08:01:54.456208 ignition[821]: INFO : Ignition finished successfully Jul 2 08:01:54.456828 systemd[1]: Finished ignition-mount.service. 
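Each Ignition stage above logs the SHA512 of the base config it parses (/usr/lib/ignition/base.d/base.ign). Assuming that digest is taken over the raw bytes of the file, it can be reproduced with a few lines of Python; this is an illustration, not Ignition's own code path.

```python
# Hedged sketch: recompute the "parsing config with SHA512: ..." digest that
# Ignition logs above, assuming it is the SHA512 of the raw bytes of the
# config file it reports reading.
import hashlib
from pathlib import Path

data = Path("/usr/lib/ignition/base.d/base.ign").read_bytes()
print(hashlib.sha512(data).hexdigest())
```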
Jul 2 08:01:54.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.483673 systemd[1]: Finished sysroot-boot.service. Jul 2 08:01:54.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:54.868436 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 08:01:54.888566 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (830) Jul 2 08:01:54.895120 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:01:54.895158 kernel: BTRFS info (device sda6): using free space tree Jul 2 08:01:54.895166 kernel: BTRFS info (device sda6): has skinny extents Jul 2 08:01:54.915575 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 08:01:54.919466 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 08:01:54.920190 systemd[1]: Starting ignition-files.service... Jul 2 08:01:54.932145 ignition[850]: INFO : Ignition 2.14.0 Jul 2 08:01:54.932145 ignition[850]: INFO : Stage: files Jul 2 08:01:54.932577 ignition[850]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:01:54.932577 ignition[850]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 2 08:01:54.933908 ignition[850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 2 08:01:54.936879 ignition[850]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:01:54.937692 ignition[850]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:01:54.937692 ignition[850]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:01:54.939998 ignition[850]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:01:54.940433 ignition[850]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:01:54.941604 unknown[850]: wrote ssh authorized keys file for user: core Jul 2 08:01:54.941921 ignition[850]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:01:54.942216 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 08:01:54.942463 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 08:01:55.020826 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:01:55.098192 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 08:01:55.108553 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:01:55.108863 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 08:01:55.217856 systemd-networkd[736]: ens192: Gained IPv6LL Jul 2 08:01:55.602932 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
GET result: OK Jul 2 08:01:55.651932 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 08:01:55.652160 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:01:55.652160 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 08:01:55.652160 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:01:55.652160 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:01:55.652160 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:01:55.652929 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:01:55.652929 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:01:55.652929 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:01:55.655000 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:01:55.655179 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:01:55.655179 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 08:01:55.655179 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 08:01:55.655761 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Jul 2 08:01:55.655761 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 08:01:55.663860 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3196736584" Jul 2 08:01:55.665405 ignition[850]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3196736584": device or resource busy Jul 2 08:01:55.665405 ignition[850]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3196736584", trying btrfs: device or resource busy Jul 2 08:01:55.665405 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3196736584" Jul 2 08:01:55.665405 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3196736584" Jul 2 08:01:55.667198 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (853) Jul 2 08:01:55.670329 ignition[850]: INFO : 
files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3196736584" Jul 2 08:01:55.671084 systemd[1]: mnt-oem3196736584.mount: Deactivated successfully. Jul 2 08:01:55.671417 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3196736584" Jul 2 08:01:55.671624 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Jul 2 08:01:55.671624 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 08:01:55.671624 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 08:01:55.940597 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Jul 2 08:01:56.086455 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 08:01:56.089670 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 2 08:01:56.089944 ignition[850]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 2 08:01:56.090138 ignition[850]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" Jul 2 08:01:56.090281 ignition[850]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" Jul 2 08:01:56.090421 ignition[850]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Jul 2 08:01:56.090595 ignition[850]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:01:56.090837 ignition[850]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:01:56.091019 ignition[850]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Jul 2 08:01:56.091162 ignition[850]: INFO : files: op(14): [started] processing unit "coreos-metadata.service" Jul 2 08:01:56.091326 ignition[850]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 08:01:56.091582 ignition[850]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 08:01:56.091768 ignition[850]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service" Jul 2 08:01:56.091914 ignition[850]: INFO : files: op(16): [started] setting preset to enabled for "vmtoolsd.service" Jul 2 08:01:56.092108 ignition[850]: INFO : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service" Jul 2 08:01:56.092258 ignition[850]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:01:56.092423 ignition[850]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:01:56.092583 ignition[850]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 08:01:56.092740 ignition[850]: INFO : files: op(18): op(19): [started] removing 
enablement symlink(s) for "coreos-metadata.service" Jul 2 08:01:56.272025 ignition[850]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 08:01:56.272352 ignition[850]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 08:01:56.272776 ignition[850]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:01:56.273078 ignition[850]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:01:56.273268 ignition[850]: INFO : files: files passed Jul 2 08:01:56.273414 ignition[850]: INFO : Ignition finished successfully Jul 2 08:01:56.275136 systemd[1]: Finished ignition-files.service. Jul 2 08:01:56.277752 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 2 08:01:56.277791 kernel: audit: type=1130 audit(1719907316.273:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.275812 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 08:01:56.279175 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 08:01:56.280174 systemd[1]: Starting ignition-quench.service... Jul 2 08:01:56.282287 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:01:56.282495 systemd[1]: Finished ignition-quench.service. Jul 2 08:01:56.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.287754 kernel: audit: type=1130 audit(1719907316.281:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.287789 kernel: audit: type=1131 audit(1719907316.281:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.288790 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:01:56.289563 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 08:01:56.289753 systemd[1]: Reached target ignition-complete.target. Jul 2 08:01:56.292491 kernel: audit: type=1130 audit(1719907316.288:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:01:56.292889 systemd[1]: Starting initrd-parse-etc.service... Jul 2 08:01:56.302258 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:01:56.302519 systemd[1]: Finished initrd-parse-etc.service. Jul 2 08:01:56.302863 systemd[1]: Reached target initrd-fs.target. Jul 2 08:01:56.303125 systemd[1]: Reached target initrd.target. Jul 2 08:01:56.303416 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 08:01:56.304348 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 08:01:56.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.309594 kernel: audit: type=1130 audit(1719907316.301:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.309613 kernel: audit: type=1131 audit(1719907316.301:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.311094 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 08:01:56.311650 systemd[1]: Starting initrd-cleanup.service... Jul 2 08:01:56.314137 kernel: audit: type=1130 audit(1719907316.309:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.317429 systemd[1]: Stopped target network.target. Jul 2 08:01:56.317586 systemd[1]: Stopped target nss-lookup.target. Jul 2 08:01:56.317744 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 08:01:56.317909 systemd[1]: Stopped target timers.target. Jul 2 08:01:56.318067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:01:56.318127 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 08:01:56.320680 kernel: audit: type=1131 audit(1719907316.316:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.318393 systemd[1]: Stopped target initrd.target. Jul 2 08:01:56.320805 systemd[1]: Stopped target basic.target. Jul 2 08:01:56.320970 systemd[1]: Stopped target ignition-complete.target. Jul 2 08:01:56.321140 systemd[1]: Stopped target ignition-diskful.target. Jul 2 08:01:56.321308 systemd[1]: Stopped target initrd-root-device.target. Jul 2 08:01:56.321479 systemd[1]: Stopped target remote-fs.target. Jul 2 08:01:56.321647 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 08:01:56.321817 systemd[1]: Stopped target sysinit.target. 
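Earlier, during the Ignition files stage, presets were set to enabled for vmtoolsd.service and prepare-helm.service and to disabled for coreos-metadata.service. Below is a hedged sketch of a systemd preset file expressing that same policy; the preset path and file name are assumptions for illustration, not values taken from the log.

```python
# Hedged sketch (needs root): a systemd preset file matching the enable/disable
# decisions the Ignition files stage logged above. The directory and file name
# are illustrative assumptions; Ignition writes its own preset file.
from pathlib import Path

preset = (
    "enable vmtoolsd.service\n"
    "enable prepare-helm.service\n"
    "disable coreos-metadata.service\n"
)
preset_dir = Path("/etc/systemd/system-preset")
preset_dir.mkdir(parents=True, exist_ok=True)
(preset_dir / "20-ignition.preset").write_text(preset)
```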
Jul 2 08:01:56.321980 systemd[1]: Stopped target local-fs.target. Jul 2 08:01:56.322143 systemd[1]: Stopped target local-fs-pre.target. Jul 2 08:01:56.322307 systemd[1]: Stopped target swap.target. Jul 2 08:01:56.322444 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:01:56.322519 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 08:01:56.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.323604 systemd[1]: Stopped target cryptsetup.target. Jul 2 08:01:56.327869 kernel: audit: type=1131 audit(1719907316.321:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.327881 kernel: audit: type=1131 audit(1719907316.324:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.325207 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:01:56.325272 systemd[1]: Stopped dracut-initqueue.service. Jul 2 08:01:56.325436 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:01:56.325502 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 08:01:56.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.328673 systemd[1]: Stopped target paths.target. Jul 2 08:01:56.328919 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:01:56.329111 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 08:01:56.329374 systemd[1]: Stopped target slices.target. Jul 2 08:01:56.329626 systemd[1]: Stopped target sockets.target. Jul 2 08:01:56.329865 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:01:56.329917 systemd[1]: Closed iscsid.socket. Jul 2 08:01:56.330273 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:01:56.330319 systemd[1]: Closed iscsiuio.socket. Jul 2 08:01:56.330682 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:01:56.330748 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 08:01:56.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.331166 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:01:56.331229 systemd[1]: Stopped ignition-files.service. Jul 2 08:01:56.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.332120 systemd[1]: Stopping ignition-mount.service... Jul 2 08:01:56.332386 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:01:56.332453 systemd[1]: Stopped kmod-static-nodes.service. 
Jul 2 08:01:56.333213 systemd[1]: Stopping sysroot-boot.service... Jul 2 08:01:56.333755 systemd[1]: Stopping systemd-networkd.service... Jul 2 08:01:56.334130 systemd[1]: Stopping systemd-resolved.service... Jul 2 08:01:56.334357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:01:56.334582 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 08:01:56.334917 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:01:56.335127 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 08:01:56.337135 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:01:56.337277 ignition[889]: INFO : Ignition 2.14.0 Jul 2 08:01:56.337277 ignition[889]: INFO : Stage: umount Jul 2 08:01:56.337277 ignition[889]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:01:56.337277 ignition[889]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 2 08:01:56.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.338655 ignition[889]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 2 08:01:56.338867 systemd[1]: Finished initrd-cleanup.service. Jul 2 08:01:56.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.339297 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:01:56.339501 systemd[1]: Stopped systemd-networkd.service. Jul 2 08:01:56.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.340776 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:01:56.340972 systemd[1]: Stopped systemd-resolved.service. Jul 2 08:01:56.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.340000 audit: BPF prog-id=9 op=UNLOAD Jul 2 08:01:56.341753 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:01:56.341912 systemd[1]: Closed systemd-networkd.socket. Jul 2 08:01:56.342537 systemd[1]: Stopping network-cleanup.service... Jul 2 08:01:56.342821 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:01:56.342990 systemd[1]: Stopped parse-ip-for-networkd.service. 
Jul 2 08:01:56.343245 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 2 08:01:56.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.343449 systemd[1]: Stopped afterburn-network-kargs.service. Jul 2 08:01:56.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.343743 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:01:56.343895 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:01:56.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.344219 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:01:56.344386 systemd[1]: Stopped systemd-modules-load.service. Jul 2 08:01:56.343000 audit: BPF prog-id=6 op=UNLOAD Jul 2 08:01:56.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.345123 ignition[889]: INFO : umount: umount passed Jul 2 08:01:56.345526 ignition[889]: INFO : Ignition finished successfully Jul 2 08:01:56.346086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 08:01:56.346608 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:01:56.346793 systemd[1]: Stopped ignition-mount.service. Jul 2 08:01:56.347312 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:01:56.347467 systemd[1]: Stopped ignition-disks.service. Jul 2 08:01:56.347711 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:01:56.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.347902 systemd[1]: Stopped ignition-kargs.service. Jul 2 08:01:56.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.348160 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:01:56.348307 systemd[1]: Stopped ignition-setup.service. Jul 2 08:01:56.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.350379 systemd[1]: Stopping systemd-udevd.service... Jul 2 08:01:56.352254 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:01:56.352446 systemd[1]: Stopped network-cleanup.service. Jul 2 08:01:56.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:01:56.354311 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:01:56.354517 systemd[1]: Stopped systemd-udevd.service. Jul 2 08:01:56.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.355097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:01:56.355259 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 08:01:56.355479 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:01:56.355637 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 08:01:56.355846 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:01:56.355993 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 08:01:56.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.356251 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:01:56.356396 systemd[1]: Stopped dracut-cmdline.service. Jul 2 08:01:56.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.356729 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:01:56.356880 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 08:01:56.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.357507 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 08:01:56.357854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:01:56.358012 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 08:01:56.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.360724 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:01:56.360912 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 08:01:56.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.369764 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:01:56.542880 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:01:56.542936 systemd[1]: Stopped sysroot-boot.service. Jul 2 08:01:56.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.543217 systemd[1]: Reached target initrd-switch-root.target. Jul 2 08:01:56.543325 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jul 2 08:01:56.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:56.543348 systemd[1]: Stopped initrd-setup-root.service. Jul 2 08:01:56.543906 systemd[1]: Starting initrd-switch-root.service... Jul 2 08:01:56.550600 systemd[1]: Switching root. Jul 2 08:01:56.573032 iscsid[741]: iscsid shutting down. Jul 2 08:01:56.573257 systemd-journald[216]: Journal stopped Jul 2 08:02:00.037958 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jul 2 08:02:00.037977 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 08:02:00.037985 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 08:02:00.037992 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 08:02:00.037997 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:02:00.038004 kernel: SELinux: policy capability open_perms=1 Jul 2 08:02:00.038011 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:02:00.038018 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:02:00.038024 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:02:00.038030 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:02:00.038036 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:02:00.038042 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:02:00.038049 systemd[1]: Successfully loaded SELinux policy in 38.909ms. Jul 2 08:02:00.038057 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.551ms. Jul 2 08:02:00.038065 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 08:02:00.038072 systemd[1]: Detected virtualization vmware. Jul 2 08:02:00.038080 systemd[1]: Detected architecture x86-64. Jul 2 08:02:00.038086 systemd[1]: Detected first boot. Jul 2 08:02:00.038093 systemd[1]: Initializing machine ID from random generator. Jul 2 08:02:00.038099 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 08:02:00.038105 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:02:00.038112 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:02:00.038119 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:02:00.038126 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:02:00.038134 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 08:02:00.038141 systemd[1]: Stopped iscsiuio.service. Jul 2 08:02:00.038148 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 08:02:00.038155 systemd[1]: Stopped iscsid.service. Jul 2 08:02:00.038161 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:02:00.038168 systemd[1]: Stopped initrd-switch-root.service. 
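The warnings above about locksmithd.service using CPUShares= and MemoryLimit= point at the cgroup-v2 replacements CPUWeight= and MemoryMax=. One hedged way to apply them without editing the shipped unit is a drop-in override, sketched below; the drop-in name and the chosen values are illustrative assumptions, not taken from the unit file itself.

```python
# Hedged sketch (needs root): a drop-in override for locksmithd.service using
# the directives systemd recommends above in place of the legacy ones.
from pathlib import Path

dropin_dir = Path("/etc/systemd/system/locksmithd.service.d")
dropin_dir.mkdir(parents=True, exist_ok=True)
(dropin_dir / "10-cgroup-v2.conf").write_text(
    "[Service]\n"
    "CPUWeight=100\n"       # assumed replacement for CPUShares=
    "MemoryMax=infinity\n"  # assumed replacement for MemoryLimit=
)
```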
Jul 2 08:02:00.038174 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:02:00.038182 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 08:02:00.038189 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 08:02:00.038195 systemd[1]: Created slice system-getty.slice. Jul 2 08:02:00.038202 systemd[1]: Created slice system-modprobe.slice. Jul 2 08:02:00.038209 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 08:02:00.038215 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 08:02:00.038221 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 08:02:00.038228 systemd[1]: Created slice user.slice. Jul 2 08:02:00.038236 systemd[1]: Started systemd-ask-password-console.path. Jul 2 08:02:00.038244 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 08:02:00.038251 systemd[1]: Set up automount boot.automount. Jul 2 08:02:00.038258 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 08:02:00.038265 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 08:02:00.038290 systemd[1]: Stopped target initrd-fs.target. Jul 2 08:02:00.038297 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 08:02:00.038304 systemd[1]: Reached target integritysetup.target. Jul 2 08:02:00.038311 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:02:00.038318 systemd[1]: Reached target remote-fs.target. Jul 2 08:02:00.038325 systemd[1]: Reached target slices.target. Jul 2 08:02:00.038332 systemd[1]: Reached target swap.target. Jul 2 08:02:00.038339 systemd[1]: Reached target torcx.target. Jul 2 08:02:00.038346 systemd[1]: Reached target veritysetup.target. Jul 2 08:02:00.038353 systemd[1]: Listening on systemd-coredump.socket. Jul 2 08:02:00.038376 systemd[1]: Listening on systemd-initctl.socket. Jul 2 08:02:00.038383 systemd[1]: Listening on systemd-networkd.socket. Jul 2 08:02:00.038390 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 08:02:00.038397 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 08:02:00.038404 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 08:02:00.038411 systemd[1]: Mounting dev-hugepages.mount... Jul 2 08:02:00.038418 systemd[1]: Mounting dev-mqueue.mount... Jul 2 08:02:00.038426 systemd[1]: Mounting media.mount... Jul 2 08:02:00.038433 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:00.038440 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 08:02:00.038447 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 08:02:00.038454 systemd[1]: Mounting tmp.mount... Jul 2 08:02:00.038461 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 08:02:00.038468 systemd[1]: Starting ignition-delete-config.service... Jul 2 08:02:00.038475 systemd[1]: Starting kmod-static-nodes.service... Jul 2 08:02:00.038482 systemd[1]: Starting modprobe@configfs.service... Jul 2 08:02:00.038490 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:02:00.038497 systemd[1]: Starting modprobe@drm.service... Jul 2 08:02:00.038504 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:02:00.038511 systemd[1]: Starting modprobe@fuse.service... Jul 2 08:02:00.038518 systemd[1]: Starting modprobe@loop.service... Jul 2 08:02:00.038525 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:02:00.048750 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jul 2 08:02:00.048764 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 08:02:00.048772 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:02:00.048782 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:02:00.048789 systemd[1]: Stopped systemd-journald.service. Jul 2 08:02:00.048796 systemd[1]: Starting systemd-journald.service... Jul 2 08:02:00.048803 systemd[1]: Starting systemd-modules-load.service... Jul 2 08:02:00.048810 systemd[1]: Starting systemd-network-generator.service... Jul 2 08:02:00.048816 systemd[1]: Starting systemd-remount-fs.service... Jul 2 08:02:00.048823 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 08:02:00.048830 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:02:00.048844 systemd[1]: Stopped verity-setup.service. Jul 2 08:02:00.048854 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:00.048861 systemd[1]: Mounted dev-hugepages.mount. Jul 2 08:02:00.048869 systemd[1]: Mounted dev-mqueue.mount. Jul 2 08:02:00.048876 systemd[1]: Mounted media.mount. Jul 2 08:02:00.048883 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 08:02:00.048890 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 08:02:00.048897 systemd[1]: Mounted tmp.mount. Jul 2 08:02:00.048904 systemd[1]: Finished kmod-static-nodes.service. Jul 2 08:02:00.048910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:02:00.048919 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:02:00.048926 systemd[1]: Finished systemd-remount-fs.service. Jul 2 08:02:00.048933 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:02:00.048939 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 08:02:00.048946 systemd[1]: Starting systemd-random-seed.service... Jul 2 08:02:00.048953 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 08:02:00.048960 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:02:00.048966 systemd[1]: Finished modprobe@configfs.service. Jul 2 08:02:00.048973 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:02:00.048981 systemd[1]: Finished modprobe@drm.service. Jul 2 08:02:00.048988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:02:00.048995 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:02:00.049002 systemd[1]: Finished systemd-network-generator.service. Jul 2 08:02:00.049009 systemd[1]: Reached target network-pre.target. Jul 2 08:02:00.049016 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 08:02:00.049023 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:02:00.049030 systemd[1]: Starting systemd-sysusers.service... Jul 2 08:02:00.049041 systemd-journald[1002]: Journal started Jul 2 08:02:00.049075 systemd-journald[1002]: Runtime Journal (/run/log/journal/9f6393552792465d9cce6f417827c591) is 4.8M, max 38.8M, 34.0M free. 
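The journal header above reports the runtime journal at 4.8M used out of a 38.8M cap, with 34.0M free. The free figure is simply the cap minus current usage, and the cap is consistent with journald's documented default of 10% of the backing /run tmpfs (which is itself typically sized to roughly 20% of RAM). A small arithmetic check, under those assumptions:

```python
# Arithmetic check of the journald size line above, assuming the documented
# default of capping the runtime journal at 10% of the backing /run tmpfs.
runtime_max_mib = 38.8
current_mib = 4.8

print(f"free: {runtime_max_mib - current_mib:.1f}M")       # 34.0M, as logged
print(f"implied /run size: ~{runtime_max_mib * 10:.0f}M")  # ~388M at the 10% default
```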
Jul 2 08:01:56.816000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:01:56.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:01:56.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:01:56.864000 audit: BPF prog-id=10 op=LOAD Jul 2 08:01:56.864000 audit: BPF prog-id=10 op=UNLOAD Jul 2 08:01:56.864000 audit: BPF prog-id=11 op=LOAD Jul 2 08:01:56.864000 audit: BPF prog-id=11 op=UNLOAD Jul 2 08:01:57.273000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 08:01:57.273000 audit[922]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:01:57.273000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:01:57.275000 audit[922]: AVC avc: denied { associate } for pid=922 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 08:01:57.275000 audit[922]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:01:57.275000 audit: CWD cwd="/" Jul 2 08:01:57.275000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:01:57.275000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:01:57.275000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:01:59.903000 audit: BPF prog-id=12 op=LOAD Jul 2 08:01:59.903000 audit: BPF prog-id=3 op=UNLOAD Jul 2 08:01:59.903000 audit: BPF prog-id=13 op=LOAD Jul 2 08:01:59.903000 audit: BPF prog-id=14 op=LOAD Jul 2 08:01:59.903000 audit: BPF prog-id=4 op=UNLOAD Jul 2 08:01:59.903000 audit: BPF prog-id=5 op=UNLOAD Jul 2 08:01:59.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.916000 audit: BPF prog-id=12 op=UNLOAD Jul 2 08:01:59.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.973000 audit: BPF prog-id=15 op=LOAD Jul 2 08:01:59.973000 audit: BPF prog-id=16 op=LOAD Jul 2 08:01:59.973000 audit: BPF prog-id=17 op=LOAD Jul 2 08:01:59.973000 audit: BPF prog-id=13 op=UNLOAD Jul 2 08:01:59.973000 audit: BPF prog-id=14 op=UNLOAD Jul 2 08:01:59.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:02:00.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.035000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 08:02:00.035000 audit[1002]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffccd469a00 a2=4000 a3=7ffccd469a9c items=0 ppid=1 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:02:00.035000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 08:02:00.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:01:59.902896 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:02:00.052019 systemd[1]: Finished systemd-random-seed.service. Jul 2 08:02:00.052032 systemd[1]: Started systemd-journald.service. Jul 2 08:02:00.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:01:57.270982 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:01:59.906279 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:01:57.271592 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:01:57.271611 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:01:57.271637 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 08:01:57.271645 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 08:01:57.271671 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 08:01:57.271681 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 08:01:57.271841 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 08:02:00.052560 jq[989]: true Jul 2 08:01:57.271874 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:01:57.271884 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:01:57.274304 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 08:01:57.274331 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 08:01:57.274347 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 08:01:57.274358 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 08:01:57.274371 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 08:01:57.274381 
/usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 08:01:59.554457 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:01:59.554640 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:01:59.554803 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:01:59.555050 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:01:59.555085 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 08:01:59.555138 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2024-07-02T08:01:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 08:02:00.053293 jq[1037]: true Jul 2 08:02:00.054326 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 08:02:00.054546 systemd[1]: Reached target first-boot-complete.target. Jul 2 08:02:00.056164 systemd[1]: Starting systemd-journal-flush.service... Jul 2 08:02:00.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.057728 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:02:00.060369 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:02:00.062679 kernel: fuse: init (API version 7.34) Jul 2 08:02:00.069930 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:02:00.070007 systemd[1]: Finished modprobe@fuse.service. Jul 2 08:02:00.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.070984 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Jul 2 08:02:00.072924 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 08:02:00.075538 kernel: loop: module loaded Jul 2 08:02:00.080392 systemd-journald[1002]: Time spent on flushing to /var/log/journal/9f6393552792465d9cce6f417827c591 is 19.408ms for 2007 entries. Jul 2 08:02:00.080392 systemd-journald[1002]: System Journal (/var/log/journal/9f6393552792465d9cce6f417827c591) is 8.0M, max 584.8M, 576.8M free. Jul 2 08:02:00.198353 systemd-journald[1002]: Received client request to flush runtime journal. Jul 2 08:02:00.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.081668 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:02:00.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.199040 udevadm[1047]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 08:02:00.081749 systemd[1]: Finished modprobe@loop.service. Jul 2 08:02:00.081924 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:02:00.123187 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:02:00.124135 systemd[1]: Starting systemd-udev-settle.service... Jul 2 08:02:00.168812 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:02:00.198813 systemd[1]: Finished systemd-journal-flush.service. Jul 2 08:02:00.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.242149 systemd[1]: Finished systemd-sysusers.service. Jul 2 08:02:00.317511 ignition[1041]: Ignition 2.14.0 Jul 2 08:02:00.317759 ignition[1041]: deleting config from guestinfo properties Jul 2 08:02:00.321743 ignition[1041]: Successfully deleted config Jul 2 08:02:00.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.322382 systemd[1]: Finished ignition-delete-config.service. Jul 2 08:02:00.691501 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 08:02:00.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 08:02:00.690000 audit: BPF prog-id=18 op=LOAD Jul 2 08:02:00.690000 audit: BPF prog-id=19 op=LOAD Jul 2 08:02:00.690000 audit: BPF prog-id=7 op=UNLOAD Jul 2 08:02:00.690000 audit: BPF prog-id=8 op=UNLOAD Jul 2 08:02:00.692588 systemd[1]: Starting systemd-udevd.service... Jul 2 08:02:00.703860 systemd-udevd[1054]: Using default interface naming scheme 'v252'. Jul 2 08:02:00.959656 systemd[1]: Started systemd-udevd.service. Jul 2 08:02:00.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:00.959000 audit: BPF prog-id=20 op=LOAD Jul 2 08:02:00.961362 systemd[1]: Starting systemd-networkd.service... Jul 2 08:02:00.966000 audit: BPF prog-id=21 op=LOAD Jul 2 08:02:00.966000 audit: BPF prog-id=22 op=LOAD Jul 2 08:02:00.966000 audit: BPF prog-id=23 op=LOAD Jul 2 08:02:00.968383 systemd[1]: Starting systemd-userdbd.service... Jul 2 08:02:00.997466 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 08:02:01.005634 systemd[1]: Started systemd-userdbd.service. Jul 2 08:02:01.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.050558 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 08:02:01.067548 kernel: ACPI: button: Power Button [PWRF] Jul 2 08:02:01.124548 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1060) Jul 2 08:02:01.126951 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Jul 2 08:02:01.127083 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 2 08:02:01.127906 kernel: Guest personality initialized and is active Jul 2 08:02:01.126000 audit[1058]: AVC avc: denied { confidentiality } for pid=1058 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 08:02:01.126000 audit[1058]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55bf3b936f30 a1=3207c a2=7f23fb626bc5 a3=5 items=108 ppid=1054 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:02:01.126000 audit: CWD cwd="/" Jul 2 08:02:01.126000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=1 name=(null) inode=25256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=2 name=(null) inode=25256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=3 name=(null) inode=25257 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=4 name=(null) inode=25256 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=5 name=(null) inode=25258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=6 name=(null) inode=25256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=7 name=(null) inode=25259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=8 name=(null) inode=25259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=9 name=(null) inode=25260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=10 name=(null) inode=25259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=11 name=(null) inode=25261 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=12 name=(null) inode=25259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=13 name=(null) inode=25262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=14 name=(null) inode=25259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=15 name=(null) inode=25263 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=16 name=(null) inode=25259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=17 name=(null) inode=25264 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=18 name=(null) inode=25256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=19 name=(null) inode=25265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=20 name=(null) inode=25265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=21 name=(null) inode=25266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=22 name=(null) inode=25265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=23 name=(null) inode=25267 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=24 name=(null) inode=25265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=25 name=(null) inode=25268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=26 name=(null) inode=25265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=27 name=(null) inode=25269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=28 name=(null) inode=25265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=29 name=(null) inode=25270 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=30 name=(null) inode=25256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=31 name=(null) inode=25271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=32 name=(null) inode=25271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=33 name=(null) inode=25272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=34 name=(null) inode=25271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=35 name=(null) inode=25273 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=36 name=(null) inode=25271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=37 
name=(null) inode=25274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=38 name=(null) inode=25271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=39 name=(null) inode=25275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=40 name=(null) inode=25271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=41 name=(null) inode=25276 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=42 name=(null) inode=25256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=43 name=(null) inode=25277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=44 name=(null) inode=25277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=45 name=(null) inode=25278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=46 name=(null) inode=25277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=47 name=(null) inode=25279 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=48 name=(null) inode=25277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=49 name=(null) inode=25280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=50 name=(null) inode=25277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=51 name=(null) inode=25281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=52 name=(null) inode=25277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=53 name=(null) inode=25282 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=55 name=(null) inode=25283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=56 name=(null) inode=25283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=57 name=(null) inode=25284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=58 name=(null) inode=25283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=59 name=(null) inode=25285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=60 name=(null) inode=25283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=61 name=(null) inode=25286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=62 name=(null) inode=25286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=63 name=(null) inode=25287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=64 name=(null) inode=25286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=65 name=(null) inode=25288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=66 name=(null) inode=25286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=67 name=(null) inode=25289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=68 name=(null) inode=25286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=69 name=(null) inode=25290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=70 name=(null) inode=25286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=71 name=(null) inode=25291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.134570 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 2 08:02:01.134590 kernel: Initialized host personality Jul 2 08:02:01.126000 audit: PATH item=72 name=(null) inode=25283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=73 name=(null) inode=25292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=74 name=(null) inode=25292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=75 name=(null) inode=25293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=76 name=(null) inode=25292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=77 name=(null) inode=25294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=78 name=(null) inode=25292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=79 name=(null) inode=25295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=80 name=(null) inode=25292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=81 name=(null) inode=25296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=82 name=(null) inode=25292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=83 name=(null) inode=25297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=84 name=(null) inode=25283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=85 name=(null) inode=25298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=86 name=(null) inode=25298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=87 name=(null) inode=25299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=88 name=(null) inode=25298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=89 name=(null) inode=25300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=90 name=(null) inode=25298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=91 name=(null) inode=25301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=92 name=(null) inode=25298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=93 name=(null) inode=25302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=94 name=(null) inode=25298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=95 name=(null) inode=25303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=96 name=(null) inode=25283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=97 name=(null) inode=25304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=98 name=(null) inode=25304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=99 name=(null) inode=25305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=100 name=(null) inode=25304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=101 name=(null) inode=25306 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=102 name=(null) inode=25304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=103 name=(null) inode=25307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=104 name=(null) inode=25304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=105 name=(null) inode=25308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=106 name=(null) inode=25304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PATH item=107 name=(null) inode=25309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:02:01.126000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 08:02:01.145197 (udev-worker)[1069]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 2 08:02:01.151478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 08:02:01.157549 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 08:02:01.157606 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 2 08:02:01.169555 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 08:02:01.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.254805 systemd[1]: Finished systemd-udev-settle.service. Jul 2 08:02:01.255991 systemd[1]: Starting lvm2-activation-early.service... Jul 2 08:02:01.308617 systemd-networkd[1062]: lo: Link UP Jul 2 08:02:01.308624 systemd-networkd[1062]: lo: Gained carrier Jul 2 08:02:01.308934 systemd-networkd[1062]: Enumeration completed Jul 2 08:02:01.309002 systemd[1]: Started systemd-networkd.service. Jul 2 08:02:01.311543 kernel: kauditd_printk_skb: 219 callbacks suppressed Jul 2 08:02:01.311576 kernel: audit: type=1130 audit(1719907321.307:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.309221 systemd-networkd[1062]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Jul 2 08:02:01.326610 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 2 08:02:01.326766 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 2 08:02:01.327541 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jul 2 08:02:01.328745 systemd-networkd[1062]: ens192: Link UP Jul 2 08:02:01.328828 systemd-networkd[1062]: ens192: Gained carrier Jul 2 08:02:01.423377 lvm[1087]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:02:01.461126 systemd[1]: Finished lvm2-activation-early.service. Jul 2 08:02:01.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.461320 systemd[1]: Reached target cryptsetup.target. Jul 2 08:02:01.464546 kernel: audit: type=1130 audit(1719907321.459:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.465295 systemd[1]: Starting lvm2-activation.service... Jul 2 08:02:01.468345 lvm[1088]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:02:01.498117 systemd[1]: Finished lvm2-activation.service. Jul 2 08:02:01.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.498307 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:02:01.501543 kernel: audit: type=1130 audit(1719907321.496:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.500949 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:02:01.500976 systemd[1]: Reached target local-fs.target. Jul 2 08:02:01.501096 systemd[1]: Reached target machines.target. Jul 2 08:02:01.502121 systemd[1]: Starting ldconfig.service... Jul 2 08:02:01.524500 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:02:01.524565 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:01.525616 systemd[1]: Starting systemd-boot-update.service... Jul 2 08:02:01.526468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 08:02:01.527538 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 08:02:01.528553 systemd[1]: Starting systemd-sysext.service... Jul 2 08:02:01.551724 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1090 (bootctl) Jul 2 08:02:01.552538 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 08:02:01.573679 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 08:02:01.585002 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Jul 2 08:02:01.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.588548 kernel: audit: type=1130 audit(1719907321.583:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:01.599047 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 08:02:01.599157 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 08:02:01.633549 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 08:02:02.419006 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:02:02.419732 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 08:02:02.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.423545 kernel: audit: type=1130 audit(1719907322.418:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.432765 systemd-fsck[1100]: fsck.fat 4.2 (2021-01-31) Jul 2 08:02:02.432765 systemd-fsck[1100]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 08:02:02.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.433857 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 08:02:02.434896 systemd[1]: Mounting boot.mount... Jul 2 08:02:02.437558 kernel: audit: type=1130 audit(1719907322.432:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.442549 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:02:02.449664 systemd[1]: Mounted boot.mount. Jul 2 08:02:02.459006 systemd[1]: Finished systemd-boot-update.service. Jul 2 08:02:02.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.462698 kernel: audit: type=1130 audit(1719907322.457:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.462741 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 08:02:02.542233 (sd-sysext)[1104]: Using extensions 'kubernetes'. Jul 2 08:02:02.543145 (sd-sysext)[1104]: Merged extensions into '/usr'. Jul 2 08:02:02.555970 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.557382 systemd[1]: Mounting usr-share-oem.mount... Jul 2 08:02:02.558328 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 08:02:02.560263 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:02:02.561219 systemd[1]: Starting modprobe@loop.service... Jul 2 08:02:02.561557 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:02:02.561648 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:02.561729 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.563469 systemd[1]: Mounted usr-share-oem.mount. Jul 2 08:02:02.563744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:02:02.563824 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:02:02.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.564138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:02:02.564205 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:02:02.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.566933 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:02:02.567007 systemd[1]: Finished modprobe@loop.service. Jul 2 08:02:02.569442 kernel: audit: type=1130 audit(1719907322.562:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.569691 kernel: audit: type=1131 audit(1719907322.562:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.569714 kernel: audit: type=1130 audit(1719907322.565:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.573144 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 08:02:02.573198 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:02:02.573999 systemd[1]: Finished systemd-sysext.service. Jul 2 08:02:02.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.575091 systemd[1]: Starting ensure-sysext.service... Jul 2 08:02:02.576055 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 08:02:02.582684 systemd[1]: Reloading. Jul 2 08:02:02.615948 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2024-07-02T08:02:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:02:02.615966 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2024-07-02T08:02:02Z" level=info msg="torcx already run" Jul 2 08:02:02.661222 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 08:02:02.682906 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:02:02.682920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:02:02.694628 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:02:02.702355 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:02:02.708848 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:02:02.733000 audit: BPF prog-id=24 op=LOAD Jul 2 08:02:02.733000 audit: BPF prog-id=25 op=LOAD Jul 2 08:02:02.733000 audit: BPF prog-id=18 op=UNLOAD Jul 2 08:02:02.733000 audit: BPF prog-id=19 op=UNLOAD Jul 2 08:02:02.734000 audit: BPF prog-id=26 op=LOAD Jul 2 08:02:02.734000 audit: BPF prog-id=21 op=UNLOAD Jul 2 08:02:02.734000 audit: BPF prog-id=27 op=LOAD Jul 2 08:02:02.734000 audit: BPF prog-id=28 op=LOAD Jul 2 08:02:02.735000 audit: BPF prog-id=22 op=UNLOAD Jul 2 08:02:02.735000 audit: BPF prog-id=23 op=UNLOAD Jul 2 08:02:02.735000 audit: BPF prog-id=29 op=LOAD Jul 2 08:02:02.735000 audit: BPF prog-id=15 op=UNLOAD Jul 2 08:02:02.735000 audit: BPF prog-id=30 op=LOAD Jul 2 08:02:02.735000 audit: BPF prog-id=31 op=LOAD Jul 2 08:02:02.735000 audit: BPF prog-id=16 op=UNLOAD Jul 2 08:02:02.735000 audit: BPF prog-id=17 op=UNLOAD Jul 2 08:02:02.735000 audit: BPF prog-id=32 op=LOAD Jul 2 08:02:02.736000 audit: BPF prog-id=20 op=UNLOAD Jul 2 08:02:02.747079 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.747928 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:02:02.749011 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:02:02.750129 systemd[1]: Starting modprobe@loop.service... Jul 2 08:02:02.750355 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:02:02.750422 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:02.750486 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.751108 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:02:02.751185 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:02:02.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.751689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:02:02.751804 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:02:02.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.752240 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:02:02.752358 systemd[1]: Finished modprobe@loop.service. Jul 2 08:02:02.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.752842 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:02:02.752960 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:02:02.754088 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.754901 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:02:02.755915 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:02:02.757683 systemd[1]: Starting modprobe@loop.service... Jul 2 08:02:02.757908 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:02:02.757977 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:02.758042 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.758498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:02:02.758672 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 2 08:02:02.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.759001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:02:02.759074 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:02:02.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.759386 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:02:02.759456 systemd[1]: Finished modprobe@loop.service. Jul 2 08:02:02.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.759759 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:02:02.759821 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:02:02.761611 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.762366 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:02:02.763128 systemd[1]: Starting modprobe@drm.service... Jul 2 08:02:02.764312 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:02:02.765332 systemd[1]: Starting modprobe@loop.service... Jul 2 08:02:02.765888 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:02:02.765969 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:02.766788 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 08:02:02.766944 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:02:02.767725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:02:02.767808 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:02:02.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:02:02.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.768126 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:02:02.768199 systemd[1]: Finished modprobe@drm.service. Jul 2 08:02:02.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.768500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:02:02.768584 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:02:02.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.768895 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:02:02.768971 systemd[1]: Finished modprobe@loop.service. Jul 2 08:02:02.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.769442 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:02:02.769506 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:02:02.770290 systemd[1]: Finished ensure-sysext.service. Jul 2 08:02:02.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.931947 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 08:02:02.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.933000 systemd[1]: Starting audit-rules.service... Jul 2 08:02:02.933900 systemd[1]: Starting clean-ca-certificates.service... Jul 2 08:02:02.934745 systemd[1]: Starting systemd-journal-catalog-update.service... 
Jul 2 08:02:02.933000 audit: BPF prog-id=33 op=LOAD Jul 2 08:02:02.936000 audit: BPF prog-id=34 op=LOAD Jul 2 08:02:02.943000 audit[1207]: SYSTEM_BOOT pid=1207 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.937101 systemd[1]: Starting systemd-resolved.service... Jul 2 08:02:02.938363 systemd[1]: Starting systemd-timesyncd.service... Jul 2 08:02:02.939350 systemd[1]: Starting systemd-update-utmp.service... Jul 2 08:02:02.947945 systemd[1]: Finished systemd-update-utmp.service. Jul 2 08:02:02.949425 systemd[1]: Finished clean-ca-certificates.service. Jul 2 08:02:02.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:02.949610 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:02:02.961688 systemd-networkd[1062]: ens192: Gained IPv6LL Jul 2 08:02:02.966067 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 08:02:02.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:03.029420 systemd[1]: Started systemd-timesyncd.service. Jul 2 08:02:03.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:02:03.029648 systemd[1]: Reached target time-set.target. Jul 2 08:02:03.057684 systemd-resolved[1205]: Positive Trust Anchors: Jul 2 08:02:03.057696 systemd-resolved[1205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:02:03.057722 systemd-resolved[1205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:02:03.064201 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 08:02:03.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:02:03.071000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 08:02:03.071000 audit[1223]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff1a631420 a2=420 a3=0 items=0 ppid=1202 pid=1223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:02:03.071000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 08:02:03.073877 augenrules[1223]: No rules Jul 2 08:02:03.074333 systemd[1]: Finished audit-rules.service. Jul 2 08:02:03.086928 systemd-resolved[1205]: Defaulting to hostname 'linux'. Jul 2 08:02:03.088002 systemd[1]: Started systemd-resolved.service. Jul 2 08:02:03.088175 systemd[1]: Reached target network.target. Jul 2 08:02:03.088266 systemd[1]: Reached target network-online.target. Jul 2 08:02:03.088359 systemd[1]: Reached target nss-lookup.target. Jul 2 08:02:03.173757 ldconfig[1089]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:02:03.200738 systemd[1]: Finished ldconfig.service. Jul 2 08:02:03.202042 systemd[1]: Starting systemd-update-done.service... Jul 2 08:02:03.215280 systemd[1]: Finished systemd-update-done.service. Jul 2 08:02:03.215505 systemd[1]: Reached target sysinit.target. Jul 2 08:02:03.215710 systemd[1]: Started motdgen.path. Jul 2 08:02:03.215844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 08:02:03.216086 systemd[1]: Started logrotate.timer. Jul 2 08:02:03.216313 systemd[1]: Started mdadm.timer. Jul 2 08:02:03.216420 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 08:02:03.216604 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:02:03.216625 systemd[1]: Reached target paths.target. Jul 2 08:02:03.216737 systemd[1]: Reached target timers.target. Jul 2 08:02:03.217046 systemd[1]: Listening on dbus.socket. Jul 2 08:02:03.218057 systemd[1]: Starting docker.socket... Jul 2 08:02:58.772369 systemd-resolved[1205]: Clock change detected. Flushing caches. Jul 2 08:02:58.772442 systemd-timesyncd[1206]: Contacted time server 198.137.202.56:123 (0.flatcar.pool.ntp.org). Jul 2 08:02:58.772477 systemd-timesyncd[1206]: Initial clock synchronization to Tue 2024-07-02 08:02:58.772342 UTC. Jul 2 08:02:58.786485 systemd[1]: Listening on sshd.socket. Jul 2 08:02:58.786712 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:58.787101 systemd[1]: Listening on docker.socket. Jul 2 08:02:58.787542 systemd[1]: Reached target sockets.target. Jul 2 08:02:58.787733 systemd[1]: Reached target basic.target. Jul 2 08:02:58.787955 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:02:58.787975 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:02:58.788960 systemd[1]: Starting containerd.service... Jul 2 08:02:58.790164 systemd[1]: Starting dbus.service... Jul 2 08:02:58.791395 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 08:02:58.792546 systemd[1]: Starting extend-filesystems.service... 
Jul 2 08:02:58.793239 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 08:02:58.796031 jq[1233]: false Jul 2 08:02:58.796433 systemd[1]: Starting kubelet.service... Jul 2 08:02:58.797360 systemd[1]: Starting motdgen.service... Jul 2 08:02:58.798110 systemd[1]: Starting prepare-helm.service... Jul 2 08:02:58.798874 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 08:02:58.799645 systemd[1]: Starting sshd-keygen.service... Jul 2 08:02:58.801148 systemd[1]: Starting systemd-logind.service... Jul 2 08:02:58.801255 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:02:58.801298 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:02:58.801715 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:02:58.802142 systemd[1]: Starting update-engine.service... Jul 2 08:02:58.803808 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 08:02:58.806796 systemd[1]: Starting vmtoolsd.service... Jul 2 08:02:58.808894 jq[1245]: true Jul 2 08:02:58.810199 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:02:58.810321 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 08:02:58.816353 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:02:58.816467 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 08:02:58.821896 jq[1250]: true Jul 2 08:02:58.826374 tar[1249]: linux-amd64/helm Jul 2 08:02:58.836035 systemd[1]: Started vmtoolsd.service. Jul 2 08:02:58.844519 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:02:58.844631 systemd[1]: Finished motdgen.service. Jul 2 08:02:58.846196 extend-filesystems[1234]: Found loop1 Jul 2 08:02:58.846480 extend-filesystems[1234]: Found sda Jul 2 08:02:58.847271 extend-filesystems[1234]: Found sda1 Jul 2 08:02:58.848983 extend-filesystems[1234]: Found sda2 Jul 2 08:02:58.849356 extend-filesystems[1234]: Found sda3 Jul 2 08:02:58.849699 dbus-daemon[1232]: [system] SELinux support is enabled Jul 2 08:02:58.849782 systemd[1]: Started dbus.service. Jul 2 08:02:58.850108 extend-filesystems[1234]: Found usr Jul 2 08:02:58.850273 extend-filesystems[1234]: Found sda4 Jul 2 08:02:58.850480 extend-filesystems[1234]: Found sda6 Jul 2 08:02:58.851076 extend-filesystems[1234]: Found sda7 Jul 2 08:02:58.851076 extend-filesystems[1234]: Found sda9 Jul 2 08:02:58.851076 extend-filesystems[1234]: Checking size of /dev/sda9 Jul 2 08:02:58.851131 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:02:58.851146 systemd[1]: Reached target system-config.target. Jul 2 08:02:58.852416 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:02:58.852426 systemd[1]: Reached target user-config.target. 
Jul 2 08:02:58.876094 extend-filesystems[1234]: Old size kept for /dev/sda9 Jul 2 08:02:58.887829 extend-filesystems[1234]: Found sr0 Jul 2 08:02:58.888515 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:02:58.888613 systemd[1]: Finished extend-filesystems.service. Jul 2 08:02:58.902940 kernel: NET: Registered PF_VSOCK protocol family Jul 2 08:02:58.912960 update_engine[1243]: I0702 08:02:58.912012 1243 main.cc:92] Flatcar Update Engine starting Jul 2 08:02:58.917012 systemd[1]: Started update-engine.service. Jul 2 08:02:58.918297 systemd[1]: Started locksmithd.service. Jul 2 08:02:58.918718 update_engine[1243]: I0702 08:02:58.918658 1243 update_check_scheduler.cc:74] Next update check in 10m8s Jul 2 08:02:58.921782 bash[1283]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:02:58.922216 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 08:02:58.930295 env[1252]: time="2024-07-02T08:02:58.930260561Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 08:02:58.943028 systemd-logind[1242]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 08:02:58.943042 systemd-logind[1242]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 08:02:58.943284 systemd-logind[1242]: New seat seat0. Jul 2 08:02:58.944366 systemd[1]: Started systemd-logind.service. Jul 2 08:02:58.997289 env[1252]: time="2024-07-02T08:02:58.997000371Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:02:58.997289 env[1252]: time="2024-07-02T08:02:58.997110241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999068526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999086845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999210132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999220178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999232401Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999238539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999281064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999433952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999507930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:02:58.999618 env[1252]: time="2024-07-02T08:02:58.999517912Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:02:58.999806 env[1252]: time="2024-07-02T08:02:58.999545256Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:02:58.999806 env[1252]: time="2024-07-02T08:02:58.999552742Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001111078Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001133556Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001145069Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001181152Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001193448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001202045Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001213286Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001222439Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001230309Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001237296Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001244552Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001255573Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001318603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:02:59.002303 env[1252]: time="2024-07-02T08:02:59.001380821Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001530352Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001550919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001559336Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001586430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001594787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001602203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001608410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001614772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001621883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001628778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001635061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001642725Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001723410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001735917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002586 env[1252]: time="2024-07-02T08:02:59.001742883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:02:59.002810 env[1252]: time="2024-07-02T08:02:59.001748857Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:02:59.002810 env[1252]: time="2024-07-02T08:02:59.001757387Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 08:02:59.002810 env[1252]: time="2024-07-02T08:02:59.001763355Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:02:59.002810 env[1252]: time="2024-07-02T08:02:59.001773196Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 08:02:59.002810 env[1252]: time="2024-07-02T08:02:59.001795351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 08:02:59.002891 env[1252]: time="2024-07-02T08:02:59.001910203Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:02:59.002891 env[1252]: time="2024-07-02T08:02:59.001952701Z" level=info msg="Connect containerd service" Jul 2 08:02:59.002891 env[1252]: time="2024-07-02T08:02:59.001972881Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.003079655Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.003229052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.003253228Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.003599697Z" level=info msg="containerd successfully booted in 0.089842s" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.004460728Z" level=info msg="Start subscribing containerd event" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.004507453Z" level=info msg="Start recovering state" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.004552066Z" level=info msg="Start event monitor" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.004573480Z" level=info msg="Start snapshots syncer" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.004582872Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:02:59.005306 env[1252]: time="2024-07-02T08:02:59.004590542Z" level=info msg="Start streaming server" Jul 2 08:02:59.003332 systemd[1]: Started containerd.service. Jul 2 08:02:59.257817 tar[1249]: linux-amd64/LICENSE Jul 2 08:02:59.257890 tar[1249]: linux-amd64/README.md Jul 2 08:02:59.261872 systemd[1]: Finished prepare-helm.service. Jul 2 08:02:59.390981 sshd_keygen[1263]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:02:59.407486 systemd[1]: Finished sshd-keygen.service. Jul 2 08:02:59.408698 systemd[1]: Starting issuegen.service... Jul 2 08:02:59.413096 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:02:59.413200 systemd[1]: Finished issuegen.service. Jul 2 08:02:59.414409 systemd[1]: Starting systemd-user-sessions.service... Jul 2 08:02:59.420703 systemd[1]: Finished systemd-user-sessions.service. Jul 2 08:02:59.421735 systemd[1]: Started getty@tty1.service. Jul 2 08:02:59.422880 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 08:02:59.423258 systemd[1]: Reached target getty.target. Jul 2 08:02:59.431442 locksmithd[1293]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:03:00.021020 systemd[1]: Started kubelet.service. Jul 2 08:03:00.021368 systemd[1]: Reached target multi-user.target. Jul 2 08:03:00.022378 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 08:03:00.028620 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 08:03:00.028721 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 08:03:00.028915 systemd[1]: Startup finished in 937ms (kernel) + 5.223s (initrd) + 7.711s (userspace) = 13.871s. Jul 2 08:03:00.061081 login[1360]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 08:03:00.062628 login[1361]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 08:03:00.069458 systemd[1]: Created slice user-500.slice. Jul 2 08:03:00.070398 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 08:03:00.074523 systemd-logind[1242]: New session 2 of user core. Jul 2 08:03:00.077245 systemd-logind[1242]: New session 1 of user core. Jul 2 08:03:00.080208 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 08:03:00.081246 systemd[1]: Starting user@500.service... Jul 2 08:03:00.094582 (systemd)[1368]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:03:00.216325 systemd[1368]: Queued start job for default target default.target. Jul 2 08:03:00.216873 systemd[1368]: Reached target paths.target. Jul 2 08:03:00.216893 systemd[1368]: Reached target sockets.target. Jul 2 08:03:00.216902 systemd[1368]: Reached target timers.target. 
Jul 2 08:03:00.216910 systemd[1368]: Reached target basic.target. Jul 2 08:03:00.216942 systemd[1368]: Reached target default.target. Jul 2 08:03:00.216960 systemd[1368]: Startup finished in 118ms. Jul 2 08:03:00.216983 systemd[1]: Started user@500.service. Jul 2 08:03:00.217772 systemd[1]: Started session-1.scope. Jul 2 08:03:00.218256 systemd[1]: Started session-2.scope. Jul 2 08:03:00.733295 kubelet[1365]: E0702 08:03:00.733248 1365 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:03:00.734532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:03:00.734605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:03:10.985282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:03:10.985567 systemd[1]: Stopped kubelet.service. Jul 2 08:03:10.987096 systemd[1]: Starting kubelet.service... Jul 2 08:03:11.039276 systemd[1]: Started kubelet.service. Jul 2 08:03:11.150545 kubelet[1397]: E0702 08:03:11.150511 1397 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:03:11.152866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:03:11.152950 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:03:21.403526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:03:21.403686 systemd[1]: Stopped kubelet.service. Jul 2 08:03:21.404876 systemd[1]: Starting kubelet.service... Jul 2 08:03:21.605826 systemd[1]: Started kubelet.service. Jul 2 08:03:21.677109 kubelet[1407]: E0702 08:03:21.677046 1407 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:03:21.678433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:03:21.678510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:03:31.929065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:03:31.929195 systemd[1]: Stopped kubelet.service. Jul 2 08:03:31.930281 systemd[1]: Starting kubelet.service... Jul 2 08:03:32.146382 systemd[1]: Started kubelet.service. Jul 2 08:03:32.207632 kubelet[1417]: E0702 08:03:32.207557 1417 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:03:32.209100 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:03:32.209216 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 08:03:38.974241 systemd[1]: Created slice system-sshd.slice. Jul 2 08:03:38.975294 systemd[1]: Started sshd@0-139.178.70.105:22-139.178.68.195:37290.service. Jul 2 08:03:39.032509 sshd[1425]: Accepted publickey for core from 139.178.68.195 port 37290 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:03:39.033464 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:03:39.035962 systemd-logind[1242]: New session 3 of user core. Jul 2 08:03:39.036391 systemd[1]: Started session-3.scope. Jul 2 08:03:39.083275 systemd[1]: Started sshd@1-139.178.70.105:22-139.178.68.195:37302.service. Jul 2 08:03:39.116711 sshd[1430]: Accepted publickey for core from 139.178.68.195 port 37302 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:03:39.117875 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:03:39.120485 systemd-logind[1242]: New session 4 of user core. Jul 2 08:03:39.121217 systemd[1]: Started session-4.scope. Jul 2 08:03:39.173001 sshd[1430]: pam_unix(sshd:session): session closed for user core Jul 2 08:03:39.176101 systemd[1]: Started sshd@2-139.178.70.105:22-139.178.68.195:37314.service. Jul 2 08:03:39.177969 systemd[1]: sshd@1-139.178.70.105:22-139.178.68.195:37302.service: Deactivated successfully. Jul 2 08:03:39.178498 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:03:39.179381 systemd-logind[1242]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:03:39.179904 systemd-logind[1242]: Removed session 4. Jul 2 08:03:39.211137 sshd[1435]: Accepted publickey for core from 139.178.68.195 port 37314 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:03:39.212177 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:03:39.215463 systemd[1]: Started session-5.scope. Jul 2 08:03:39.215872 systemd-logind[1242]: New session 5 of user core. Jul 2 08:03:39.263024 sshd[1435]: pam_unix(sshd:session): session closed for user core Jul 2 08:03:39.265137 systemd[1]: sshd@2-139.178.70.105:22-139.178.68.195:37314.service: Deactivated successfully. Jul 2 08:03:39.265522 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:03:39.265882 systemd-logind[1242]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:03:39.266583 systemd[1]: Started sshd@3-139.178.70.105:22-139.178.68.195:37324.service. Jul 2 08:03:39.267036 systemd-logind[1242]: Removed session 5. Jul 2 08:03:39.299223 sshd[1442]: Accepted publickey for core from 139.178.68.195 port 37324 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:03:39.299963 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:03:39.302933 systemd[1]: Started session-6.scope. Jul 2 08:03:39.303146 systemd-logind[1242]: New session 6 of user core. Jul 2 08:03:39.352189 sshd[1442]: pam_unix(sshd:session): session closed for user core Jul 2 08:03:39.354690 systemd[1]: Started sshd@4-139.178.70.105:22-139.178.68.195:37340.service. Jul 2 08:03:39.355166 systemd[1]: sshd@3-139.178.70.105:22-139.178.68.195:37324.service: Deactivated successfully. Jul 2 08:03:39.355547 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:03:39.355967 systemd-logind[1242]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:03:39.356489 systemd-logind[1242]: Removed session 6. 
Jul 2 08:03:39.388511 sshd[1447]: Accepted publickey for core from 139.178.68.195 port 37340 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:03:39.389427 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:03:39.391805 systemd-logind[1242]: New session 7 of user core. Jul 2 08:03:39.392289 systemd[1]: Started session-7.scope. Jul 2 08:03:39.450826 sudo[1451]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:03:39.450977 sudo[1451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:03:39.464881 systemd[1]: Starting docker.service... Jul 2 08:03:39.489853 env[1461]: time="2024-07-02T08:03:39.489819697Z" level=info msg="Starting up" Jul 2 08:03:39.490527 env[1461]: time="2024-07-02T08:03:39.490508845Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 08:03:39.490527 env[1461]: time="2024-07-02T08:03:39.490521822Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 08:03:39.490585 env[1461]: time="2024-07-02T08:03:39.490537081Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 08:03:39.490585 env[1461]: time="2024-07-02T08:03:39.490543323Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 08:03:39.491495 env[1461]: time="2024-07-02T08:03:39.491480361Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 08:03:39.491495 env[1461]: time="2024-07-02T08:03:39.491492190Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 08:03:39.491551 env[1461]: time="2024-07-02T08:03:39.491500672Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 08:03:39.491551 env[1461]: time="2024-07-02T08:03:39.491505512Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 08:03:39.509150 env[1461]: time="2024-07-02T08:03:39.509124030Z" level=info msg="Loading containers: start." Jul 2 08:03:39.613934 kernel: Initializing XFRM netlink socket Jul 2 08:03:39.688797 env[1461]: time="2024-07-02T08:03:39.688769384Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 08:03:39.788422 systemd-networkd[1062]: docker0: Link UP Jul 2 08:03:39.824798 env[1461]: time="2024-07-02T08:03:39.824776403Z" level=info msg="Loading containers: done." Jul 2 08:03:39.831486 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3842006537-merged.mount: Deactivated successfully. Jul 2 08:03:39.872508 env[1461]: time="2024-07-02T08:03:39.872473072Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:03:39.872645 env[1461]: time="2024-07-02T08:03:39.872598206Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 08:03:39.872675 env[1461]: time="2024-07-02T08:03:39.872667864Z" level=info msg="Daemon has completed initialization" Jul 2 08:03:39.955004 systemd[1]: Started docker.service. 
Jul 2 08:03:39.958417 env[1461]: time="2024-07-02T08:03:39.958391681Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:03:41.767209 env[1252]: time="2024-07-02T08:03:41.767181453Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 08:03:42.384849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 08:03:42.384970 systemd[1]: Stopped kubelet.service. Jul 2 08:03:42.386026 systemd[1]: Starting kubelet.service... Jul 2 08:03:42.388841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771181281.mount: Deactivated successfully. Jul 2 08:03:42.444562 systemd[1]: Started kubelet.service. Jul 2 08:03:42.472577 kubelet[1592]: E0702 08:03:42.472552 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:03:42.473857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:03:42.473957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:03:44.161881 update_engine[1243]: I0702 08:03:44.161849 1243 update_attempter.cc:509] Updating boot flags... Jul 2 08:03:44.378624 env[1252]: time="2024-07-02T08:03:44.378588179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:44.384932 env[1252]: time="2024-07-02T08:03:44.384905545Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:44.387024 env[1252]: time="2024-07-02T08:03:44.387009195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:44.389107 env[1252]: time="2024-07-02T08:03:44.389091956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:44.389535 env[1252]: time="2024-07-02T08:03:44.389517161Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 08:03:44.395279 env[1252]: time="2024-07-02T08:03:44.395256452Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 08:03:46.978001 env[1252]: time="2024-07-02T08:03:46.977955218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:46.990893 env[1252]: time="2024-07-02T08:03:46.990877459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:47.000336 env[1252]: time="2024-07-02T08:03:47.000323382Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:47.014055 env[1252]: time="2024-07-02T08:03:47.014036019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:47.014539 env[1252]: time="2024-07-02T08:03:47.014521910Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 08:03:47.020541 env[1252]: time="2024-07-02T08:03:47.020465775Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 08:03:48.318434 env[1252]: time="2024-07-02T08:03:48.318398935Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:48.323321 env[1252]: time="2024-07-02T08:03:48.323302004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:48.327372 env[1252]: time="2024-07-02T08:03:48.327352212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:48.331657 env[1252]: time="2024-07-02T08:03:48.331636816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:48.332231 env[1252]: time="2024-07-02T08:03:48.332209736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 08:03:48.340597 env[1252]: time="2024-07-02T08:03:48.340571062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 08:03:49.597937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973998562.mount: Deactivated successfully. 
Jul 2 08:03:50.096941 env[1252]: time="2024-07-02T08:03:50.096894752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.123985 env[1252]: time="2024-07-02T08:03:50.123955233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.131826 env[1252]: time="2024-07-02T08:03:50.131798647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.141678 env[1252]: time="2024-07-02T08:03:50.141657546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.142038 env[1252]: time="2024-07-02T08:03:50.142018585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 08:03:50.151747 env[1252]: time="2024-07-02T08:03:50.151712337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:03:50.684288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419492427.mount: Deactivated successfully. Jul 2 08:03:50.704677 env[1252]: time="2024-07-02T08:03:50.704652605Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.713554 env[1252]: time="2024-07-02T08:03:50.713536788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.716666 env[1252]: time="2024-07-02T08:03:50.716646508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.718001 env[1252]: time="2024-07-02T08:03:50.717981429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:50.718298 env[1252]: time="2024-07-02T08:03:50.718277229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 08:03:50.726377 env[1252]: time="2024-07-02T08:03:50.726346378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 08:03:51.465240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010090271.mount: Deactivated successfully. Jul 2 08:03:52.664886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 08:03:52.665014 systemd[1]: Stopped kubelet.service. Jul 2 08:03:52.666092 systemd[1]: Starting kubelet.service... 
Jul 2 08:03:53.450318 env[1252]: time="2024-07-02T08:03:53.450280320Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:53.451976 env[1252]: time="2024-07-02T08:03:53.451957681Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:53.454291 env[1252]: time="2024-07-02T08:03:53.453953743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:53.458881 env[1252]: time="2024-07-02T08:03:53.458855761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:53.459423 env[1252]: time="2024-07-02T08:03:53.459406980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 08:03:53.467382 env[1252]: time="2024-07-02T08:03:53.467361831Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 08:03:54.036595 systemd[1]: Started kubelet.service. Jul 2 08:03:54.084060 kubelet[1655]: E0702 08:03:54.084029 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:03:54.085262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:03:54.085342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:03:54.247711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506356716.mount: Deactivated successfully. 
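The kubelet crash-loop above ("restart counter is at 5", exit status 1) is the expected state on a node where kubeadm has not yet written /var/lib/kubelet/config.yaml; systemd keeps restarting the unit until that file appears. A small sketch, assuming the usual kubeadm file locations (only config.yaml and kubelet-client-current.pem are actually named in this log; the other two paths are conventional), that reports which of those files exist on the node:

from pathlib import Path

# Conventional kubeadm/kubelet paths; adjust if your distro lays them out differently.
EXPECTED = [
    Path("/var/lib/kubelet/config.yaml"),                     # KubeletConfiguration written by kubeadm
    Path("/etc/kubernetes/kubelet.conf"),                     # kubeconfig the kubelet uses for the API server
    Path("/etc/kubernetes/bootstrap-kubelet.conf"),           # bootstrap credentials, present only before rotation
    Path("/var/lib/kubelet/pki/kubelet-client-current.pem"),  # rotated client cert (loaded later in this log)
]

def report() -> None:
    for p in EXPECTED:
        state = "present" if p.exists() else "missing"
        print(f"{state:8} {p}")

if __name__ == "__main__":
    report()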
Jul 2 08:03:54.773558 env[1252]: time="2024-07-02T08:03:54.773526204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:54.774281 env[1252]: time="2024-07-02T08:03:54.774261018Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:54.775086 env[1252]: time="2024-07-02T08:03:54.775068310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:54.775868 env[1252]: time="2024-07-02T08:03:54.775852068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:54.776257 env[1252]: time="2024-07-02T08:03:54.776238852Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 08:03:56.438953 systemd[1]: Stopped kubelet.service. Jul 2 08:03:56.440632 systemd[1]: Starting kubelet.service... Jul 2 08:03:56.452065 systemd[1]: Reloading. Jul 2 08:03:56.505312 /usr/lib/systemd/system-generators/torcx-generator[1747]: time="2024-07-02T08:03:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:03:56.505334 /usr/lib/systemd/system-generators/torcx-generator[1747]: time="2024-07-02T08:03:56Z" level=info msg="torcx already run" Jul 2 08:03:56.569504 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:03:56.569612 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:03:56.581457 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:03:56.639336 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 08:03:56.639518 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 08:03:56.639763 systemd[1]: Stopped kubelet.service. Jul 2 08:03:56.641297 systemd[1]: Starting kubelet.service... Jul 2 08:03:57.301249 systemd[1]: Started kubelet.service. Jul 2 08:03:57.368773 kubelet[1810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:03:57.368773 kubelet[1810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
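The systemd reload above flags locksmithd.service for CPUShares= and MemoryLimit= (superseded by CPUWeight= and MemoryMax=) and rewrites docker.socket's /var/run/docker.sock to /run/docker.sock. A rough sketch that greps unit files for exactly those three patterns; it only flags lines and does not convert values (CPUShares= and CPUWeight= use different scales):

import sys
from pathlib import Path

# Only the replacements systemd complains about in the log above.
DEPRECATED = {
    "CPUShares=": "CPUWeight=",
    "MemoryLimit=": "MemoryMax=",
    "/var/run/": "/run/",
}

def scan(unit_path: Path) -> list[str]:
    findings = []
    for lineno, line in enumerate(unit_path.read_text().splitlines(), start=1):
        for old, new in DEPRECATED.items():
            if old in line:
                findings.append(f"{unit_path}:{lineno}: uses {old!r}; prefer {new!r}")
    return findings

if __name__ == "__main__":
    # e.g. python3 scan_units.py /usr/lib/systemd/system/locksmithd.service /run/systemd/system/docker.socket
    for arg in sys.argv[1:]:
        for finding in scan(Path(arg)):
            print(finding)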
Jul 2 08:03:57.368773 kubelet[1810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:03:57.369023 kubelet[1810]: I0702 08:03:57.368814 1810 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:03:57.531248 kubelet[1810]: I0702 08:03:57.531224 1810 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 08:03:57.531248 kubelet[1810]: I0702 08:03:57.531246 1810 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:03:57.531415 kubelet[1810]: I0702 08:03:57.531402 1810 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 08:03:57.543483 kubelet[1810]: E0702 08:03:57.543467 1810 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.544876 kubelet[1810]: I0702 08:03:57.544863 1810 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:03:57.554807 kubelet[1810]: I0702 08:03:57.554458 1810 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:03:57.556238 kubelet[1810]: I0702 08:03:57.556228 1810 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:03:57.556403 kubelet[1810]: I0702 08:03:57.556393 1810 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:03:57.556498 kubelet[1810]: I0702 08:03:57.556491 1810 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:03:57.556545 kubelet[1810]: I0702 08:03:57.556538 1810 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 
08:03:57.565755 kubelet[1810]: I0702 08:03:57.565738 1810 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:03:57.567438 kubelet[1810]: I0702 08:03:57.567427 1810 kubelet.go:393] "Attempting to sync node with API server" Jul 2 08:03:57.567516 kubelet[1810]: I0702 08:03:57.567506 1810 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:03:57.567606 kubelet[1810]: I0702 08:03:57.567595 1810 kubelet.go:309] "Adding apiserver pod source" Jul 2 08:03:57.567655 kubelet[1810]: I0702 08:03:57.567647 1810 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:03:57.567779 kubelet[1810]: W0702 08:03:57.567750 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.567814 kubelet[1810]: E0702 08:03:57.567786 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.570519 kubelet[1810]: I0702 08:03:57.570508 1810 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 08:03:57.577180 kubelet[1810]: W0702 08:03:57.577149 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.577180 kubelet[1810]: E0702 08:03:57.577178 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.577611 kubelet[1810]: W0702 08:03:57.577600 1810 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:03:57.578069 kubelet[1810]: I0702 08:03:57.578060 1810 server.go:1232] "Started kubelet" Jul 2 08:03:57.579538 kubelet[1810]: I0702 08:03:57.579529 1810 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:03:57.580142 kubelet[1810]: I0702 08:03:57.580134 1810 server.go:462] "Adding debug handlers to kubelet server" Jul 2 08:03:57.581591 kubelet[1810]: I0702 08:03:57.581577 1810 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 08:03:57.581717 kubelet[1810]: I0702 08:03:57.581705 1810 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:03:57.582176 kubelet[1810]: E0702 08:03:57.582153 1810 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 08:03:57.582213 kubelet[1810]: E0702 08:03:57.582178 1810 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:03:57.582325 kubelet[1810]: E0702 08:03:57.582279 1810 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de56b3b764510a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 8, 3, 57, 578047754, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 8, 3, 57, 578047754, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://139.178.70.105:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.105:6443: connect: connection refused'(may retry after sleeping) Jul 2 08:03:57.583989 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 08:03:57.584077 kubelet[1810]: I0702 08:03:57.584065 1810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:03:57.584358 kubelet[1810]: I0702 08:03:57.584349 1810 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:03:57.586026 kubelet[1810]: I0702 08:03:57.586018 1810 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:03:57.586109 kubelet[1810]: I0702 08:03:57.586102 1810 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:03:57.586936 kubelet[1810]: W0702 08:03:57.586908 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.586997 kubelet[1810]: E0702 08:03:57.586990 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.587334 kubelet[1810]: E0702 08:03:57.587327 1810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="200ms" Jul 2 08:03:57.600005 kubelet[1810]: I0702 08:03:57.599990 1810 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 2 08:03:57.603480 kubelet[1810]: I0702 08:03:57.603469 1810 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:03:57.603558 kubelet[1810]: I0702 08:03:57.603550 1810 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:03:57.603614 kubelet[1810]: I0702 08:03:57.603607 1810 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:03:57.603806 kubelet[1810]: I0702 08:03:57.603799 1810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:03:57.603862 kubelet[1810]: I0702 08:03:57.603855 1810 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:03:57.603915 kubelet[1810]: I0702 08:03:57.603908 1810 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 08:03:57.603990 kubelet[1810]: E0702 08:03:57.603983 1810 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:03:57.607620 kubelet[1810]: W0702 08:03:57.607598 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.607684 kubelet[1810]: E0702 08:03:57.607677 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:57.610359 kubelet[1810]: I0702 08:03:57.610350 1810 policy_none.go:49] "None policy: Start" Jul 2 08:03:57.610753 kubelet[1810]: I0702 08:03:57.610746 1810 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 08:03:57.610807 kubelet[1810]: I0702 08:03:57.610800 1810 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:03:57.613695 systemd[1]: Created slice kubepods.slice. Jul 2 08:03:57.617446 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 08:03:57.619404 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 08:03:57.625400 kubelet[1810]: I0702 08:03:57.625382 1810 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:03:57.625540 kubelet[1810]: I0702 08:03:57.625528 1810 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:03:57.626644 kubelet[1810]: E0702 08:03:57.626480 1810 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 08:03:57.688270 kubelet[1810]: I0702 08:03:57.688253 1810 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:03:57.688607 kubelet[1810]: E0702 08:03:57.688598 1810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 2 08:03:57.704915 kubelet[1810]: I0702 08:03:57.704883 1810 topology_manager.go:215] "Topology Admit Handler" podUID="691e1a01f125ff2e238dce531c3178da" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:03:57.705804 kubelet[1810]: I0702 08:03:57.705794 1810 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:03:57.706445 kubelet[1810]: I0702 08:03:57.706436 1810 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:03:57.710433 systemd[1]: Created slice kubepods-burstable-pod691e1a01f125ff2e238dce531c3178da.slice. Jul 2 08:03:57.718843 systemd[1]: Created slice kubepods-burstable-podd27baad490d2d4f748c86b318d7d74ef.slice. Jul 2 08:03:57.725494 systemd[1]: Created slice kubepods-burstable-pod9c3207d669e00aa24ded52617c0d65d0.slice. 
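Every API call above fails with connection refused against 139.178.70.105:6443 because the kube-apiserver the kubelet is trying to reach is one of the static pods it is only now admitting; the kubelet simply retries node registration until the endpoint comes up. A toy wait loop in the same spirit; the address is the one from this log, the timing parameters are arbitrary, and the kubelet's real retry logic is of course its own:

import socket
import time

def wait_for_api_server(host: str, port: int, delay: float = 2.0, attempts: int = 30) -> bool:
    """Poll a TCP endpoint until it accepts connections (or we give up)."""
    for attempt in range(1, attempts + 1):
        try:
            with socket.create_connection((host, port), timeout=1.0):
                print(f"attempt {attempt}: {host}:{port} is accepting connections")
                return True
        except OSError as exc:  # e.g. ConnectionRefusedError while the static pod starts
            print(f"attempt {attempt}: {exc}")
            time.sleep(delay)
    return False

if __name__ == "__main__":
    # The address the kubelet keeps retrying in the log above.
    wait_for_api_server("139.178.70.105", 6443)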
Jul 2 08:03:57.787260 kubelet[1810]: I0702 08:03:57.787237 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 08:03:57.787386 kubelet[1810]: I0702 08:03:57.787378 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/691e1a01f125ff2e238dce531c3178da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"691e1a01f125ff2e238dce531c3178da\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:03:57.787445 kubelet[1810]: I0702 08:03:57.787437 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:03:57.787503 kubelet[1810]: I0702 08:03:57.787496 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:03:57.787562 kubelet[1810]: I0702 08:03:57.787554 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:03:57.787617 kubelet[1810]: I0702 08:03:57.787610 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:03:57.787682 kubelet[1810]: I0702 08:03:57.787674 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:03:57.787743 kubelet[1810]: I0702 08:03:57.787736 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/691e1a01f125ff2e238dce531c3178da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"691e1a01f125ff2e238dce531c3178da\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:03:57.787846 kubelet[1810]: I0702 08:03:57.787839 1810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/691e1a01f125ff2e238dce531c3178da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"691e1a01f125ff2e238dce531c3178da\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 08:03:57.788156 kubelet[1810]: E0702 08:03:57.788148 1810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="400ms" Jul 2 08:03:57.889637 kubelet[1810]: I0702 08:03:57.889579 1810 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:03:57.889913 kubelet[1810]: E0702 08:03:57.889903 1810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 2 08:03:58.019686 env[1252]: time="2024-07-02T08:03:58.019320083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:691e1a01f125ff2e238dce531c3178da,Namespace:kube-system,Attempt:0,}" Jul 2 08:03:58.025377 env[1252]: time="2024-07-02T08:03:58.025144562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 08:03:58.027535 env[1252]: time="2024-07-02T08:03:58.027513087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 08:03:58.189394 kubelet[1810]: E0702 08:03:58.189355 1810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="800ms" Jul 2 08:03:58.291680 kubelet[1810]: I0702 08:03:58.291659 1810 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:03:58.291964 kubelet[1810]: E0702 08:03:58.291950 1810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 2 08:03:58.415706 kubelet[1810]: W0702 08:03:58.415644 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:58.415706 kubelet[1810]: E0702 08:03:58.415683 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:58.433059 kubelet[1810]: W0702 08:03:58.433006 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:58.433059 kubelet[1810]: E0702 08:03:58.433045 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection 
refused Jul 2 08:03:58.448484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651303604.mount: Deactivated successfully. Jul 2 08:03:58.450087 env[1252]: time="2024-07-02T08:03:58.450061729Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.451414 env[1252]: time="2024-07-02T08:03:58.451398846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.453167 env[1252]: time="2024-07-02T08:03:58.453149655Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.455010 env[1252]: time="2024-07-02T08:03:58.454996123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.457250 env[1252]: time="2024-07-02T08:03:58.457236663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.459697 env[1252]: time="2024-07-02T08:03:58.459677813Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.460656 env[1252]: time="2024-07-02T08:03:58.460613255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.463991 env[1252]: time="2024-07-02T08:03:58.463915756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.466773 env[1252]: time="2024-07-02T08:03:58.466733905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.468198 env[1252]: time="2024-07-02T08:03:58.468179441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.471320 env[1252]: time="2024-07-02T08:03:58.471291581Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.474766 env[1252]: time="2024-07-02T08:03:58.472590669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:03:58.487488 env[1252]: time="2024-07-02T08:03:58.487365363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:03:58.487488 env[1252]: time="2024-07-02T08:03:58.487389411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:03:58.487488 env[1252]: time="2024-07-02T08:03:58.487396583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:03:58.487707 env[1252]: time="2024-07-02T08:03:58.487672403Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7fda78f142ac8084e9fe940963c830d6f25e0723bf619180bc4a9ea6075b50eb pid=1853 runtime=io.containerd.runc.v2 Jul 2 08:03:58.490630 env[1252]: time="2024-07-02T08:03:58.490248794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:03:58.490630 env[1252]: time="2024-07-02T08:03:58.490272560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:03:58.490630 env[1252]: time="2024-07-02T08:03:58.490279553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:03:58.490630 env[1252]: time="2024-07-02T08:03:58.490342144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f0136d2be7de87d027794fa23006575d63edbd9e4cbd674930380ba195380db pid=1855 runtime=io.containerd.runc.v2 Jul 2 08:03:58.498036 systemd[1]: Started cri-containerd-7fda78f142ac8084e9fe940963c830d6f25e0723bf619180bc4a9ea6075b50eb.scope. Jul 2 08:03:58.511275 systemd[1]: Started cri-containerd-9f0136d2be7de87d027794fa23006575d63edbd9e4cbd674930380ba195380db.scope. 
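Each RunPodSandbox above produces a containerd shim ("starting signal loop" with a path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id>) plus a transient systemd scope named cri-containerd-<sandbox-id>.scope. A rough sketch, assuming you are root on the node and those default paths, that cross-checks the two:

import subprocess
from pathlib import Path

# Default containerd runtime-v2 task directory for the k8s.io namespace (as seen in the log above).
TASK_DIR = Path("/run/containerd/io.containerd.runtime.v2.task/k8s.io")

def sandbox_ids() -> list[str]:
    if not TASK_DIR.is_dir():
        return []
    return [p.name for p in TASK_DIR.iterdir() if p.is_dir()]

def scope_active(sandbox_id: str) -> bool:
    unit = f"cri-containerd-{sandbox_id}.scope"
    result = subprocess.run(["systemctl", "is-active", "--quiet", unit])
    return result.returncode == 0

if __name__ == "__main__":
    for sid in sandbox_ids():
        print(f"{sid}: scope {'active' if scope_active(sid) else 'not active'}")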
Jul 2 08:03:58.623368 env[1252]: time="2024-07-02T08:03:58.539762288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:691e1a01f125ff2e238dce531c3178da,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fda78f142ac8084e9fe940963c830d6f25e0723bf619180bc4a9ea6075b50eb\"" Jul 2 08:03:58.623368 env[1252]: time="2024-07-02T08:03:58.548570774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f0136d2be7de87d027794fa23006575d63edbd9e4cbd674930380ba195380db\"" Jul 2 08:03:58.623494 kubelet[1810]: W0702 08:03:58.568012 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:58.623494 kubelet[1810]: E0702 08:03:58.568052 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:58.624977 env[1252]: time="2024-07-02T08:03:58.624953670Z" level=info msg="CreateContainer within sandbox \"9f0136d2be7de87d027794fa23006575d63edbd9e4cbd674930380ba195380db\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:03:58.625232 env[1252]: time="2024-07-02T08:03:58.625102911Z" level=info msg="CreateContainer within sandbox \"7fda78f142ac8084e9fe940963c830d6f25e0723bf619180bc4a9ea6075b50eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:03:58.663549 env[1252]: time="2024-07-02T08:03:58.663512224Z" level=info msg="CreateContainer within sandbox \"7fda78f142ac8084e9fe940963c830d6f25e0723bf619180bc4a9ea6075b50eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b12233c075fecb83de3a83ed33b76c7f517cc9cd6859ad8ca362b97a1e96cb15\"" Jul 2 08:03:58.664368 env[1252]: time="2024-07-02T08:03:58.664352729Z" level=info msg="CreateContainer within sandbox \"9f0136d2be7de87d027794fa23006575d63edbd9e4cbd674930380ba195380db\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06c32ac36923ce055af69427034bd52ad6c061f1f09bc020eab6f8e255ed7b66\"" Jul 2 08:03:58.664837 env[1252]: time="2024-07-02T08:03:58.664823137Z" level=info msg="StartContainer for \"b12233c075fecb83de3a83ed33b76c7f517cc9cd6859ad8ca362b97a1e96cb15\"" Jul 2 08:03:58.667825 env[1252]: time="2024-07-02T08:03:58.667803818Z" level=info msg="StartContainer for \"06c32ac36923ce055af69427034bd52ad6c061f1f09bc020eab6f8e255ed7b66\"" Jul 2 08:03:58.670594 env[1252]: time="2024-07-02T08:03:58.660097200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:03:58.670594 env[1252]: time="2024-07-02T08:03:58.660122228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:03:58.670594 env[1252]: time="2024-07-02T08:03:58.660129243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:03:58.670594 env[1252]: time="2024-07-02T08:03:58.660469793Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4fcba52ce288a82240125744434647aabae102b73cfcd38663083a310faf178 pid=1935 runtime=io.containerd.runc.v2 Jul 2 08:03:58.677887 systemd[1]: Started cri-containerd-b12233c075fecb83de3a83ed33b76c7f517cc9cd6859ad8ca362b97a1e96cb15.scope. Jul 2 08:03:58.691832 systemd[1]: Started cri-containerd-a4fcba52ce288a82240125744434647aabae102b73cfcd38663083a310faf178.scope. Jul 2 08:03:58.696200 systemd[1]: Started cri-containerd-06c32ac36923ce055af69427034bd52ad6c061f1f09bc020eab6f8e255ed7b66.scope. Jul 2 08:03:58.723202 env[1252]: time="2024-07-02T08:03:58.723146291Z" level=info msg="StartContainer for \"b12233c075fecb83de3a83ed33b76c7f517cc9cd6859ad8ca362b97a1e96cb15\" returns successfully" Jul 2 08:03:58.737136 env[1252]: time="2024-07-02T08:03:58.737105331Z" level=info msg="StartContainer for \"06c32ac36923ce055af69427034bd52ad6c061f1f09bc020eab6f8e255ed7b66\" returns successfully" Jul 2 08:03:58.750793 env[1252]: time="2024-07-02T08:03:58.750756507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4fcba52ce288a82240125744434647aabae102b73cfcd38663083a310faf178\"" Jul 2 08:03:58.757983 env[1252]: time="2024-07-02T08:03:58.757960638Z" level=info msg="CreateContainer within sandbox \"a4fcba52ce288a82240125744434647aabae102b73cfcd38663083a310faf178\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:03:58.765019 env[1252]: time="2024-07-02T08:03:58.764988211Z" level=info msg="CreateContainer within sandbox \"a4fcba52ce288a82240125744434647aabae102b73cfcd38663083a310faf178\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14f40708dc9b6aa19838d3fc564cbe3b804ba6d7e73a51414679a4668b24ba3c\"" Jul 2 08:03:58.765483 env[1252]: time="2024-07-02T08:03:58.765462454Z" level=info msg="StartContainer for \"14f40708dc9b6aa19838d3fc564cbe3b804ba6d7e73a51414679a4668b24ba3c\"" Jul 2 08:03:58.776263 systemd[1]: Started cri-containerd-14f40708dc9b6aa19838d3fc564cbe3b804ba6d7e73a51414679a4668b24ba3c.scope. 
Jul 2 08:03:58.817932 env[1252]: time="2024-07-02T08:03:58.817896250Z" level=info msg="StartContainer for \"14f40708dc9b6aa19838d3fc564cbe3b804ba6d7e73a51414679a4668b24ba3c\" returns successfully" Jul 2 08:03:58.880745 kubelet[1810]: W0702 08:03:58.880706 1810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:58.880745 kubelet[1810]: E0702 08:03:58.880748 1810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.105:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.105:6443: connect: connection refused Jul 2 08:03:58.990718 kubelet[1810]: E0702 08:03:58.990648 1810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.105:6443: connect: connection refused" interval="1.6s" Jul 2 08:03:59.093074 kubelet[1810]: I0702 08:03:59.093052 1810 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:03:59.093288 kubelet[1810]: E0702 08:03:59.093272 1810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.105:6443/api/v1/nodes\": dial tcp 139.178.70.105:6443: connect: connection refused" node="localhost" Jul 2 08:04:00.675036 kubelet[1810]: E0702 08:04:00.675010 1810 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 08:04:00.694962 kubelet[1810]: I0702 08:04:00.694943 1810 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:04:00.701553 kubelet[1810]: I0702 08:04:00.701530 1810 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 08:04:01.569881 kubelet[1810]: I0702 08:04:01.569860 1810 apiserver.go:52] "Watching apiserver" Jul 2 08:04:01.586804 kubelet[1810]: I0702 08:04:01.586770 1810 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:04:03.430740 systemd[1]: Reloading. Jul 2 08:04:03.488714 /usr/lib/systemd/system-generators/torcx-generator[2100]: time="2024-07-02T08:04:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:04:03.488731 /usr/lib/systemd/system-generators/torcx-generator[2100]: time="2024-07-02T08:04:03Z" level=info msg="torcx already run" Jul 2 08:04:03.548256 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:04:03.548270 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:04:03.566014 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:04:03.651148 systemd[1]: Stopping kubelet.service... 
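The "Failed to ensure lease exists, will retry" errors step their retry interval from 200ms to 400ms, 800ms and then 1.6s, a plain doubling backoff. The progression, reproduced as a short sketch (the number of doublings is whatever you ask for; this log only shows the first four):

def lease_retry_intervals(start: float = 0.2, doublings: int = 4) -> list[float]:
    """Doubling backoff as observed in the log: 0.2s, 0.4s, 0.8s, 1.6s, ..."""
    return [start * (2 ** i) for i in range(doublings)]

print([f"{t:g}s" for t in lease_retry_intervals()])
# -> ['0.2s', '0.4s', '0.8s', '1.6s']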
Jul 2 08:04:03.661362 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:04:03.661531 systemd[1]: Stopped kubelet.service. Jul 2 08:04:03.663209 systemd[1]: Starting kubelet.service... Jul 2 08:04:05.403035 systemd[1]: Started kubelet.service. Jul 2 08:04:05.685141 kubelet[2164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:04:05.685141 kubelet[2164]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:04:05.685141 kubelet[2164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:04:05.691289 kubelet[2164]: I0702 08:04:05.691257 2164 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:04:05.698107 kubelet[2164]: I0702 08:04:05.698092 2164 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 08:04:05.698224 kubelet[2164]: I0702 08:04:05.698217 2164 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:04:05.698382 kubelet[2164]: I0702 08:04:05.698374 2164 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 08:04:05.699329 kubelet[2164]: I0702 08:04:05.699320 2164 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:04:05.712496 kubelet[2164]: I0702 08:04:05.712474 2164 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:04:05.715582 kubelet[2164]: I0702 08:04:05.715569 2164 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:04:05.715686 kubelet[2164]: I0702 08:04:05.715674 2164 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:04:05.715782 kubelet[2164]: I0702 08:04:05.715770 2164 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:04:05.715782 kubelet[2164]: I0702 08:04:05.715783 2164 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:04:05.715882 kubelet[2164]: I0702 08:04:05.715789 2164 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:04:05.715882 kubelet[2164]: I0702 08:04:05.715814 2164 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:04:05.715882 kubelet[2164]: I0702 08:04:05.715864 2164 kubelet.go:393] "Attempting to sync node with API server" Jul 2 08:04:05.715882 kubelet[2164]: I0702 08:04:05.715871 2164 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:04:05.715979 kubelet[2164]: I0702 08:04:05.715886 2164 kubelet.go:309] "Adding apiserver pod source" Jul 2 08:04:05.715979 kubelet[2164]: I0702 08:04:05.715898 2164 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:04:05.716381 kubelet[2164]: I0702 08:04:05.716371 2164 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 08:04:05.716799 kubelet[2164]: I0702 08:04:05.716788 2164 server.go:1232] "Started kubelet" Jul 2 08:04:05.721980 kubelet[2164]: I0702 08:04:05.721965 2164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:04:05.723871 kubelet[2164]: I0702 08:04:05.723862 2164 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:04:05.724566 kubelet[2164]: I0702 08:04:05.724559 2164 server.go:462] "Adding debug handlers to kubelet server" Jul 2 08:04:05.725301 kubelet[2164]: I0702 08:04:05.725293 2164 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 08:04:05.725441 kubelet[2164]: I0702 08:04:05.725434 2164 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:04:05.726085 kubelet[2164]: I0702 08:04:05.726072 2164 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:04:05.730074 kubelet[2164]: I0702 08:04:05.728225 2164 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:04:05.730074 kubelet[2164]: I0702 08:04:05.728314 2164 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:04:05.730074 kubelet[2164]: E0702 08:04:05.728933 2164 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 08:04:05.730074 kubelet[2164]: E0702 08:04:05.728945 2164 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:04:05.741901 kubelet[2164]: I0702 08:04:05.741879 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:04:05.742482 kubelet[2164]: I0702 08:04:05.742468 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:04:05.742482 kubelet[2164]: I0702 08:04:05.742480 2164 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:04:05.742538 kubelet[2164]: I0702 08:04:05.742491 2164 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 08:04:05.742538 kubelet[2164]: E0702 08:04:05.742530 2164 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:04:05.796144 kubelet[2164]: I0702 08:04:05.796123 2164 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:04:05.796144 kubelet[2164]: I0702 08:04:05.796142 2164 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:04:05.796253 kubelet[2164]: I0702 08:04:05.796156 2164 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:04:05.796281 kubelet[2164]: I0702 08:04:05.796270 2164 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:04:05.796302 kubelet[2164]: I0702 08:04:05.796288 2164 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:04:05.796302 kubelet[2164]: I0702 08:04:05.796295 2164 policy_none.go:49] "None policy: Start" Jul 2 08:04:05.796763 kubelet[2164]: I0702 08:04:05.796749 2164 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 08:04:05.796787 kubelet[2164]: I0702 08:04:05.796767 2164 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:04:05.796901 kubelet[2164]: I0702 08:04:05.796889 2164 state_mem.go:75] "Updated machine memory state" Jul 2 08:04:05.799336 kubelet[2164]: I0702 08:04:05.799320 2164 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:04:05.799490 kubelet[2164]: I0702 08:04:05.799477 2164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:04:05.827891 kubelet[2164]: I0702 08:04:05.827864 2164 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 08:04:05.833326 sudo[2193]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 08:04:05.833496 sudo[2193]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 08:04:05.843149 kubelet[2164]: I0702 08:04:05.843130 2164 topology_manager.go:215] "Topology Admit Handler" 
podUID="691e1a01f125ff2e238dce531c3178da" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:04:05.843307 kubelet[2164]: I0702 08:04:05.843299 2164 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:04:05.844088 kubelet[2164]: I0702 08:04:05.844079 2164 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:04:05.860398 kubelet[2164]: I0702 08:04:05.860382 2164 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 08:04:05.860555 kubelet[2164]: I0702 08:04:05.860549 2164 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 08:04:05.875301 kubelet[2164]: E0702 08:04:05.874959 2164 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 08:04:05.930175 kubelet[2164]: I0702 08:04:05.930154 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/691e1a01f125ff2e238dce531c3178da-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"691e1a01f125ff2e238dce531c3178da\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:04:05.930305 kubelet[2164]: I0702 08:04:05.930297 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:04:05.930365 kubelet[2164]: I0702 08:04:05.930358 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:04:05.930429 kubelet[2164]: I0702 08:04:05.930422 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 08:04:05.930483 kubelet[2164]: I0702 08:04:05.930476 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/691e1a01f125ff2e238dce531c3178da-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"691e1a01f125ff2e238dce531c3178da\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:04:05.930543 kubelet[2164]: I0702 08:04:05.930536 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/691e1a01f125ff2e238dce531c3178da-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"691e1a01f125ff2e238dce531c3178da\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:04:05.930598 kubelet[2164]: I0702 08:04:05.930591 2164 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:04:05.930658 kubelet[2164]: I0702 08:04:05.930652 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:04:05.930714 kubelet[2164]: I0702 08:04:05.930707 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:04:06.727735 kubelet[2164]: I0702 08:04:06.727710 2164 apiserver.go:52] "Watching apiserver" Jul 2 08:04:06.786362 kubelet[2164]: E0702 08:04:06.786338 2164 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 08:04:06.802001 kubelet[2164]: I0702 08:04:06.801979 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.797599622 podCreationTimestamp="2024-07-02 08:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:04:06.786654405 +0000 UTC m=+1.359271042" watchObservedRunningTime="2024-07-02 08:04:06.797599622 +0000 UTC m=+1.370216261" Jul 2 08:04:06.802469 kubelet[2164]: I0702 08:04:06.802458 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.802440361 podCreationTimestamp="2024-07-02 08:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:04:06.802296565 +0000 UTC m=+1.374913205" watchObservedRunningTime="2024-07-02 08:04:06.802440361 +0000 UTC m=+1.375056999" Jul 2 08:04:06.822557 sudo[2193]: pam_unix(sudo:session): session closed for user root Jul 2 08:04:06.826886 kubelet[2164]: I0702 08:04:06.826869 2164 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:04:06.835822 kubelet[2164]: I0702 08:04:06.835806 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8357840890000001 podCreationTimestamp="2024-07-02 08:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:04:06.821732313 +0000 UTC m=+1.394348958" watchObservedRunningTime="2024-07-02 08:04:06.835784089 +0000 UTC m=+1.408400726" Jul 2 08:04:08.462314 sudo[1451]: pam_unix(sudo:session): session closed for user root Jul 2 08:04:08.464030 sshd[1447]: pam_unix(sshd:session): session closed for user core Jul 2 08:04:08.465851 systemd[1]: sshd@4-139.178.70.105:22-139.178.68.195:37340.service: Deactivated successfully. 
Jul 2 08:04:08.466425 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:04:08.466540 systemd[1]: session-7.scope: Consumed 2.815s CPU time. Jul 2 08:04:08.467427 systemd-logind[1242]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:04:08.468286 systemd-logind[1242]: Removed session 7. Jul 2 08:04:15.760711 kubelet[2164]: I0702 08:04:15.760693 2164 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:04:15.761133 env[1252]: time="2024-07-02T08:04:15.761111805Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 08:04:15.761273 kubelet[2164]: I0702 08:04:15.761207 2164 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:04:16.278477 kubelet[2164]: I0702 08:04:16.278454 2164 topology_manager.go:215] "Topology Admit Handler" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" podNamespace="kube-system" podName="cilium-6mjq7" Jul 2 08:04:16.292401 kubelet[2164]: I0702 08:04:16.292378 2164 topology_manager.go:215] "Topology Admit Handler" podUID="b9a26602-cbe7-424d-a156-cd23363c8d41" podNamespace="kube-system" podName="kube-proxy-kx82g" Jul 2 08:04:16.296056 systemd[1]: Created slice kubepods-burstable-pod1162624f_bd0d_4121_a8f7_9612f6c8d305.slice. Jul 2 08:04:16.299789 systemd[1]: Created slice kubepods-besteffort-podb9a26602_cbe7_424d_a156_cd23363c8d41.slice. Jul 2 08:04:16.394375 kubelet[2164]: I0702 08:04:16.394356 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cni-path\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394525 kubelet[2164]: I0702 08:04:16.394517 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfzrz\" (UniqueName: \"kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-kube-api-access-dfzrz\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394587 kubelet[2164]: I0702 08:04:16.394579 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-bpf-maps\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394649 kubelet[2164]: I0702 08:04:16.394641 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1162624f-bd0d-4121-a8f7-9612f6c8d305-clustermesh-secrets\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394715 kubelet[2164]: I0702 08:04:16.394708 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-lib-modules\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394777 kubelet[2164]: I0702 08:04:16.394770 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-hubble-tls\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394839 kubelet[2164]: I0702 08:04:16.394832 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-hostproc\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394915 kubelet[2164]: I0702 08:04:16.394899 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-xtables-lock\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394959 kubelet[2164]: I0702 08:04:16.394939 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkcg9\" (UniqueName: \"kubernetes.io/projected/b9a26602-cbe7-424d-a156-cd23363c8d41-kube-api-access-lkcg9\") pod \"kube-proxy-kx82g\" (UID: \"b9a26602-cbe7-424d-a156-cd23363c8d41\") " pod="kube-system/kube-proxy-kx82g" Jul 2 08:04:16.394985 kubelet[2164]: I0702 08:04:16.394965 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-net\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.394985 kubelet[2164]: I0702 08:04:16.394976 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9a26602-cbe7-424d-a156-cd23363c8d41-kube-proxy\") pod \"kube-proxy-kx82g\" (UID: \"b9a26602-cbe7-424d-a156-cd23363c8d41\") " pod="kube-system/kube-proxy-kx82g" Jul 2 08:04:16.395026 kubelet[2164]: I0702 08:04:16.394992 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9a26602-cbe7-424d-a156-cd23363c8d41-xtables-lock\") pod \"kube-proxy-kx82g\" (UID: \"b9a26602-cbe7-424d-a156-cd23363c8d41\") " pod="kube-system/kube-proxy-kx82g" Jul 2 08:04:16.395026 kubelet[2164]: I0702 08:04:16.395015 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-run\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.395066 kubelet[2164]: I0702 08:04:16.395028 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-kernel\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.395066 kubelet[2164]: I0702 08:04:16.395039 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-etc-cni-netd\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " 
pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.395066 kubelet[2164]: I0702 08:04:16.395051 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9a26602-cbe7-424d-a156-cd23363c8d41-lib-modules\") pod \"kube-proxy-kx82g\" (UID: \"b9a26602-cbe7-424d-a156-cd23363c8d41\") " pod="kube-system/kube-proxy-kx82g" Jul 2 08:04:16.395066 kubelet[2164]: I0702 08:04:16.395064 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-cgroup\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.395142 kubelet[2164]: I0702 08:04:16.395075 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-config-path\") pod \"cilium-6mjq7\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " pod="kube-system/cilium-6mjq7" Jul 2 08:04:16.581264 kubelet[2164]: E0702 08:04:16.581201 2164 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 08:04:16.581264 kubelet[2164]: E0702 08:04:16.581229 2164 projected.go:198] Error preparing data for projected volume kube-api-access-dfzrz for pod kube-system/cilium-6mjq7: configmap "kube-root-ca.crt" not found Jul 2 08:04:16.586360 kubelet[2164]: E0702 08:04:16.586338 2164 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 08:04:16.586462 kubelet[2164]: E0702 08:04:16.586454 2164 projected.go:198] Error preparing data for projected volume kube-api-access-lkcg9 for pod kube-system/kube-proxy-kx82g: configmap "kube-root-ca.crt" not found Jul 2 08:04:16.591627 kubelet[2164]: E0702 08:04:16.591599 2164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-kube-api-access-dfzrz podName:1162624f-bd0d-4121-a8f7-9612f6c8d305 nodeName:}" failed. No retries permitted until 2024-07-02 08:04:17.081267718 +0000 UTC m=+11.653884355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dfzrz" (UniqueName: "kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-kube-api-access-dfzrz") pod "cilium-6mjq7" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305") : configmap "kube-root-ca.crt" not found Jul 2 08:04:16.591749 kubelet[2164]: E0702 08:04:16.591636 2164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b9a26602-cbe7-424d-a156-cd23363c8d41-kube-api-access-lkcg9 podName:b9a26602-cbe7-424d-a156-cd23363c8d41 nodeName:}" failed. No retries permitted until 2024-07-02 08:04:17.091619799 +0000 UTC m=+11.664236436 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lkcg9" (UniqueName: "kubernetes.io/projected/b9a26602-cbe7-424d-a156-cd23363c8d41-kube-api-access-lkcg9") pod "kube-proxy-kx82g" (UID: "b9a26602-cbe7-424d-a156-cd23363c8d41") : configmap "kube-root-ca.crt" not found Jul 2 08:04:16.750314 kubelet[2164]: I0702 08:04:16.750291 2164 topology_manager.go:215] "Topology Admit Handler" podUID="3955e3d1-a9af-465e-b011-9c56a5877900" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-frz2g" Jul 2 08:04:16.754370 systemd[1]: Created slice kubepods-besteffort-pod3955e3d1_a9af_465e_b011_9c56a5877900.slice. Jul 2 08:04:16.798315 kubelet[2164]: I0702 08:04:16.798294 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3955e3d1-a9af-465e-b011-9c56a5877900-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-frz2g\" (UID: \"3955e3d1-a9af-465e-b011-9c56a5877900\") " pod="kube-system/cilium-operator-6bc8ccdb58-frz2g" Jul 2 08:04:16.798556 kubelet[2164]: I0702 08:04:16.798546 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kph6z\" (UniqueName: \"kubernetes.io/projected/3955e3d1-a9af-465e-b011-9c56a5877900-kube-api-access-kph6z\") pod \"cilium-operator-6bc8ccdb58-frz2g\" (UID: \"3955e3d1-a9af-465e-b011-9c56a5877900\") " pod="kube-system/cilium-operator-6bc8ccdb58-frz2g" Jul 2 08:04:17.058600 env[1252]: time="2024-07-02T08:04:17.058520135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-frz2g,Uid:3955e3d1-a9af-465e-b011-9c56a5877900,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:17.069697 env[1252]: time="2024-07-02T08:04:17.069525116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:17.069697 env[1252]: time="2024-07-02T08:04:17.069559108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:17.069697 env[1252]: time="2024-07-02T08:04:17.069567245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:17.069899 env[1252]: time="2024-07-02T08:04:17.069726914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51 pid=2241 runtime=io.containerd.runc.v2 Jul 2 08:04:17.086572 systemd[1]: Started cri-containerd-c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51.scope. 
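The MountVolume.SetUp failures above are parked by the kubelet's nestedpendingoperations with a retry delay; the log records both the delay (durationBeforeRetry 500ms on these first attempts) and the absolute time before which no retry is permitted. A small sketch, again assuming journal text in the format shown is piped in on stdin, that lists which volume of which pod is blocked and why:

```python
#!/usr/bin/env python3
"""Summarise kubelet volume-mount retries from journal text on stdin."""
import re
import sys

# Matches entries of the form seen above:
#   ... (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume
#   "kube-api-access-dfzrz" (...) pod "cilium-6mjq7" (UID: "...") : configmap
#   "kube-root-ca.crt" not found
PATTERN = re.compile(
    r'\(durationBeforeRetry (?P<delay>\S+?)\)\.\s*'
    r'Error: MountVolume\.SetUp failed for volume "(?P<volume>[^"]+)"'
    r'.*?pod "(?P<pod>[^"]+)" \(UID: "[^"]*"\) : '
    r'(?P<reason>.*?)(?= Jul \d|\n|$)',
    re.DOTALL,
)

def main() -> None:
    text = sys.stdin.read()
    for m in PATTERN.finditer(text):
        print(f'volume {m.group("volume")!r} of pod {m.group("pod")!r}: '
              f'retry in {m.group("delay")} - {m.group("reason").strip()}')

if __name__ == "__main__":
    main()
```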
Jul 2 08:04:17.117507 env[1252]: time="2024-07-02T08:04:17.117474171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-frz2g,Uid:3955e3d1-a9af-465e-b011-9c56a5877900,Namespace:kube-system,Attempt:0,} returns sandbox id \"c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51\"" Jul 2 08:04:17.119194 env[1252]: time="2024-07-02T08:04:17.119174969Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:04:17.207154 env[1252]: time="2024-07-02T08:04:17.207114027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kx82g,Uid:b9a26602-cbe7-424d-a156-cd23363c8d41,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:17.208192 env[1252]: time="2024-07-02T08:04:17.208162682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mjq7,Uid:1162624f-bd0d-4121-a8f7-9612f6c8d305,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:17.216901 env[1252]: time="2024-07-02T08:04:17.216759753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:17.216901 env[1252]: time="2024-07-02T08:04:17.216798456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:17.216901 env[1252]: time="2024-07-02T08:04:17.216810121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:17.217140 env[1252]: time="2024-07-02T08:04:17.217107914Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9fd9022db46d164200c2016275d0c7f1be63b66a934959053bb06a57cfd68610 pid=2283 runtime=io.containerd.runc.v2 Jul 2 08:04:17.218232 env[1252]: time="2024-07-02T08:04:17.218198041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:17.218301 env[1252]: time="2024-07-02T08:04:17.218222441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:17.218301 env[1252]: time="2024-07-02T08:04:17.218230380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:17.218383 env[1252]: time="2024-07-02T08:04:17.218297001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47 pid=2295 runtime=io.containerd.runc.v2 Jul 2 08:04:17.225801 systemd[1]: Started cri-containerd-7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47.scope. Jul 2 08:04:17.233383 systemd[1]: Started cri-containerd-9fd9022db46d164200c2016275d0c7f1be63b66a934959053bb06a57cfd68610.scope. 
Jul 2 08:04:17.250184 env[1252]: time="2024-07-02T08:04:17.250142472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6mjq7,Uid:1162624f-bd0d-4121-a8f7-9612f6c8d305,Namespace:kube-system,Attempt:0,} returns sandbox id \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\"" Jul 2 08:04:17.266242 env[1252]: time="2024-07-02T08:04:17.266211796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kx82g,Uid:b9a26602-cbe7-424d-a156-cd23363c8d41,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fd9022db46d164200c2016275d0c7f1be63b66a934959053bb06a57cfd68610\"" Jul 2 08:04:17.268945 env[1252]: time="2024-07-02T08:04:17.268909184Z" level=info msg="CreateContainer within sandbox \"9fd9022db46d164200c2016275d0c7f1be63b66a934959053bb06a57cfd68610\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:04:17.312113 env[1252]: time="2024-07-02T08:04:17.311482502Z" level=info msg="CreateContainer within sandbox \"9fd9022db46d164200c2016275d0c7f1be63b66a934959053bb06a57cfd68610\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"336688dc1379f280f681f294e09ff743c8559157cd1fb71930e043c58cc3e24d\"" Jul 2 08:04:17.313357 env[1252]: time="2024-07-02T08:04:17.312470373Z" level=info msg="StartContainer for \"336688dc1379f280f681f294e09ff743c8559157cd1fb71930e043c58cc3e24d\"" Jul 2 08:04:17.323801 systemd[1]: Started cri-containerd-336688dc1379f280f681f294e09ff743c8559157cd1fb71930e043c58cc3e24d.scope. Jul 2 08:04:17.356493 env[1252]: time="2024-07-02T08:04:17.356152869Z" level=info msg="StartContainer for \"336688dc1379f280f681f294e09ff743c8559157cd1fb71930e043c58cc3e24d\" returns successfully" Jul 2 08:04:18.329132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728518786.mount: Deactivated successfully. 
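The `var-lib-containerd-tmpmounts-containerd\x2dmount….mount` units that keep being deactivated are transient systemd mount units; per systemd's unit-name escaping rules, "/" in the mount point becomes "-" and a literal "-" is written as "\x2d". A minimal decoder (an illustration, not a tool referenced by the log):

```python
#!/usr/bin/env python3
"""Decode a systemd mount-unit name back into its mount-point path.

Example from the journal above:
  var-lib-containerd-tmpmounts-containerd\x2dmount1728518786.mount
  -> /var/lib/containerd/tmpmounts/containerd-mount1728518786
"""
import re

def mount_unit_to_path(unit: str) -> str:
    name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
    if name == "-":                      # the root filesystem is the special case "-.mount"
        return "/"
    # "-" separates path components; do this first so the literal "\x2d" escapes survive.
    name = name.replace("-", "/")
    # Then expand "\xNN" escapes (e.g. \x2d -> "-") back into characters.
    name = re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), name)
    return "/" + name

if __name__ == "__main__":
    print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1728518786.mount"))
```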
Jul 2 08:04:18.842086 env[1252]: time="2024-07-02T08:04:18.842057351Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:18.842862 env[1252]: time="2024-07-02T08:04:18.842839201Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:18.843968 env[1252]: time="2024-07-02T08:04:18.843952385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:18.844432 env[1252]: time="2024-07-02T08:04:18.844415444Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:04:18.845040 env[1252]: time="2024-07-02T08:04:18.845026139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:04:18.847552 env[1252]: time="2024-07-02T08:04:18.846273309Z" level=info msg="CreateContainer within sandbox \"c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:04:18.854387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743214926.mount: Deactivated successfully. Jul 2 08:04:18.858462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378283235.mount: Deactivated successfully. Jul 2 08:04:18.865795 env[1252]: time="2024-07-02T08:04:18.865765658Z" level=info msg="CreateContainer within sandbox \"c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\"" Jul 2 08:04:18.867372 env[1252]: time="2024-07-02T08:04:18.867355970Z" level=info msg="StartContainer for \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\"" Jul 2 08:04:18.889220 systemd[1]: Started cri-containerd-94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c.scope. 
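The operator image above is pulled by tag plus digest (`quay.io/cilium/operator-generic:v1.12.5@sha256:…`) and containerd reports back the resolved image ID. A tiny helper, illustrative only and deliberately simpler than real registry-reference parsers, that splits such a reference into name, tag, and digest:

```python
#!/usr/bin/env python3
"""Split an OCI image reference of the form name[:tag][@digest] into its parts."""

def split_image_ref(ref: str):
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    # A ":" after the last "/" is the tag separator; an earlier ":" belongs to a
    # registry host:port (e.g. localhost:5000/image).
    name, tag = ref, None
    colon = ref.rfind(":")
    if colon > ref.rfind("/"):
        name, tag = ref[:colon], ref[colon + 1:]
    return name, tag, digest

if __name__ == "__main__":
    ref = ("quay.io/cilium/operator-generic:v1.12.5"
           "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
    print(split_image_ref(ref))
```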
Jul 2 08:04:18.992108 env[1252]: time="2024-07-02T08:04:18.991621905Z" level=info msg="StartContainer for \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\" returns successfully" Jul 2 08:04:20.113828 kubelet[2164]: I0702 08:04:20.113629 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kx82g" podStartSLOduration=4.105092909 podCreationTimestamp="2024-07-02 08:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:04:17.804192094 +0000 UTC m=+12.376808749" watchObservedRunningTime="2024-07-02 08:04:20.105092909 +0000 UTC m=+14.677709548" Jul 2 08:04:20.113828 kubelet[2164]: I0702 08:04:20.113720 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-frz2g" podStartSLOduration=2.387202938 podCreationTimestamp="2024-07-02 08:04:16 +0000 UTC" firstStartedPulling="2024-07-02 08:04:17.118248798 +0000 UTC m=+11.690865434" lastFinishedPulling="2024-07-02 08:04:18.844742249 +0000 UTC m=+13.417358890" observedRunningTime="2024-07-02 08:04:20.104947545 +0000 UTC m=+14.677564196" watchObservedRunningTime="2024-07-02 08:04:20.113696394 +0000 UTC m=+14.686313032" Jul 2 08:04:22.835203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4165424983.mount: Deactivated successfully. Jul 2 08:04:25.715250 env[1252]: time="2024-07-02T08:04:25.715194026Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:25.757187 env[1252]: time="2024-07-02T08:04:25.756967046Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:25.770041 env[1252]: time="2024-07-02T08:04:25.770023791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:04:25.770513 env[1252]: time="2024-07-02T08:04:25.770495650Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 08:04:25.771643 env[1252]: time="2024-07-02T08:04:25.771627678Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:04:25.804613 env[1252]: time="2024-07-02T08:04:25.804589703Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\"" Jul 2 08:04:25.805012 env[1252]: time="2024-07-02T08:04:25.804994854Z" level=info msg="StartContainer for \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\"" Jul 2 08:04:25.820886 systemd[1]: Started cri-containerd-35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78.scope. 
Jul 2 08:04:25.849128 env[1252]: time="2024-07-02T08:04:25.849103291Z" level=info msg="StartContainer for \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\" returns successfully" Jul 2 08:04:25.854429 systemd[1]: cri-containerd-35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78.scope: Deactivated successfully. Jul 2 08:04:25.875452 env[1252]: time="2024-07-02T08:04:25.875417515Z" level=info msg="shim disconnected" id=35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78 Jul 2 08:04:25.875452 env[1252]: time="2024-07-02T08:04:25.875448726Z" level=warning msg="cleaning up after shim disconnected" id=35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78 namespace=k8s.io Jul 2 08:04:25.875452 env[1252]: time="2024-07-02T08:04:25.875455661Z" level=info msg="cleaning up dead shim" Jul 2 08:04:25.881003 env[1252]: time="2024-07-02T08:04:25.880978724Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2601 runtime=io.containerd.runc.v2\n" Jul 2 08:04:26.790889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78-rootfs.mount: Deactivated successfully. Jul 2 08:04:26.817597 env[1252]: time="2024-07-02T08:04:26.816947268Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:04:26.825221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860630197.mount: Deactivated successfully. Jul 2 08:04:26.828905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769932806.mount: Deactivated successfully. Jul 2 08:04:26.831782 env[1252]: time="2024-07-02T08:04:26.830897055Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\"" Jul 2 08:04:26.831782 env[1252]: time="2024-07-02T08:04:26.831347874Z" level=info msg="StartContainer for \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\"" Jul 2 08:04:26.845526 systemd[1]: Started cri-containerd-8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce.scope. Jul 2 08:04:26.866564 env[1252]: time="2024-07-02T08:04:26.866531521Z" level=info msg="StartContainer for \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\" returns successfully" Jul 2 08:04:26.882407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:04:26.882609 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:04:26.883518 systemd[1]: Stopping systemd-sysctl.service... Jul 2 08:04:26.886437 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:04:26.886948 systemd[1]: cri-containerd-8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce.scope: Deactivated successfully. Jul 2 08:04:26.931810 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 08:04:26.945079 env[1252]: time="2024-07-02T08:04:26.945042867Z" level=info msg="shim disconnected" id=8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce Jul 2 08:04:26.945213 env[1252]: time="2024-07-02T08:04:26.945201077Z" level=warning msg="cleaning up after shim disconnected" id=8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce namespace=k8s.io Jul 2 08:04:26.945269 env[1252]: time="2024-07-02T08:04:26.945255328Z" level=info msg="cleaning up dead shim" Jul 2 08:04:26.950191 env[1252]: time="2024-07-02T08:04:26.950166692Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2663 runtime=io.containerd.runc.v2\n" Jul 2 08:04:27.819477 env[1252]: time="2024-07-02T08:04:27.819453509Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:04:27.831011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2364458262.mount: Deactivated successfully. Jul 2 08:04:27.834735 env[1252]: time="2024-07-02T08:04:27.834709628Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\"" Jul 2 08:04:27.835556 env[1252]: time="2024-07-02T08:04:27.835214820Z" level=info msg="StartContainer for \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\"" Jul 2 08:04:27.848174 systemd[1]: Started cri-containerd-3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88.scope. Jul 2 08:04:27.864658 env[1252]: time="2024-07-02T08:04:27.864631505Z" level=info msg="StartContainer for \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\" returns successfully" Jul 2 08:04:27.875452 systemd[1]: cri-containerd-3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88.scope: Deactivated successfully. Jul 2 08:04:27.886827 env[1252]: time="2024-07-02T08:04:27.886797243Z" level=info msg="shim disconnected" id=3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88 Jul 2 08:04:27.886827 env[1252]: time="2024-07-02T08:04:27.886824066Z" level=warning msg="cleaning up after shim disconnected" id=3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88 namespace=k8s.io Jul 2 08:04:27.886993 env[1252]: time="2024-07-02T08:04:27.886831922Z" level=info msg="cleaning up dead shim" Jul 2 08:04:27.894005 env[1252]: time="2024-07-02T08:04:27.893975997Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2720 runtime=io.containerd.runc.v2\n" Jul 2 08:04:28.821545 env[1252]: time="2024-07-02T08:04:28.821515781Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:04:28.985104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587807096.mount: Deactivated successfully. Jul 2 08:04:28.989163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945287851.mount: Deactivated successfully. 
Jul 2 08:04:29.027131 env[1252]: time="2024-07-02T08:04:29.027104871Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\"" Jul 2 08:04:29.028362 env[1252]: time="2024-07-02T08:04:29.028346683Z" level=info msg="StartContainer for \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\"" Jul 2 08:04:29.047342 systemd[1]: Started cri-containerd-78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a.scope. Jul 2 08:04:29.065423 systemd[1]: cri-containerd-78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a.scope: Deactivated successfully. Jul 2 08:04:29.081036 env[1252]: time="2024-07-02T08:04:29.066474625Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1162624f_bd0d_4121_a8f7_9612f6c8d305.slice/cri-containerd-78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a.scope/memory.events\": no such file or directory" Jul 2 08:04:29.086419 env[1252]: time="2024-07-02T08:04:29.086390362Z" level=info msg="StartContainer for \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\" returns successfully" Jul 2 08:04:29.166079 env[1252]: time="2024-07-02T08:04:29.166052902Z" level=info msg="shim disconnected" id=78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a Jul 2 08:04:29.166240 env[1252]: time="2024-07-02T08:04:29.166229920Z" level=warning msg="cleaning up after shim disconnected" id=78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a namespace=k8s.io Jul 2 08:04:29.166299 env[1252]: time="2024-07-02T08:04:29.166284400Z" level=info msg="cleaning up dead shim" Jul 2 08:04:29.170573 env[1252]: time="2024-07-02T08:04:29.170545793Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:04:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2776 runtime=io.containerd.runc.v2\n" Jul 2 08:04:29.825984 env[1252]: time="2024-07-02T08:04:29.825955111Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:04:29.836044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473196916.mount: Deactivated successfully. Jul 2 08:04:29.842162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974480299.mount: Deactivated successfully. Jul 2 08:04:29.844183 env[1252]: time="2024-07-02T08:04:29.844162237Z" level=info msg="CreateContainer within sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\"" Jul 2 08:04:29.845727 env[1252]: time="2024-07-02T08:04:29.845713170Z" level=info msg="StartContainer for \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\"" Jul 2 08:04:29.856063 systemd[1]: Started cri-containerd-ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4.scope. 
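The cilium-6mjq7 pod above starts a chain of short-lived containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) before the long-running cilium-agent, but once created the containerd messages refer to them only by 64-character IDs. A rough sketch, assuming journal text on stdin with the same CreateContainer/StartContainer wording, that maps the IDs back to names and prints the start order:

```python
#!/usr/bin/env python3
"""Recover container start order from containerd journal messages.

Pairs  '... for &ContainerMetadata{Name:<name>,...} returns container id "<id>"'
with   'StartContainer for "<id>" returns successfully'.
"""
import re
import sys

CREATED = re.compile(
    r'for &ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} '
    r'returns container id \\?"(?P<cid>[0-9a-f]+)\\?"'
)
STARTED = re.compile(r'StartContainer for \\?"(?P<cid>[0-9a-f]+)\\?" returns successfully')

def main() -> None:
    text = sys.stdin.read()
    names = {m.group("cid"): m.group("name") for m in CREATED.finditer(text)}
    order = [names.get(m.group("cid"), m.group("cid")[:12])
             for m in STARTED.finditer(text)]
    print(" -> ".join(order))

if __name__ == "__main__":
    main()
```

Fed the excerpt above, the ordering comes out as the init sequence the log shows; filtering by sandbox ID is left out here for brevity.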
Jul 2 08:04:29.875339 env[1252]: time="2024-07-02T08:04:29.875310024Z" level=info msg="StartContainer for \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\" returns successfully" Jul 2 08:04:29.974686 kubelet[2164]: I0702 08:04:29.974666 2164 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 08:04:29.994625 kubelet[2164]: I0702 08:04:29.994593 2164 topology_manager.go:215] "Topology Admit Handler" podUID="08e88302-6ee1-44d5-b997-5d48a9e42d88" podNamespace="kube-system" podName="coredns-5dd5756b68-sgrmb" Jul 2 08:04:29.999606 kubelet[2164]: W0702 08:04:29.999577 2164 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 08:04:29.999606 kubelet[2164]: E0702 08:04:29.999599 2164 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jul 2 08:04:30.001954 systemd[1]: Created slice kubepods-burstable-pod08e88302_6ee1_44d5_b997_5d48a9e42d88.slice. Jul 2 08:04:30.002729 kubelet[2164]: I0702 08:04:30.002508 2164 topology_manager.go:215] "Topology Admit Handler" podUID="7d21d9e3-17f8-46a3-b21c-a2571af6c453" podNamespace="kube-system" podName="coredns-5dd5756b68-svw6g" Jul 2 08:04:30.005780 systemd[1]: Created slice kubepods-burstable-pod7d21d9e3_17f8_46a3_b21c_a2571af6c453.slice. Jul 2 08:04:30.098349 kubelet[2164]: I0702 08:04:30.098287 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08e88302-6ee1-44d5-b997-5d48a9e42d88-config-volume\") pod \"coredns-5dd5756b68-sgrmb\" (UID: \"08e88302-6ee1-44d5-b997-5d48a9e42d88\") " pod="kube-system/coredns-5dd5756b68-sgrmb" Jul 2 08:04:30.098349 kubelet[2164]: I0702 08:04:30.098329 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htzs\" (UniqueName: \"kubernetes.io/projected/08e88302-6ee1-44d5-b997-5d48a9e42d88-kube-api-access-5htzs\") pod \"coredns-5dd5756b68-sgrmb\" (UID: \"08e88302-6ee1-44d5-b997-5d48a9e42d88\") " pod="kube-system/coredns-5dd5756b68-sgrmb" Jul 2 08:04:30.178960 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
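The kernel warning above means unprivileged eBPF is currently allowed on this node while the eIBRS Spectre v2 mitigation is active; the controlling knob is the kernel.unprivileged_bpf_disabled sysctl. A quick check, assuming a Linux host exposing that sysctl under /proc (value meanings as documented in the kernel's sysctl notes):

```python
#!/usr/bin/env python3
"""Report whether unprivileged eBPF is allowed on this host (Linux only)."""
from pathlib import Path

STATES = {
    "0": "unprivileged bpf() allowed (the state the Spectre v2 warning refers to)",
    "1": "unprivileged bpf() disabled; cannot be re-enabled without a reboot",
    "2": "unprivileged bpf() disabled; an admin may still change the setting later",
}

def main() -> None:
    path = Path("/proc/sys/kernel/unprivileged_bpf_disabled")
    try:
        value = path.read_text().strip()
    except OSError as exc:
        print(f"cannot read {path}: {exc}")
        return
    print(f"kernel.unprivileged_bpf_disabled = {value}: {STATES.get(value, 'unknown value')}")

if __name__ == "__main__":
    main()
```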
Jul 2 08:04:30.199281 kubelet[2164]: I0702 08:04:30.199256 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbwv\" (UniqueName: \"kubernetes.io/projected/7d21d9e3-17f8-46a3-b21c-a2571af6c453-kube-api-access-gbbwv\") pod \"coredns-5dd5756b68-svw6g\" (UID: \"7d21d9e3-17f8-46a3-b21c-a2571af6c453\") " pod="kube-system/coredns-5dd5756b68-svw6g" Jul 2 08:04:30.199385 kubelet[2164]: I0702 08:04:30.199303 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d21d9e3-17f8-46a3-b21c-a2571af6c453-config-volume\") pod \"coredns-5dd5756b68-svw6g\" (UID: \"7d21d9e3-17f8-46a3-b21c-a2571af6c453\") " pod="kube-system/coredns-5dd5756b68-svw6g" Jul 2 08:04:30.420940 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 2 08:04:30.835539 kubelet[2164]: I0702 08:04:30.835475 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6mjq7" podStartSLOduration=6.315896103 podCreationTimestamp="2024-07-02 08:04:16 +0000 UTC" firstStartedPulling="2024-07-02 08:04:17.251081551 +0000 UTC m=+11.823698185" lastFinishedPulling="2024-07-02 08:04:25.770636731 +0000 UTC m=+20.343253367" observedRunningTime="2024-07-02 08:04:30.832532416 +0000 UTC m=+25.405149060" watchObservedRunningTime="2024-07-02 08:04:30.835451285 +0000 UTC m=+25.408067924" Jul 2 08:04:31.200401 kubelet[2164]: E0702 08:04:31.200374 2164 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 2 08:04:31.201323 kubelet[2164]: E0702 08:04:31.201311 2164 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/08e88302-6ee1-44d5-b997-5d48a9e42d88-config-volume podName:08e88302-6ee1-44d5-b997-5d48a9e42d88 nodeName:}" failed. No retries permitted until 2024-07-02 08:04:31.700689036 +0000 UTC m=+26.273305674 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/08e88302-6ee1-44d5-b997-5d48a9e42d88-config-volume") pod "coredns-5dd5756b68-sgrmb" (UID: "08e88302-6ee1-44d5-b997-5d48a9e42d88") : failed to sync configmap cache: timed out waiting for the condition Jul 2 08:04:31.508293 env[1252]: time="2024-07-02T08:04:31.507980159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-svw6g,Uid:7d21d9e3-17f8-46a3-b21c-a2571af6c453,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:31.805403 env[1252]: time="2024-07-02T08:04:31.805338835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sgrmb,Uid:08e88302-6ee1-44d5-b997-5d48a9e42d88,Namespace:kube-system,Attempt:0,}" Jul 2 08:04:32.044397 systemd-networkd[1062]: cilium_host: Link UP Jul 2 08:04:32.045011 systemd-networkd[1062]: cilium_net: Link UP Jul 2 08:04:32.046979 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 08:04:32.047026 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 08:04:32.047014 systemd-networkd[1062]: cilium_net: Gained carrier Jul 2 08:04:32.047116 systemd-networkd[1062]: cilium_host: Gained carrier Jul 2 08:04:32.161464 systemd-networkd[1062]: cilium_vxlan: Link UP Jul 2 08:04:32.161471 systemd-networkd[1062]: cilium_vxlan: Gained carrier Jul 2 08:04:32.233031 systemd-networkd[1062]: cilium_net: Gained IPv6LL Jul 2 08:04:32.416035 systemd-networkd[1062]: cilium_host: Gained IPv6LL Jul 2 08:04:32.512946 kernel: NET: Registered PF_ALG protocol family Jul 2 08:04:32.976229 systemd-networkd[1062]: lxc_health: Link UP Jul 2 08:04:32.994259 systemd-networkd[1062]: lxc_health: Gained carrier Jul 2 08:04:32.994957 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:04:33.330808 systemd-networkd[1062]: lxcb2692aea8f33: Link UP Jul 2 08:04:33.337967 kernel: eth0: renamed from tmpba057 Jul 2 08:04:33.344019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 08:04:33.344080 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb2692aea8f33: link becomes ready Jul 2 08:04:33.344052 systemd-networkd[1062]: lxcb2692aea8f33: Gained carrier Jul 2 08:04:33.554826 systemd-networkd[1062]: lxc28e6ade49403: Link UP Jul 2 08:04:33.562947 kernel: eth0: renamed from tmpdf2ab Jul 2 08:04:33.568676 systemd-networkd[1062]: lxc28e6ade49403: Gained carrier Jul 2 08:04:33.568986 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc28e6ade49403: link becomes ready Jul 2 08:04:34.000054 systemd-networkd[1062]: cilium_vxlan: Gained IPv6LL Jul 2 08:04:34.576009 systemd-networkd[1062]: lxcb2692aea8f33: Gained IPv6LL Jul 2 08:04:34.960104 systemd-networkd[1062]: lxc_health: Gained IPv6LL Jul 2 08:04:35.217130 systemd-networkd[1062]: lxc28e6ade49403: Gained IPv6LL Jul 2 08:04:36.169461 env[1252]: time="2024-07-02T08:04:36.169421043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:36.170031 env[1252]: time="2024-07-02T08:04:36.170015895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:36.170100 env[1252]: time="2024-07-02T08:04:36.170086716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:36.177324 env[1252]: time="2024-07-02T08:04:36.170320919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba05734ec5358274544c0e2880c3a3e5fafd508270eb96058fb06e33eaa63e14 pid=3325 runtime=io.containerd.runc.v2 Jul 2 08:04:36.187418 systemd[1]: Started cri-containerd-ba05734ec5358274544c0e2880c3a3e5fafd508270eb96058fb06e33eaa63e14.scope. Jul 2 08:04:36.232522 env[1252]: time="2024-07-02T08:04:36.232478702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:04:36.232618 env[1252]: time="2024-07-02T08:04:36.232510492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:04:36.232618 env[1252]: time="2024-07-02T08:04:36.232517438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:04:36.232678 env[1252]: time="2024-07-02T08:04:36.232629825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df2ab0e7129167d02ebf1c4bc22ed6ea9b5bb694178eca9a24df5b6b223a24cc pid=3356 runtime=io.containerd.runc.v2 Jul 2 08:04:36.235471 systemd-resolved[1205]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:04:36.249481 systemd[1]: Started cri-containerd-df2ab0e7129167d02ebf1c4bc22ed6ea9b5bb694178eca9a24df5b6b223a24cc.scope. Jul 2 08:04:36.275887 systemd-resolved[1205]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:04:36.282930 env[1252]: time="2024-07-02T08:04:36.282293301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sgrmb,Uid:08e88302-6ee1-44d5-b997-5d48a9e42d88,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba05734ec5358274544c0e2880c3a3e5fafd508270eb96058fb06e33eaa63e14\"" Jul 2 08:04:36.304971 env[1252]: time="2024-07-02T08:04:36.304945858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-svw6g,Uid:7d21d9e3-17f8-46a3-b21c-a2571af6c453,Namespace:kube-system,Attempt:0,} returns sandbox id \"df2ab0e7129167d02ebf1c4bc22ed6ea9b5bb694178eca9a24df5b6b223a24cc\"" Jul 2 08:04:36.314513 env[1252]: time="2024-07-02T08:04:36.314488337Z" level=info msg="CreateContainer within sandbox \"ba05734ec5358274544c0e2880c3a3e5fafd508270eb96058fb06e33eaa63e14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:04:36.317949 env[1252]: time="2024-07-02T08:04:36.316826796Z" level=info msg="CreateContainer within sandbox \"df2ab0e7129167d02ebf1c4bc22ed6ea9b5bb694178eca9a24df5b6b223a24cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:04:36.399149 env[1252]: time="2024-07-02T08:04:36.399122183Z" level=info msg="CreateContainer within sandbox \"df2ab0e7129167d02ebf1c4bc22ed6ea9b5bb694178eca9a24df5b6b223a24cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b35e21070db9ff58787ff26fa2459f72a7e4eb6ff188b59930011fc27b2c3e6\"" Jul 2 08:04:36.399753 env[1252]: time="2024-07-02T08:04:36.399697304Z" level=info msg="StartContainer for \"1b35e21070db9ff58787ff26fa2459f72a7e4eb6ff188b59930011fc27b2c3e6\"" Jul 2 08:04:36.409241 systemd[1]: Started cri-containerd-1b35e21070db9ff58787ff26fa2459f72a7e4eb6ff188b59930011fc27b2c3e6.scope. 
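Just before these coredns sandboxes come up, systemd-networkd reports the Cilium datapath interfaces appearing in order: cilium_host and cilium_net, then cilium_vxlan, then lxc_health and the per-endpoint lxc* veths, each gaining carrier and an IPv6 link-local address. A sketch that condenses those events into a per-interface timeline, assuming journal text on stdin in the format shown:

```python
#!/usr/bin/env python3
"""Summarise systemd-networkd link events (Link UP/DOWN, carrier, IPv6LL)."""
import re
import sys
from collections import defaultdict

EVENT = re.compile(
    r'systemd-networkd\[\d+\]: (?P<link>[\w.-]+): '
    r'(?P<event>Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)'
)

def main() -> None:
    timeline = defaultdict(list)
    for line in sys.stdin:
        for m in EVENT.finditer(line):
            timeline[m.group("link")].append(m.group("event"))
    for link, events in timeline.items():
        print(f"{link:20s} {' -> '.join(events)}")

if __name__ == "__main__":
    main()
```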
Jul 2 08:04:36.429218 env[1252]: time="2024-07-02T08:04:36.428802715Z" level=info msg="CreateContainer within sandbox \"ba05734ec5358274544c0e2880c3a3e5fafd508270eb96058fb06e33eaa63e14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b145e90d2fd1f9e1db1e9ba2893f2c82bccab7433b730bad1cc5cb418608777a\"" Jul 2 08:04:36.429472 env[1252]: time="2024-07-02T08:04:36.429453895Z" level=info msg="StartContainer for \"b145e90d2fd1f9e1db1e9ba2893f2c82bccab7433b730bad1cc5cb418608777a\"" Jul 2 08:04:36.442212 systemd[1]: Started cri-containerd-b145e90d2fd1f9e1db1e9ba2893f2c82bccab7433b730bad1cc5cb418608777a.scope. Jul 2 08:04:36.453255 env[1252]: time="2024-07-02T08:04:36.452913817Z" level=info msg="StartContainer for \"1b35e21070db9ff58787ff26fa2459f72a7e4eb6ff188b59930011fc27b2c3e6\" returns successfully" Jul 2 08:04:36.478713 env[1252]: time="2024-07-02T08:04:36.478683493Z" level=info msg="StartContainer for \"b145e90d2fd1f9e1db1e9ba2893f2c82bccab7433b730bad1cc5cb418608777a\" returns successfully" Jul 2 08:04:36.869068 kubelet[2164]: I0702 08:04:36.868985 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-sgrmb" podStartSLOduration=20.868939267000002 podCreationTimestamp="2024-07-02 08:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:04:36.868021347 +0000 UTC m=+31.440637992" watchObservedRunningTime="2024-07-02 08:04:36.868939267 +0000 UTC m=+31.441555908" Jul 2 08:04:36.869515 kubelet[2164]: I0702 08:04:36.869503 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-svw6g" podStartSLOduration=20.869465479 podCreationTimestamp="2024-07-02 08:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:04:36.860511562 +0000 UTC m=+31.433128216" watchObservedRunningTime="2024-07-02 08:04:36.869465479 +0000 UTC m=+31.442082125" Jul 2 08:04:37.172792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928090977.mount: Deactivated successfully. Jul 2 08:04:41.092759 kubelet[2164]: I0702 08:04:41.092673 2164 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:05:24.869684 systemd[1]: Started sshd@5-139.178.70.105:22-139.178.68.195:32824.service. Jul 2 08:05:24.920153 sshd[3499]: Accepted publickey for core from 139.178.68.195 port 32824 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:24.921634 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:24.924752 systemd-logind[1242]: New session 8 of user core. Jul 2 08:05:24.925559 systemd[1]: Started session-8.scope. Jul 2 08:05:25.103875 sshd[3499]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:25.105387 systemd-logind[1242]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:05:25.105485 systemd[1]: sshd@5-139.178.70.105:22-139.178.68.195:32824.service: Deactivated successfully. Jul 2 08:05:25.105928 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:05:25.106442 systemd-logind[1242]: Removed session 8. Jul 2 08:05:30.106609 systemd[1]: Started sshd@6-139.178.70.105:22-139.178.68.195:32838.service. 
Jul 2 08:05:30.286715 sshd[3512]: Accepted publickey for core from 139.178.68.195 port 32838 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:30.287996 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:30.291102 systemd[1]: Started session-9.scope. Jul 2 08:05:30.291576 systemd-logind[1242]: New session 9 of user core. Jul 2 08:05:30.389426 sshd[3512]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:30.391430 systemd[1]: sshd@6-139.178.70.105:22-139.178.68.195:32838.service: Deactivated successfully. Jul 2 08:05:30.391854 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:05:30.392127 systemd-logind[1242]: Session 9 logged out. Waiting for processes to exit. Jul 2 08:05:30.392546 systemd-logind[1242]: Removed session 9. Jul 2 08:05:35.392661 systemd[1]: Started sshd@7-139.178.70.105:22-139.178.68.195:43866.service. Jul 2 08:05:35.426226 sshd[3525]: Accepted publickey for core from 139.178.68.195 port 43866 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:35.427222 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:35.430464 systemd-logind[1242]: New session 10 of user core. Jul 2 08:05:35.430781 systemd[1]: Started session-10.scope. Jul 2 08:05:35.517870 sshd[3525]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:35.519365 systemd[1]: sshd@7-139.178.70.105:22-139.178.68.195:43866.service: Deactivated successfully. Jul 2 08:05:35.519809 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:05:35.520332 systemd-logind[1242]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:05:35.520768 systemd-logind[1242]: Removed session 10. Jul 2 08:05:40.520769 systemd[1]: Started sshd@8-139.178.70.105:22-139.178.68.195:43872.service. Jul 2 08:05:40.565704 sshd[3539]: Accepted publickey for core from 139.178.68.195 port 43872 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:40.566803 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:40.571104 systemd[1]: Started session-11.scope. Jul 2 08:05:40.571589 systemd-logind[1242]: New session 11 of user core. Jul 2 08:05:40.669136 sshd[3539]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:40.671595 systemd[1]: Started sshd@9-139.178.70.105:22-139.178.68.195:43880.service. Jul 2 08:05:40.677331 systemd-logind[1242]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:05:40.677533 systemd[1]: sshd@8-139.178.70.105:22-139.178.68.195:43872.service: Deactivated successfully. Jul 2 08:05:40.678004 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:05:40.678727 systemd-logind[1242]: Removed session 11. Jul 2 08:05:40.706100 sshd[3550]: Accepted publickey for core from 139.178.68.195 port 43880 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:40.706959 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:40.710332 systemd-logind[1242]: New session 12 of user core. Jul 2 08:05:40.710342 systemd[1]: Started session-12.scope. Jul 2 08:05:41.315640 systemd[1]: Started sshd@10-139.178.70.105:22-139.178.68.195:43892.service. Jul 2 08:05:41.317675 sshd[3550]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:41.326414 systemd[1]: sshd@9-139.178.70.105:22-139.178.68.195:43880.service: Deactivated successfully. 
Jul 2 08:05:41.327063 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:05:41.328068 systemd-logind[1242]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:05:41.328986 systemd-logind[1242]: Removed session 12. Jul 2 08:05:41.365523 sshd[3560]: Accepted publickey for core from 139.178.68.195 port 43892 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:41.366634 sshd[3560]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:41.369487 systemd-logind[1242]: New session 13 of user core. Jul 2 08:05:41.370057 systemd[1]: Started session-13.scope. Jul 2 08:05:41.483584 sshd[3560]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:41.485342 systemd[1]: sshd@10-139.178.70.105:22-139.178.68.195:43892.service: Deactivated successfully. Jul 2 08:05:41.485789 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:05:41.486486 systemd-logind[1242]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:05:41.487101 systemd-logind[1242]: Removed session 13. Jul 2 08:05:46.487129 systemd[1]: Started sshd@11-139.178.70.105:22-139.178.68.195:42422.service. Jul 2 08:05:46.520349 sshd[3572]: Accepted publickey for core from 139.178.68.195 port 42422 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:46.521358 sshd[3572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:46.524695 systemd[1]: Started session-14.scope. Jul 2 08:05:46.524986 systemd-logind[1242]: New session 14 of user core. Jul 2 08:05:46.618450 sshd[3572]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:46.620162 systemd[1]: sshd@11-139.178.70.105:22-139.178.68.195:42422.service: Deactivated successfully. Jul 2 08:05:46.620594 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:05:46.620898 systemd-logind[1242]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:05:46.621362 systemd-logind[1242]: Removed session 14. Jul 2 08:05:51.621896 systemd[1]: Started sshd@12-139.178.70.105:22-139.178.68.195:42438.service. Jul 2 08:05:51.655503 sshd[3585]: Accepted publickey for core from 139.178.68.195 port 42438 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:51.656236 sshd[3585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:51.659424 systemd[1]: Started session-15.scope. Jul 2 08:05:51.659857 systemd-logind[1242]: New session 15 of user core. Jul 2 08:05:51.757272 sshd[3585]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:51.760741 systemd[1]: Started sshd@13-139.178.70.105:22-139.178.68.195:42454.service. Jul 2 08:05:51.763546 systemd-logind[1242]: Session 15 logged out. Waiting for processes to exit. Jul 2 08:05:51.764425 systemd[1]: sshd@12-139.178.70.105:22-139.178.68.195:42438.service: Deactivated successfully. Jul 2 08:05:51.764817 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:05:51.765736 systemd-logind[1242]: Removed session 15. Jul 2 08:05:51.793705 sshd[3596]: Accepted publickey for core from 139.178.68.195 port 42454 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:51.794488 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:51.797493 systemd[1]: Started session-16.scope. Jul 2 08:05:51.798143 systemd-logind[1242]: New session 16 of user core. 
Jul 2 08:05:52.177017 sshd[3596]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:52.180190 systemd[1]: Started sshd@14-139.178.70.105:22-139.178.68.195:42460.service. Jul 2 08:05:52.183940 systemd[1]: sshd@13-139.178.70.105:22-139.178.68.195:42454.service: Deactivated successfully. Jul 2 08:05:52.184382 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:05:52.184797 systemd-logind[1242]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:05:52.185669 systemd-logind[1242]: Removed session 16. Jul 2 08:05:52.221982 sshd[3606]: Accepted publickey for core from 139.178.68.195 port 42460 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:52.223014 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:52.225629 systemd-logind[1242]: New session 17 of user core. Jul 2 08:05:52.226386 systemd[1]: Started session-17.scope. Jul 2 08:05:53.120246 sshd[3606]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:53.123031 systemd[1]: Started sshd@15-139.178.70.105:22-139.178.68.195:52176.service. Jul 2 08:05:53.124670 systemd[1]: sshd@14-139.178.70.105:22-139.178.68.195:42460.service: Deactivated successfully. Jul 2 08:05:53.125072 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:05:53.125565 systemd-logind[1242]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:05:53.126079 systemd-logind[1242]: Removed session 17. Jul 2 08:05:53.159215 sshd[3622]: Accepted publickey for core from 139.178.68.195 port 52176 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:53.160329 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:53.163020 systemd-logind[1242]: New session 18 of user core. Jul 2 08:05:53.163477 systemd[1]: Started session-18.scope. Jul 2 08:05:53.408643 sshd[3622]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:53.410263 systemd[1]: Started sshd@16-139.178.70.105:22-139.178.68.195:52180.service. Jul 2 08:05:53.415114 systemd[1]: sshd@15-139.178.70.105:22-139.178.68.195:52176.service: Deactivated successfully. Jul 2 08:05:53.415623 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:05:53.416128 systemd-logind[1242]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:05:53.416709 systemd-logind[1242]: Removed session 18. Jul 2 08:05:53.445102 sshd[3633]: Accepted publickey for core from 139.178.68.195 port 52180 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:53.445881 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:53.448495 systemd-logind[1242]: New session 19 of user core. Jul 2 08:05:53.449023 systemd[1]: Started session-19.scope. Jul 2 08:05:53.613728 sshd[3633]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:53.617324 systemd-logind[1242]: Session 19 logged out. Waiting for processes to exit. Jul 2 08:05:53.617548 systemd[1]: sshd@16-139.178.70.105:22-139.178.68.195:52180.service: Deactivated successfully. Jul 2 08:05:53.618177 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 08:05:53.618899 systemd-logind[1242]: Removed session 19. Jul 2 08:05:58.616642 systemd[1]: Started sshd@17-139.178.70.105:22-139.178.68.195:52194.service. 
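From here on the log is mostly the same SSH lifecycle repeated for sessions 8 through 19: a per-connection sshd@ service instance starts, the public key is accepted, pam_unix opens the session, logind creates a session-N.scope, and on logout the scope and the sshd@ instance are deactivated. A sketch pairing the pam_unix open/close events by sshd PID and reporting each session's lifetime, again assuming journal text on stdin in the wording shown:

```python
#!/usr/bin/env python3
"""Pair sshd pam_unix session open/close events and report session lifetimes."""
import re
import sys
from datetime import datetime

# e.g. "Jul 2 08:05:24.921634 sshd[3499]: pam_unix(sshd:session): session opened for user core..."
LINE = re.compile(
    r'(?P<mon>\w{3})\s+(?P<day>\d+) (?P<time>\d{2}:\d{2}:\d{2}\.\d+) '
    r'sshd\[(?P<pid>\d+)\]: pam_unix\(sshd:session\): '
    r'session (?P<what>opened|closed) for user (?P<user>\w+)'
)

def parse_ts(mon: str, day: str, time_: str) -> datetime:
    # The journal text omits the year; the default (1900) is fine for durations.
    return datetime.strptime(f"{mon} {day} {time_}", "%b %d %H:%M:%S.%f")

def main() -> None:
    opened = {}
    for line in sys.stdin:
        for m in LINE.finditer(line):
            ts = parse_ts(m.group("mon"), m.group("day"), m.group("time"))
            if m.group("what") == "opened":
                opened[m.group("pid")] = (m.group("user"), ts)
            elif m.group("pid") in opened:
                user, start = opened.pop(m.group("pid"))
                print(f"sshd[{m.group('pid')}] user={user} "
                      f"session lasted {(ts - start).total_seconds():.1f}s")

if __name__ == "__main__":
    main()
```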
Jul 2 08:05:58.651732 sshd[3648]: Accepted publickey for core from 139.178.68.195 port 52194 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:05:58.652996 sshd[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:05:58.656908 systemd[1]: Started session-20.scope. Jul 2 08:05:58.657986 systemd-logind[1242]: New session 20 of user core. Jul 2 08:05:58.757593 sshd[3648]: pam_unix(sshd:session): session closed for user core Jul 2 08:05:58.759431 systemd[1]: sshd@17-139.178.70.105:22-139.178.68.195:52194.service: Deactivated successfully. Jul 2 08:05:58.759874 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 08:05:58.760460 systemd-logind[1242]: Session 20 logged out. Waiting for processes to exit. Jul 2 08:05:58.760973 systemd-logind[1242]: Removed session 20. Jul 2 08:06:03.761564 systemd[1]: Started sshd@18-139.178.70.105:22-139.178.68.195:55284.service. Jul 2 08:06:03.793933 sshd[3659]: Accepted publickey for core from 139.178.68.195 port 55284 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:06:03.795183 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:06:03.798982 systemd[1]: Started session-21.scope. Jul 2 08:06:03.799981 systemd-logind[1242]: New session 21 of user core. Jul 2 08:06:03.896918 sshd[3659]: pam_unix(sshd:session): session closed for user core Jul 2 08:06:03.898649 systemd[1]: sshd@18-139.178.70.105:22-139.178.68.195:55284.service: Deactivated successfully. Jul 2 08:06:03.899149 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 08:06:03.900030 systemd-logind[1242]: Session 21 logged out. Waiting for processes to exit. Jul 2 08:06:03.900624 systemd-logind[1242]: Removed session 21. Jul 2 08:06:08.901485 systemd[1]: Started sshd@19-139.178.70.105:22-139.178.68.195:55290.service. Jul 2 08:06:08.933722 sshd[3672]: Accepted publickey for core from 139.178.68.195 port 55290 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:06:08.934990 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:06:08.937525 systemd-logind[1242]: New session 22 of user core. Jul 2 08:06:08.938119 systemd[1]: Started session-22.scope. Jul 2 08:06:09.052221 sshd[3672]: pam_unix(sshd:session): session closed for user core Jul 2 08:06:09.054455 systemd[1]: sshd@19-139.178.70.105:22-139.178.68.195:55290.service: Deactivated successfully. Jul 2 08:06:09.054919 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 08:06:09.055370 systemd-logind[1242]: Session 22 logged out. Waiting for processes to exit. Jul 2 08:06:09.055940 systemd-logind[1242]: Removed session 22. Jul 2 08:06:14.055597 systemd[1]: Started sshd@20-139.178.70.105:22-139.178.68.195:46062.service. Jul 2 08:06:14.285524 sshd[3683]: Accepted publickey for core from 139.178.68.195 port 46062 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:06:14.286886 sshd[3683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:06:14.290153 systemd-logind[1242]: New session 23 of user core. Jul 2 08:06:14.290829 systemd[1]: Started session-23.scope. Jul 2 08:06:14.503425 sshd[3683]: pam_unix(sshd:session): session closed for user core Jul 2 08:06:14.506945 systemd[1]: Started sshd@21-139.178.70.105:22-139.178.68.195:46076.service. Jul 2 08:06:14.513313 systemd-logind[1242]: Session 23 logged out. Waiting for processes to exit. 
Jul 2 08:06:14.513722 systemd[1]: sshd@20-139.178.70.105:22-139.178.68.195:46062.service: Deactivated successfully. Jul 2 08:06:14.514594 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 08:06:14.515575 systemd-logind[1242]: Removed session 23. Jul 2 08:06:14.569401 sshd[3693]: Accepted publickey for core from 139.178.68.195 port 46076 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:06:14.570400 sshd[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:06:14.576725 systemd[1]: Started session-24.scope. Jul 2 08:06:14.577101 systemd-logind[1242]: New session 24 of user core. Jul 2 08:06:16.653507 env[1252]: time="2024-07-02T08:06:16.653379134Z" level=info msg="StopContainer for \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\" with timeout 30 (s)" Jul 2 08:06:16.660658 env[1252]: time="2024-07-02T08:06:16.653643777Z" level=info msg="Stop container \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\" with signal terminated" Jul 2 08:06:16.687376 systemd[1]: run-containerd-runc-k8s.io-ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4-runc.hoKELR.mount: Deactivated successfully. Jul 2 08:06:16.865541 systemd[1]: cri-containerd-94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c.scope: Deactivated successfully. Jul 2 08:06:16.880157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c-rootfs.mount: Deactivated successfully. Jul 2 08:06:16.900615 env[1252]: time="2024-07-02T08:06:16.900570753Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:06:16.996755 env[1252]: time="2024-07-02T08:06:16.996732306Z" level=info msg="StopContainer for \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\" with timeout 2 (s)" Jul 2 08:06:16.997063 env[1252]: time="2024-07-02T08:06:16.997040105Z" level=info msg="Stop container \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\" with signal terminated" Jul 2 08:06:17.122194 systemd-networkd[1062]: lxc_health: Link DOWN Jul 2 08:06:17.122200 systemd-networkd[1062]: lxc_health: Lost carrier Jul 2 08:06:17.181808 env[1252]: time="2024-07-02T08:06:17.164708234Z" level=info msg="shim disconnected" id=94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c Jul 2 08:06:17.181808 env[1252]: time="2024-07-02T08:06:17.164748385Z" level=warning msg="cleaning up after shim disconnected" id=94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c namespace=k8s.io Jul 2 08:06:17.181808 env[1252]: time="2024-07-02T08:06:17.164756413Z" level=info msg="cleaning up dead shim" Jul 2 08:06:17.181808 env[1252]: time="2024-07-02T08:06:17.171433098Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3750 runtime=io.containerd.runc.v2\n" Jul 2 08:06:17.197288 systemd[1]: cri-containerd-ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4.scope: Deactivated successfully. Jul 2 08:06:17.197488 systemd[1]: cri-containerd-ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4.scope: Consumed 4.740s CPU time. 
Jul 2 08:06:17.208273 env[1252]: time="2024-07-02T08:06:17.208130271Z" level=info msg="StopContainer for \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\" returns successfully" Jul 2 08:06:17.208785 env[1252]: time="2024-07-02T08:06:17.208766572Z" level=info msg="StopPodSandbox for \"c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51\"" Jul 2 08:06:17.208920 env[1252]: time="2024-07-02T08:06:17.208900839Z" level=info msg="Container to stop \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:06:17.210676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51-shm.mount: Deactivated successfully. Jul 2 08:06:17.217737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4-rootfs.mount: Deactivated successfully. Jul 2 08:06:17.223505 systemd[1]: cri-containerd-c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51.scope: Deactivated successfully. Jul 2 08:06:17.357709 env[1252]: time="2024-07-02T08:06:17.357039268Z" level=info msg="shim disconnected" id=c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51 Jul 2 08:06:17.357887 env[1252]: time="2024-07-02T08:06:17.357869469Z" level=warning msg="cleaning up after shim disconnected" id=c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51 namespace=k8s.io Jul 2 08:06:17.357970 env[1252]: time="2024-07-02T08:06:17.357955701Z" level=info msg="cleaning up dead shim" Jul 2 08:06:17.358213 env[1252]: time="2024-07-02T08:06:17.357595565Z" level=info msg="shim disconnected" id=ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4 Jul 2 08:06:17.358265 env[1252]: time="2024-07-02T08:06:17.358215374Z" level=warning msg="cleaning up after shim disconnected" id=ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4 namespace=k8s.io Jul 2 08:06:17.358265 env[1252]: time="2024-07-02T08:06:17.358223591Z" level=info msg="cleaning up dead shim" Jul 2 08:06:17.365197 env[1252]: time="2024-07-02T08:06:17.365172621Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3796 runtime=io.containerd.runc.v2\n" Jul 2 08:06:17.365315 env[1252]: time="2024-07-02T08:06:17.365297251Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\n" Jul 2 08:06:17.374951 env[1252]: time="2024-07-02T08:06:17.374934085Z" level=info msg="TearDown network for sandbox \"c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51\" successfully" Jul 2 08:06:17.375012 env[1252]: time="2024-07-02T08:06:17.374998272Z" level=info msg="StopPodSandbox for \"c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51\" returns successfully" Jul 2 08:06:17.385649 env[1252]: time="2024-07-02T08:06:17.385619889Z" level=info msg="StopContainer for \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\" returns successfully" Jul 2 08:06:17.386015 env[1252]: time="2024-07-02T08:06:17.385991533Z" level=info msg="StopPodSandbox for \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\"" Jul 2 08:06:17.394361 env[1252]: time="2024-07-02T08:06:17.386179503Z" level=info msg="Container to stop \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\" must 
be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:06:17.394361 env[1252]: time="2024-07-02T08:06:17.386197099Z" level=info msg="Container to stop \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:06:17.394361 env[1252]: time="2024-07-02T08:06:17.386249207Z" level=info msg="Container to stop \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:06:17.394361 env[1252]: time="2024-07-02T08:06:17.386262602Z" level=info msg="Container to stop \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:06:17.394361 env[1252]: time="2024-07-02T08:06:17.386270589Z" level=info msg="Container to stop \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:06:17.392348 systemd[1]: cri-containerd-7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47.scope: Deactivated successfully. Jul 2 08:06:17.506818 env[1252]: time="2024-07-02T08:06:17.506784595Z" level=info msg="shim disconnected" id=7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47 Jul 2 08:06:17.506818 env[1252]: time="2024-07-02T08:06:17.506812640Z" level=warning msg="cleaning up after shim disconnected" id=7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47 namespace=k8s.io Jul 2 08:06:17.506818 env[1252]: time="2024-07-02T08:06:17.506818614Z" level=info msg="cleaning up dead shim" Jul 2 08:06:17.511550 env[1252]: time="2024-07-02T08:06:17.511533562Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3841 runtime=io.containerd.runc.v2\n" Jul 2 08:06:17.517104 env[1252]: time="2024-07-02T08:06:17.517088922Z" level=info msg="TearDown network for sandbox \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" successfully" Jul 2 08:06:17.517163 env[1252]: time="2024-07-02T08:06:17.517150951Z" level=info msg="StopPodSandbox for \"7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47\" returns successfully" Jul 2 08:06:17.535972 kubelet[2164]: I0702 08:06:17.535953 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kph6z\" (UniqueName: \"kubernetes.io/projected/3955e3d1-a9af-465e-b011-9c56a5877900-kube-api-access-kph6z\") pod \"3955e3d1-a9af-465e-b011-9c56a5877900\" (UID: \"3955e3d1-a9af-465e-b011-9c56a5877900\") " Jul 2 08:06:17.536188 kubelet[2164]: I0702 08:06:17.535984 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3955e3d1-a9af-465e-b011-9c56a5877900-cilium-config-path\") pod \"3955e3d1-a9af-465e-b011-9c56a5877900\" (UID: \"3955e3d1-a9af-465e-b011-9c56a5877900\") " Jul 2 08:06:17.643539 kubelet[2164]: I0702 08:06:17.643471 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1162624f-bd0d-4121-a8f7-9612f6c8d305-clustermesh-secrets\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.643687 kubelet[2164]: I0702 08:06:17.643673 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-hubble-tls\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.643774 kubelet[2164]: I0702 08:06:17.643759 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-etc-cni-netd\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.643850 kubelet[2164]: I0702 08:06:17.643841 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-xtables-lock\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.643938 kubelet[2164]: I0702 08:06:17.643914 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-net\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644025 kubelet[2164]: I0702 08:06:17.644015 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-config-path\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644100 kubelet[2164]: I0702 08:06:17.644091 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-kernel\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644173 kubelet[2164]: I0702 08:06:17.644163 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-cgroup\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644273 kubelet[2164]: I0702 08:06:17.644244 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cni-path\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644348 kubelet[2164]: I0702 08:06:17.644339 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfzrz\" (UniqueName: \"kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-kube-api-access-dfzrz\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644439 kubelet[2164]: I0702 08:06:17.644430 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-bpf-maps\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644530 kubelet[2164]: I0702 08:06:17.644520 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-lib-modules\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644623 kubelet[2164]: I0702 08:06:17.644608 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-hostproc\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.644726 kubelet[2164]: I0702 08:06:17.644709 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-run\") pod \"1162624f-bd0d-4121-a8f7-9612f6c8d305\" (UID: \"1162624f-bd0d-4121-a8f7-9612f6c8d305\") " Jul 2 08:06:17.668029 kubelet[2164]: I0702 08:06:17.652218 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3955e3d1-a9af-465e-b011-9c56a5877900-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3955e3d1-a9af-465e-b011-9c56a5877900" (UID: "3955e3d1-a9af-465e-b011-9c56a5877900"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:06:17.679968 kubelet[2164]: I0702 08:06:17.679936 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.679968 kubelet[2164]: I0702 08:06:17.679967 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.680054 kubelet[2164]: I0702 08:06:17.679978 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.681479 kubelet[2164]: I0702 08:06:17.644826 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.681772 kubelet[2164]: I0702 08:06:17.681745 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.681772 kubelet[2164]: I0702 08:06:17.681766 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.681834 kubelet[2164]: I0702 08:06:17.681778 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cni-path" (OuterVolumeSpecName: "cni-path") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.681856 kubelet[2164]: I0702 08:06:17.681838 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.681877 kubelet[2164]: I0702 08:06:17.681855 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.681877 kubelet[2164]: I0702 08:06:17.681863 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-hostproc" (OuterVolumeSpecName: "hostproc") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:17.685566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47-rootfs.mount: Deactivated successfully. Jul 2 08:06:17.685624 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7162d951d2665472f41f4402fa5d81a07350e361845367f888f80341d64c4b47-shm.mount: Deactivated successfully. Jul 2 08:06:17.685665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c51dff3137bafcc016963b127c5ca0c9c20195b7491ae5ebcdd67eb994abcd51-rootfs.mount: Deactivated successfully. Jul 2 08:06:17.685699 systemd[1]: var-lib-kubelet-pods-3955e3d1\x2da9af\x2d465e\x2db011\x2d9c56a5877900-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkph6z.mount: Deactivated successfully. Jul 2 08:06:17.685735 systemd[1]: var-lib-kubelet-pods-1162624f\x2dbd0d\x2d4121\x2da8f7\x2d9612f6c8d305-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:06:17.685771 systemd[1]: var-lib-kubelet-pods-1162624f\x2dbd0d\x2d4121\x2da8f7\x2d9612f6c8d305-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 08:06:17.694095 kubelet[2164]: I0702 08:06:17.694078 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1162624f-bd0d-4121-a8f7-9612f6c8d305-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:06:17.695105 kubelet[2164]: I0702 08:06:17.695088 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:06:17.695142 kubelet[2164]: I0702 08:06:17.695122 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:06:17.695166 kubelet[2164]: I0702 08:06:17.695153 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3955e3d1-a9af-465e-b011-9c56a5877900-kube-api-access-kph6z" (OuterVolumeSpecName: "kube-api-access-kph6z") pod "3955e3d1-a9af-465e-b011-9c56a5877900" (UID: "3955e3d1-a9af-465e-b011-9c56a5877900"). InnerVolumeSpecName "kube-api-access-kph6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:06:17.697516 systemd[1]: var-lib-kubelet-pods-1162624f\x2dbd0d\x2d4121\x2da8f7\x2d9612f6c8d305-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddfzrz.mount: Deactivated successfully. Jul 2 08:06:17.701261 kubelet[2164]: I0702 08:06:17.701248 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-kube-api-access-dfzrz" (OuterVolumeSpecName: "kube-api-access-dfzrz") pod "1162624f-bd0d-4121-a8f7-9612f6c8d305" (UID: "1162624f-bd0d-4121-a8f7-9612f6c8d305"). InnerVolumeSpecName "kube-api-access-dfzrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:06:17.744973 kubelet[2164]: I0702 08:06:17.744955 2164 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745129 kubelet[2164]: I0702 08:06:17.745119 2164 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745202 kubelet[2164]: I0702 08:06:17.745194 2164 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745276 kubelet[2164]: I0702 08:06:17.745260 2164 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745342 kubelet[2164]: I0702 08:06:17.745335 2164 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745399 kubelet[2164]: I0702 08:06:17.745392 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745452 kubelet[2164]: I0702 08:06:17.745443 2164 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kph6z\" (UniqueName: \"kubernetes.io/projected/3955e3d1-a9af-465e-b011-9c56a5877900-kube-api-access-kph6z\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745518 kubelet[2164]: I0702 08:06:17.745510 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745584 kubelet[2164]: I0702 08:06:17.745576 2164 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dfzrz\" (UniqueName: \"kubernetes.io/projected/1162624f-bd0d-4121-a8f7-9612f6c8d305-kube-api-access-dfzrz\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745673 kubelet[2164]: I0702 08:06:17.745663 2164 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745739 kubelet[2164]: I0702 08:06:17.745731 2164 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745795 kubelet[2164]: I0702 08:06:17.745787 2164 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745849 kubelet[2164]: I0702 08:06:17.745841 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3955e3d1-a9af-465e-b011-9c56a5877900-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Jul 2 08:06:17.745914 kubelet[2164]: I0702 08:06:17.745906 2164 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.745992 kubelet[2164]: I0702 08:06:17.745984 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1162624f-bd0d-4121-a8f7-9612f6c8d305-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.746059 kubelet[2164]: I0702 08:06:17.746051 2164 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1162624f-bd0d-4121-a8f7-9612f6c8d305-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:17.755266 systemd[1]: Removed slice kubepods-besteffort-pod3955e3d1_a9af_465e_b011_9c56a5877900.slice. Jul 2 08:06:17.764120 systemd[1]: Removed slice kubepods-burstable-pod1162624f_bd0d_4121_a8f7_9612f6c8d305.slice. Jul 2 08:06:17.764167 systemd[1]: kubepods-burstable-pod1162624f_bd0d_4121_a8f7_9612f6c8d305.slice: Consumed 4.804s CPU time. Jul 2 08:06:17.994837 kubelet[2164]: I0702 08:06:17.994817 2164 scope.go:117] "RemoveContainer" containerID="ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4" Jul 2 08:06:18.018050 env[1252]: time="2024-07-02T08:06:18.018017813Z" level=info msg="RemoveContainer for \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\"" Jul 2 08:06:18.020834 env[1252]: time="2024-07-02T08:06:18.020670621Z" level=info msg="RemoveContainer for \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\" returns successfully" Jul 2 08:06:18.022991 kubelet[2164]: I0702 08:06:18.022979 2164 scope.go:117] "RemoveContainer" containerID="78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a" Jul 2 08:06:18.024780 env[1252]: time="2024-07-02T08:06:18.024634053Z" level=info msg="RemoveContainer for \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\"" Jul 2 08:06:18.030434 env[1252]: time="2024-07-02T08:06:18.030332143Z" level=info msg="RemoveContainer for \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\" returns successfully" Jul 2 08:06:18.031316 kubelet[2164]: I0702 08:06:18.030888 2164 scope.go:117] "RemoveContainer" containerID="3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88" Jul 2 08:06:18.035344 env[1252]: time="2024-07-02T08:06:18.034995141Z" level=info msg="RemoveContainer for \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\"" Jul 2 08:06:18.037009 env[1252]: time="2024-07-02T08:06:18.036983270Z" level=info msg="RemoveContainer for \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\" returns successfully" Jul 2 08:06:18.037250 kubelet[2164]: I0702 08:06:18.037231 2164 scope.go:117] "RemoveContainer" containerID="8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce" Jul 2 08:06:18.040151 env[1252]: time="2024-07-02T08:06:18.040121547Z" level=info msg="RemoveContainer for \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\"" Jul 2 08:06:18.041491 env[1252]: time="2024-07-02T08:06:18.041431995Z" level=info msg="RemoveContainer for \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\" returns successfully" Jul 2 08:06:18.041902 kubelet[2164]: I0702 08:06:18.041627 2164 scope.go:117] "RemoveContainer" 
containerID="35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78" Jul 2 08:06:18.043832 env[1252]: time="2024-07-02T08:06:18.043322734Z" level=info msg="RemoveContainer for \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\"" Jul 2 08:06:18.044949 env[1252]: time="2024-07-02T08:06:18.044685806Z" level=info msg="RemoveContainer for \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\" returns successfully" Jul 2 08:06:18.045030 kubelet[2164]: I0702 08:06:18.044849 2164 scope.go:117] "RemoveContainer" containerID="ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4" Jul 2 08:06:18.045365 env[1252]: time="2024-07-02T08:06:18.045105167Z" level=error msg="ContainerStatus for \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\": not found" Jul 2 08:06:18.046590 kubelet[2164]: E0702 08:06:18.046576 2164 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\": not found" containerID="ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4" Jul 2 08:06:18.047505 kubelet[2164]: I0702 08:06:18.047493 2164 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4"} err="failed to get container status \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae65a43797f6f76e26805cc21f93b5a21982b617000703fff27422a7354f2fb4\": not found" Jul 2 08:06:18.047588 kubelet[2164]: I0702 08:06:18.047579 2164 scope.go:117] "RemoveContainer" containerID="78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a" Jul 2 08:06:18.048176 env[1252]: time="2024-07-02T08:06:18.047818305Z" level=error msg="ContainerStatus for \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\": not found" Jul 2 08:06:18.048308 kubelet[2164]: E0702 08:06:18.048286 2164 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\": not found" containerID="78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a" Jul 2 08:06:18.048374 kubelet[2164]: I0702 08:06:18.048366 2164 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a"} err="failed to get container status \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\": rpc error: code = NotFound desc = an error occurred when try to find container \"78ea2957fa8d5c4779b7fb26b9be77e0b49b00b965ff0f03f40140f20857446a\": not found" Jul 2 08:06:18.048424 kubelet[2164]: I0702 08:06:18.048418 2164 scope.go:117] "RemoveContainer" containerID="3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88" Jul 2 08:06:18.048594 env[1252]: time="2024-07-02T08:06:18.048564397Z" level=error msg="ContainerStatus for 
\"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\": not found" Jul 2 08:06:18.048681 kubelet[2164]: E0702 08:06:18.048675 2164 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\": not found" containerID="3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88" Jul 2 08:06:18.048739 kubelet[2164]: I0702 08:06:18.048731 2164 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88"} err="failed to get container status \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e9309d9dfa54ad68da3897347fdd698f90789de73c81cc326e3f21e9336ad88\": not found" Jul 2 08:06:18.048786 kubelet[2164]: I0702 08:06:18.048779 2164 scope.go:117] "RemoveContainer" containerID="8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce" Jul 2 08:06:18.048964 env[1252]: time="2024-07-02T08:06:18.048897213Z" level=error msg="ContainerStatus for \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\": not found" Jul 2 08:06:18.049061 kubelet[2164]: E0702 08:06:18.049053 2164 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\": not found" containerID="8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce" Jul 2 08:06:18.049169 kubelet[2164]: I0702 08:06:18.049160 2164 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce"} err="failed to get container status \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c937b606865112d7201fa792f126c95098b485870a3373d432d8f95853318ce\": not found" Jul 2 08:06:18.049227 kubelet[2164]: I0702 08:06:18.049221 2164 scope.go:117] "RemoveContainer" containerID="35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78" Jul 2 08:06:18.049371 env[1252]: time="2024-07-02T08:06:18.049342515Z" level=error msg="ContainerStatus for \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\": not found" Jul 2 08:06:18.049458 kubelet[2164]: E0702 08:06:18.049446 2164 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\": not found" containerID="35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78" Jul 2 08:06:18.049503 kubelet[2164]: I0702 08:06:18.049464 2164 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78"} err="failed to get container status \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\": rpc error: code = NotFound desc = an error occurred when try to find container \"35e285fbf6dd4fa4bdca5c929a798f0eb359edbb4b28a047b8a93204f3684b78\": not found" Jul 2 08:06:18.049503 kubelet[2164]: I0702 08:06:18.049470 2164 scope.go:117] "RemoveContainer" containerID="94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c" Jul 2 08:06:18.050049 env[1252]: time="2024-07-02T08:06:18.050032715Z" level=info msg="RemoveContainer for \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\"" Jul 2 08:06:18.056750 env[1252]: time="2024-07-02T08:06:18.056731578Z" level=info msg="RemoveContainer for \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\" returns successfully" Jul 2 08:06:18.056896 kubelet[2164]: I0702 08:06:18.056887 2164 scope.go:117] "RemoveContainer" containerID="94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c" Jul 2 08:06:18.057076 env[1252]: time="2024-07-02T08:06:18.057044874Z" level=error msg="ContainerStatus for \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\": not found" Jul 2 08:06:18.057133 kubelet[2164]: E0702 08:06:18.057127 2164 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\": not found" containerID="94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c" Jul 2 08:06:18.057158 kubelet[2164]: I0702 08:06:18.057142 2164 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c"} err="failed to get container status \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\": rpc error: code = NotFound desc = an error occurred when try to find container \"94f65a183e1e89f51d62bb19dafcbb25cba8e23fc1138c5273d95c34c2ffa26c\": not found" Jul 2 08:06:18.610048 sshd[3693]: pam_unix(sshd:session): session closed for user core Jul 2 08:06:18.614206 systemd[1]: Started sshd@22-139.178.70.105:22-139.178.68.195:46090.service. Jul 2 08:06:18.634493 systemd[1]: sshd@21-139.178.70.105:22-139.178.68.195:46076.service: Deactivated successfully. Jul 2 08:06:18.635082 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 08:06:18.635598 systemd-logind[1242]: Session 24 logged out. Waiting for processes to exit. Jul 2 08:06:18.636272 systemd-logind[1242]: Removed session 24. Jul 2 08:06:18.692699 sshd[3863]: Accepted publickey for core from 139.178.68.195 port 46090 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:06:18.693744 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:06:18.696794 systemd-logind[1242]: New session 25 of user core. Jul 2 08:06:18.697645 systemd[1]: Started session-25.scope. Jul 2 08:06:19.087305 systemd[1]: Started sshd@23-139.178.70.105:22-139.178.68.195:46102.service. 
Jul 2 08:06:19.093861 sshd[3863]: pam_unix(sshd:session): session closed for user core Jul 2 08:06:19.095737 systemd[1]: sshd@22-139.178.70.105:22-139.178.68.195:46090.service: Deactivated successfully. Jul 2 08:06:19.096242 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 08:06:19.100080 systemd-logind[1242]: Session 25 logged out. Waiting for processes to exit. Jul 2 08:06:19.100798 systemd-logind[1242]: Removed session 25. Jul 2 08:06:19.125143 sshd[3873]: Accepted publickey for core from 139.178.68.195 port 46102 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:06:19.126155 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:06:19.129070 systemd-logind[1242]: New session 26 of user core. Jul 2 08:06:19.129542 systemd[1]: Started session-26.scope. Jul 2 08:06:19.141345 kubelet[2164]: I0702 08:06:19.141254 2164 topology_manager.go:215] "Topology Admit Handler" podUID="74baf5c3-e80d-4453-93d5-f18548b66810" podNamespace="kube-system" podName="cilium-gcs9l" Jul 2 08:06:19.148068 kubelet[2164]: E0702 08:06:19.148035 2164 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" containerName="mount-bpf-fs" Jul 2 08:06:19.148210 kubelet[2164]: E0702 08:06:19.148199 2164 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" containerName="clean-cilium-state" Jul 2 08:06:19.148335 kubelet[2164]: E0702 08:06:19.148323 2164 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" containerName="cilium-agent" Jul 2 08:06:19.148425 kubelet[2164]: E0702 08:06:19.148410 2164 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3955e3d1-a9af-465e-b011-9c56a5877900" containerName="cilium-operator" Jul 2 08:06:19.148506 kubelet[2164]: E0702 08:06:19.148498 2164 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" containerName="mount-cgroup" Jul 2 08:06:19.148583 kubelet[2164]: E0702 08:06:19.148573 2164 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" containerName="apply-sysctl-overwrites" Jul 2 08:06:19.148699 kubelet[2164]: I0702 08:06:19.148690 2164 memory_manager.go:346] "RemoveStaleState removing state" podUID="3955e3d1-a9af-465e-b011-9c56a5877900" containerName="cilium-operator" Jul 2 08:06:19.148775 kubelet[2164]: I0702 08:06:19.148766 2164 memory_manager.go:346] "RemoveStaleState removing state" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" containerName="cilium-agent" Jul 2 08:06:19.155324 systemd[1]: Created slice kubepods-burstable-pod74baf5c3_e80d_4453_93d5_f18548b66810.slice. 
Jul 2 08:06:19.254645 kubelet[2164]: I0702 08:06:19.254617 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-cgroup\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.254645 kubelet[2164]: I0702 08:06:19.254652 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-xtables-lock\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.254791 kubelet[2164]: I0702 08:06:19.254664 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-run\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.254791 kubelet[2164]: I0702 08:06:19.254675 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-bpf-maps\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.254791 kubelet[2164]: I0702 08:06:19.254688 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cni-path\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.254791 kubelet[2164]: I0702 08:06:19.254703 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-clustermesh-secrets\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.254791 kubelet[2164]: I0702 08:06:19.254716 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-hostproc\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.254791 kubelet[2164]: I0702 08:06:19.254729 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-hubble-tls\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.255001 kubelet[2164]: I0702 08:06:19.254741 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-net\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.255001 kubelet[2164]: I0702 08:06:19.254752 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vr8c\" (UniqueName: 
\"kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-kube-api-access-4vr8c\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.255001 kubelet[2164]: I0702 08:06:19.254764 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-config-path\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.255001 kubelet[2164]: I0702 08:06:19.254776 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-ipsec-secrets\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.255001 kubelet[2164]: I0702 08:06:19.254786 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-etc-cni-netd\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.255122 kubelet[2164]: I0702 08:06:19.254799 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-lib-modules\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.255122 kubelet[2164]: I0702 08:06:19.254811 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-kernel\") pod \"cilium-gcs9l\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " pod="kube-system/cilium-gcs9l" Jul 2 08:06:19.319789 systemd[1]: Started sshd@24-139.178.70.105:22-139.178.68.195:46104.service. Jul 2 08:06:19.324132 systemd[1]: sshd@23-139.178.70.105:22-139.178.68.195:46102.service: Deactivated successfully. Jul 2 08:06:19.322035 sshd[3873]: pam_unix(sshd:session): session closed for user core Jul 2 08:06:19.324633 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 08:06:19.326595 systemd-logind[1242]: Session 26 logged out. Waiting for processes to exit. Jul 2 08:06:19.327841 systemd-logind[1242]: Removed session 26. Jul 2 08:06:19.331752 kubelet[2164]: E0702 08:06:19.331734 2164 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-4vr8c lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-gcs9l" podUID="74baf5c3-e80d-4453-93d5-f18548b66810" Jul 2 08:06:19.374045 sshd[3884]: Accepted publickey for core from 139.178.68.195 port 46104 ssh2: RSA SHA256:ZBsE641/vEJoOwiO3PsmlEnzQGV1D8+6UMqIklXV5hk Jul 2 08:06:19.375722 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:06:19.383195 systemd[1]: Started session-27.scope. Jul 2 08:06:19.383709 systemd-logind[1242]: New session 27 of user core. 
Jul 2 08:06:19.745628 kubelet[2164]: I0702 08:06:19.745601 2164 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1162624f-bd0d-4121-a8f7-9612f6c8d305" path="/var/lib/kubelet/pods/1162624f-bd0d-4121-a8f7-9612f6c8d305/volumes" Jul 2 08:06:19.746654 kubelet[2164]: I0702 08:06:19.746641 2164 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3955e3d1-a9af-465e-b011-9c56a5877900" path="/var/lib/kubelet/pods/3955e3d1-a9af-465e-b011-9c56a5877900/volumes" Jul 2 08:06:20.161944 kubelet[2164]: I0702 08:06:20.161859 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-hostproc\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162285 kubelet[2164]: I0702 08:06:20.162276 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-run\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162370 kubelet[2164]: I0702 08:06:20.162363 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-net\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162447 kubelet[2164]: I0702 08:06:20.162241 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-hostproc" (OuterVolumeSpecName: "hostproc") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.162447 kubelet[2164]: I0702 08:06:20.162334 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.162510 kubelet[2164]: I0702 08:06:20.162453 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.162544 kubelet[2164]: I0702 08:06:20.162429 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-lib-modules\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162611 kubelet[2164]: I0702 08:06:20.162604 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-clustermesh-secrets\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162659 kubelet[2164]: I0702 08:06:20.162549 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.162707 kubelet[2164]: I0702 08:06:20.162701 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-cgroup\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162761 kubelet[2164]: I0702 08:06:20.162754 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-kernel\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162823 kubelet[2164]: I0702 08:06:20.162816 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-xtables-lock\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162872 kubelet[2164]: I0702 08:06:20.162866 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-bpf-maps\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162933 kubelet[2164]: I0702 08:06:20.162918 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cni-path\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.162992 kubelet[2164]: I0702 08:06:20.162985 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-config-path\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.163049 kubelet[2164]: I0702 08:06:20.163042 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-etc-cni-netd\") pod 
\"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.163114 kubelet[2164]: I0702 08:06:20.163107 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-hubble-tls\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.163170 kubelet[2164]: I0702 08:06:20.163163 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vr8c\" (UniqueName: \"kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-kube-api-access-4vr8c\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.163226 kubelet[2164]: I0702 08:06:20.163219 2164 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-ipsec-secrets\") pod \"74baf5c3-e80d-4453-93d5-f18548b66810\" (UID: \"74baf5c3-e80d-4453-93d5-f18548b66810\") " Jul 2 08:06:20.163295 kubelet[2164]: I0702 08:06:20.163288 2164 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.163349 kubelet[2164]: I0702 08:06:20.163341 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.163397 kubelet[2164]: I0702 08:06:20.163390 2164 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.163442 kubelet[2164]: I0702 08:06:20.163435 2164 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.165360 systemd[1]: var-lib-kubelet-pods-74baf5c3\x2de80d\x2d4453\x2d93d5\x2df18548b66810-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:06:20.165984 kubelet[2164]: I0702 08:06:20.165563 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:06:20.165984 kubelet[2164]: I0702 08:06:20.165586 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cni-path" (OuterVolumeSpecName: "cni-path") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.165984 kubelet[2164]: I0702 08:06:20.165597 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.165984 kubelet[2164]: I0702 08:06:20.165606 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.165984 kubelet[2164]: I0702 08:06:20.165615 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.166093 kubelet[2164]: I0702 08:06:20.165623 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.166093 kubelet[2164]: I0702 08:06:20.165633 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:06:20.166762 kubelet[2164]: I0702 08:06:20.166743 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:06:20.167499 kubelet[2164]: I0702 08:06:20.167485 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:06:20.168443 kubelet[2164]: I0702 08:06:20.168426 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:06:20.169128 kubelet[2164]: I0702 08:06:20.169116 2164 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-kube-api-access-4vr8c" (OuterVolumeSpecName: "kube-api-access-4vr8c") pod "74baf5c3-e80d-4453-93d5-f18548b66810" (UID: "74baf5c3-e80d-4453-93d5-f18548b66810"). InnerVolumeSpecName "kube-api-access-4vr8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:06:20.264623 kubelet[2164]: I0702 08:06:20.264596 2164 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.264747 kubelet[2164]: I0702 08:06:20.264738 2164 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.264802 kubelet[2164]: I0702 08:06:20.264795 2164 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.264852 kubelet[2164]: I0702 08:06:20.264845 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.264904 kubelet[2164]: I0702 08:06:20.264897 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.264969 kubelet[2164]: I0702 08:06:20.264962 2164 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.265017 kubelet[2164]: I0702 08:06:20.265011 2164 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.265061 kubelet[2164]: I0702 08:06:20.265055 2164 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74baf5c3-e80d-4453-93d5-f18548b66810-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.265110 kubelet[2164]: I0702 08:06:20.265104 2164 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/74baf5c3-e80d-4453-93d5-f18548b66810-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.265157 kubelet[2164]: I0702 08:06:20.265151 2164 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.265201 kubelet[2164]: I0702 08:06:20.265195 2164 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4vr8c\" (UniqueName: \"kubernetes.io/projected/74baf5c3-e80d-4453-93d5-f18548b66810-kube-api-access-4vr8c\") on node \"localhost\" DevicePath \"\"" Jul 2 08:06:20.359614 systemd[1]: 
var-lib-kubelet-pods-74baf5c3\x2de80d\x2d4453\x2d93d5\x2df18548b66810-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4vr8c.mount: Deactivated successfully. Jul 2 08:06:20.359672 systemd[1]: var-lib-kubelet-pods-74baf5c3\x2de80d\x2d4453\x2d93d5\x2df18548b66810-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 08:06:20.359711 systemd[1]: var-lib-kubelet-pods-74baf5c3\x2de80d\x2d4453\x2d93d5\x2df18548b66810-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:06:20.827121 kubelet[2164]: E0702 08:06:20.827089 2164 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:06:21.026084 systemd[1]: Removed slice kubepods-burstable-pod74baf5c3_e80d_4453_93d5_f18548b66810.slice. Jul 2 08:06:21.069749 kubelet[2164]: I0702 08:06:21.069728 2164 topology_manager.go:215] "Topology Admit Handler" podUID="28ffacfe-ac24-4a1f-8185-b9ba273d11e6" podNamespace="kube-system" podName="cilium-gd98c" Jul 2 08:06:21.073142 systemd[1]: Created slice kubepods-burstable-pod28ffacfe_ac24_4a1f_8185_b9ba273d11e6.slice. Jul 2 08:06:21.171140 kubelet[2164]: I0702 08:06:21.171119 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-host-proc-sys-net\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171345 kubelet[2164]: I0702 08:06:21.171148 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-cni-path\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171345 kubelet[2164]: I0702 08:06:21.171171 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-cilium-config-path\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171345 kubelet[2164]: I0702 08:06:21.171184 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-hostproc\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171345 kubelet[2164]: I0702 08:06:21.171197 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-cilium-run\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171345 kubelet[2164]: I0702 08:06:21.171209 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-cilium-cgroup\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171345 kubelet[2164]: I0702 08:06:21.171220 2164 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-xtables-lock\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171472 kubelet[2164]: I0702 08:06:21.171242 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-bpf-maps\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171472 kubelet[2164]: I0702 08:06:21.171257 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-hubble-tls\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171472 kubelet[2164]: I0702 08:06:21.171268 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-clustermesh-secrets\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171472 kubelet[2164]: I0702 08:06:21.171279 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-cilium-ipsec-secrets\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171472 kubelet[2164]: I0702 08:06:21.171291 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-host-proc-sys-kernel\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171472 kubelet[2164]: I0702 08:06:21.171302 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-lib-modules\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171590 kubelet[2164]: I0702 08:06:21.171321 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2v8c\" (UniqueName: \"kubernetes.io/projected/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-kube-api-access-b2v8c\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.171590 kubelet[2164]: I0702 08:06:21.171333 2164 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28ffacfe-ac24-4a1f-8185-b9ba273d11e6-etc-cni-netd\") pod \"cilium-gd98c\" (UID: \"28ffacfe-ac24-4a1f-8185-b9ba273d11e6\") " pod="kube-system/cilium-gd98c" Jul 2 08:06:21.376255 env[1252]: time="2024-07-02T08:06:21.375889311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gd98c,Uid:28ffacfe-ac24-4a1f-8185-b9ba273d11e6,Namespace:kube-system,Attempt:0,}" Jul 2 08:06:21.389958 env[1252]: 
time="2024-07-02T08:06:21.389031412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:06:21.389958 env[1252]: time="2024-07-02T08:06:21.389070610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:06:21.389958 env[1252]: time="2024-07-02T08:06:21.389081710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:06:21.390202 env[1252]: time="2024-07-02T08:06:21.390169862Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9 pid=3916 runtime=io.containerd.runc.v2 Jul 2 08:06:21.405012 systemd[1]: run-containerd-runc-k8s.io-541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9-runc.yynhdw.mount: Deactivated successfully. Jul 2 08:06:21.408315 systemd[1]: Started cri-containerd-541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9.scope. Jul 2 08:06:21.426581 env[1252]: time="2024-07-02T08:06:21.426520124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gd98c,Uid:28ffacfe-ac24-4a1f-8185-b9ba273d11e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\"" Jul 2 08:06:21.428952 env[1252]: time="2024-07-02T08:06:21.428684761Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:06:21.496011 env[1252]: time="2024-07-02T08:06:21.495967727Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af1a064702dcc65031ada57b360037f55b9fb3f725425b8121964af2753af321\"" Jul 2 08:06:21.496490 env[1252]: time="2024-07-02T08:06:21.496474053Z" level=info msg="StartContainer for \"af1a064702dcc65031ada57b360037f55b9fb3f725425b8121964af2753af321\"" Jul 2 08:06:21.514532 systemd[1]: Started cri-containerd-af1a064702dcc65031ada57b360037f55b9fb3f725425b8121964af2753af321.scope. Jul 2 08:06:21.563048 env[1252]: time="2024-07-02T08:06:21.563007789Z" level=info msg="StartContainer for \"af1a064702dcc65031ada57b360037f55b9fb3f725425b8121964af2753af321\" returns successfully" Jul 2 08:06:21.686110 systemd[1]: cri-containerd-af1a064702dcc65031ada57b360037f55b9fb3f725425b8121964af2753af321.scope: Deactivated successfully. 
Jul 2 08:06:21.745325 kubelet[2164]: I0702 08:06:21.745301 2164 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="74baf5c3-e80d-4453-93d5-f18548b66810" path="/var/lib/kubelet/pods/74baf5c3-e80d-4453-93d5-f18548b66810/volumes" Jul 2 08:06:21.859630 env[1252]: time="2024-07-02T08:06:21.859588014Z" level=info msg="shim disconnected" id=af1a064702dcc65031ada57b360037f55b9fb3f725425b8121964af2753af321 Jul 2 08:06:21.859630 env[1252]: time="2024-07-02T08:06:21.859626271Z" level=warning msg="cleaning up after shim disconnected" id=af1a064702dcc65031ada57b360037f55b9fb3f725425b8121964af2753af321 namespace=k8s.io Jul 2 08:06:21.859630 env[1252]: time="2024-07-02T08:06:21.859637032Z" level=info msg="cleaning up dead shim" Jul 2 08:06:21.874186 env[1252]: time="2024-07-02T08:06:21.865282826Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4003 runtime=io.containerd.runc.v2\n" Jul 2 08:06:22.028213 env[1252]: time="2024-07-02T08:06:22.028145235Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:06:22.034103 env[1252]: time="2024-07-02T08:06:22.034070422Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3308fd4abbb17f3bb2c22ebaac59ce4030a0e21719d986bab39c7a9f28dc6dae\"" Jul 2 08:06:22.036658 env[1252]: time="2024-07-02T08:06:22.036632023Z" level=info msg="StartContainer for \"3308fd4abbb17f3bb2c22ebaac59ce4030a0e21719d986bab39c7a9f28dc6dae\"" Jul 2 08:06:22.049934 systemd[1]: Started cri-containerd-3308fd4abbb17f3bb2c22ebaac59ce4030a0e21719d986bab39c7a9f28dc6dae.scope. Jul 2 08:06:22.070124 env[1252]: time="2024-07-02T08:06:22.070086947Z" level=info msg="StartContainer for \"3308fd4abbb17f3bb2c22ebaac59ce4030a0e21719d986bab39c7a9f28dc6dae\" returns successfully" Jul 2 08:06:22.081935 systemd[1]: cri-containerd-3308fd4abbb17f3bb2c22ebaac59ce4030a0e21719d986bab39c7a9f28dc6dae.scope: Deactivated successfully. Jul 2 08:06:22.095159 env[1252]: time="2024-07-02T08:06:22.095129887Z" level=info msg="shim disconnected" id=3308fd4abbb17f3bb2c22ebaac59ce4030a0e21719d986bab39c7a9f28dc6dae Jul 2 08:06:22.095318 env[1252]: time="2024-07-02T08:06:22.095306396Z" level=warning msg="cleaning up after shim disconnected" id=3308fd4abbb17f3bb2c22ebaac59ce4030a0e21719d986bab39c7a9f28dc6dae namespace=k8s.io Jul 2 08:06:22.095379 env[1252]: time="2024-07-02T08:06:22.095359402Z" level=info msg="cleaning up dead shim" Jul 2 08:06:22.100073 env[1252]: time="2024-07-02T08:06:22.100045976Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n" Jul 2 08:06:23.029531 env[1252]: time="2024-07-02T08:06:23.029502372Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:06:23.036816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485229440.mount: Deactivated successfully. Jul 2 08:06:23.041120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294920182.mount: Deactivated successfully. 
Jul 2 08:06:23.044259 env[1252]: time="2024-07-02T08:06:23.044233689Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db3e8f8ac1faca5a423b36c05a78ca4b600e3b1478a3625315114744d2607244\"" Jul 2 08:06:23.044700 env[1252]: time="2024-07-02T08:06:23.044677898Z" level=info msg="StartContainer for \"db3e8f8ac1faca5a423b36c05a78ca4b600e3b1478a3625315114744d2607244\"" Jul 2 08:06:23.056496 systemd[1]: Started cri-containerd-db3e8f8ac1faca5a423b36c05a78ca4b600e3b1478a3625315114744d2607244.scope. Jul 2 08:06:23.078815 env[1252]: time="2024-07-02T08:06:23.078783048Z" level=info msg="StartContainer for \"db3e8f8ac1faca5a423b36c05a78ca4b600e3b1478a3625315114744d2607244\" returns successfully" Jul 2 08:06:23.083873 systemd[1]: cri-containerd-db3e8f8ac1faca5a423b36c05a78ca4b600e3b1478a3625315114744d2607244.scope: Deactivated successfully. Jul 2 08:06:23.098099 env[1252]: time="2024-07-02T08:06:23.098062867Z" level=info msg="shim disconnected" id=db3e8f8ac1faca5a423b36c05a78ca4b600e3b1478a3625315114744d2607244 Jul 2 08:06:23.098264 env[1252]: time="2024-07-02T08:06:23.098248841Z" level=warning msg="cleaning up after shim disconnected" id=db3e8f8ac1faca5a423b36c05a78ca4b600e3b1478a3625315114744d2607244 namespace=k8s.io Jul 2 08:06:23.098333 env[1252]: time="2024-07-02T08:06:23.098321483Z" level=info msg="cleaning up dead shim" Jul 2 08:06:23.103026 env[1252]: time="2024-07-02T08:06:23.102999723Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4125 runtime=io.containerd.runc.v2\n" Jul 2 08:06:24.032662 env[1252]: time="2024-07-02T08:06:24.032627206Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:06:24.039038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535557483.mount: Deactivated successfully. Jul 2 08:06:24.045705 env[1252]: time="2024-07-02T08:06:24.045668764Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c\"" Jul 2 08:06:24.046339 env[1252]: time="2024-07-02T08:06:24.046319157Z" level=info msg="StartContainer for \"277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c\"" Jul 2 08:06:24.064098 systemd[1]: Started cri-containerd-277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c.scope. Jul 2 08:06:24.085240 env[1252]: time="2024-07-02T08:06:24.085209729Z" level=info msg="StartContainer for \"277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c\" returns successfully" Jul 2 08:06:24.089220 systemd[1]: cri-containerd-277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c.scope: Deactivated successfully. 
Jul 2 08:06:24.102476 env[1252]: time="2024-07-02T08:06:24.102448391Z" level=info msg="shim disconnected" id=277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c Jul 2 08:06:24.102613 env[1252]: time="2024-07-02T08:06:24.102601829Z" level=warning msg="cleaning up after shim disconnected" id=277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c namespace=k8s.io Jul 2 08:06:24.102670 env[1252]: time="2024-07-02T08:06:24.102653857Z" level=info msg="cleaning up dead shim" Jul 2 08:06:24.108529 env[1252]: time="2024-07-02T08:06:24.108508725Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:06:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4180 runtime=io.containerd.runc.v2\n" Jul 2 08:06:24.380107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277afd5f51691e0a5ecbf446a54f10f9905255d9f17ee59311c8032d350e6f0c-rootfs.mount: Deactivated successfully. Jul 2 08:06:25.037881 env[1252]: time="2024-07-02T08:06:25.037850278Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:06:25.055200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695844653.mount: Deactivated successfully. Jul 2 08:06:25.059881 env[1252]: time="2024-07-02T08:06:25.059854094Z" level=info msg="CreateContainer within sandbox \"541251ce503e1783c57b28c2b29de123b0556bdc107382f974ec2fd0b182f1a9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc\"" Jul 2 08:06:25.060403 env[1252]: time="2024-07-02T08:06:25.060384393Z" level=info msg="StartContainer for \"083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc\"" Jul 2 08:06:25.077671 systemd[1]: Started cri-containerd-083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc.scope. Jul 2 08:06:25.103911 env[1252]: time="2024-07-02T08:06:25.103883341Z" level=info msg="StartContainer for \"083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc\" returns successfully" Jul 2 08:06:25.699938 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 08:06:26.055457 kubelet[2164]: I0702 08:06:26.055249 2164 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gd98c" podStartSLOduration=5.052920059 podCreationTimestamp="2024-07-02 08:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:06:26.052138979 +0000 UTC m=+140.624755625" watchObservedRunningTime="2024-07-02 08:06:26.052920059 +0000 UTC m=+140.625536698" Jul 2 08:06:27.741087 systemd[1]: run-containerd-runc-k8s.io-083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc-runc.H47vtH.mount: Deactivated successfully. Jul 2 08:06:28.190284 systemd-networkd[1062]: lxc_health: Link UP Jul 2 08:06:28.197784 systemd-networkd[1062]: lxc_health: Gained carrier Jul 2 08:06:28.197960 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:06:29.843651 systemd[1]: run-containerd-runc-k8s.io-083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc-runc.Gkm6Ui.mount: Deactivated successfully. 
Jul 2 08:06:29.904210 systemd-networkd[1062]: lxc_health: Gained IPv6LL Jul 2 08:06:31.953960 systemd[1]: run-containerd-runc-k8s.io-083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc-runc.srABbu.mount: Deactivated successfully. Jul 2 08:06:34.033714 systemd[1]: run-containerd-runc-k8s.io-083d8351384dbad6641ca6c8a6601fffd075d92158daab6dbb750bba0d91d2bc-runc.cL8zwB.mount: Deactivated successfully. Jul 2 08:06:34.079119 sshd[3884]: pam_unix(sshd:session): session closed for user core Jul 2 08:06:34.080799 systemd[1]: sshd@24-139.178.70.105:22-139.178.68.195:46104.service: Deactivated successfully. Jul 2 08:06:34.081323 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 08:06:34.082013 systemd-logind[1242]: Session 27 logged out. Waiting for processes to exit. Jul 2 08:06:34.082641 systemd-logind[1242]: Removed session 27. Jul 2 08:06:36.576739 systemd[1]: Started sshd@25-139.178.70.105:22-52.160.36.227:38870.service. Jul 2 08:06:36.600845 sshd[4860]: kex_exchange_identification: client sent invalid protocol identifier "MGLNDD_139.178.70.105_22" Jul 2 08:06:36.600845 sshd[4860]: banner exchange: Connection from 52.160.36.227 port 38870: invalid format Jul 2 08:06:36.601313 systemd[1]: sshd@25-139.178.70.105:22-52.160.36.227:38870.service: Deactivated successfully.