May 13 00:52:06.647099 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025
May 13 00:52:06.647114 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:52:06.647120 kernel: Disabled fast string operations
May 13 00:52:06.647124 kernel: BIOS-provided physical RAM map:
May 13 00:52:06.647128 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
May 13 00:52:06.647132 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
May 13 00:52:06.647137 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
May 13 00:52:06.647141 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
May 13 00:52:06.647145 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
May 13 00:52:06.647149 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
May 13 00:52:06.647153 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
May 13 00:52:06.647157 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
May 13 00:52:06.647161 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
May 13 00:52:06.647165 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
May 13 00:52:06.647171 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
May 13 00:52:06.647175 kernel: NX (Execute Disable) protection: active
May 13 00:52:06.647179 kernel: SMBIOS 2.7 present.
May 13 00:52:06.647184 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
May 13 00:52:06.647188 kernel: vmware: hypercall mode: 0x00
May 13 00:52:06.647192 kernel: Hypervisor detected: VMware
May 13 00:52:06.647197 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
May 13 00:52:06.647201 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
May 13 00:52:06.647206 kernel: vmware: using clock offset of 3429930893 ns
May 13 00:52:06.647210 kernel: tsc: Detected 3408.000 MHz processor
May 13 00:52:06.647215 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:52:06.647219 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:52:06.647224 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
May 13 00:52:06.647228 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:52:06.647232 kernel: total RAM covered: 3072M
May 13 00:52:06.647238 kernel: Found optimal setting for mtrr clean up
May 13 00:52:06.647242 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
May 13 00:52:06.647247 kernel: Using GB pages for direct mapping
May 13 00:52:06.647251 kernel: ACPI: Early table checksum verification disabled
May 13 00:52:06.647255 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
May 13 00:52:06.647260 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
May 13 00:52:06.647264 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
May 13 00:52:06.647268 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
May 13 00:52:06.647273 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 13 00:52:06.647277 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
May 13 00:52:06.647282 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
May 13 00:52:06.647288 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
May 13 00:52:06.647293 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
May 13 00:52:06.647298 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
May 13 00:52:06.647303 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
May 13 00:52:06.647308 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
May 13 00:52:06.647313 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
May 13 00:52:06.647317 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
May 13 00:52:06.647322 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 13 00:52:06.647327 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
May 13 00:52:06.647331 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
May 13 00:52:06.647336 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
May 13 00:52:06.647341 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
May 13 00:52:06.647345 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
May 13 00:52:06.647351 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
May 13 00:52:06.647355 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
May 13 00:52:06.647360 kernel: system APIC only can use physical flat
May 13 00:52:06.647365 kernel: Setting APIC routing to physical flat.
May 13 00:52:06.647369 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 13 00:52:06.647374 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 13 00:52:06.647378 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 13 00:52:06.647383 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 13 00:52:06.647387 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 13 00:52:06.647393 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 13 00:52:06.647398 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 13 00:52:06.647402 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 13 00:52:06.647407 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
May 13 00:52:06.647411 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
May 13 00:52:06.647416 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
May 13 00:52:06.647420 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
May 13 00:52:06.647425 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
May 13 00:52:06.647430 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
May 13 00:52:06.647434 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
May 13 00:52:06.647446 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
May 13 00:52:06.647451 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
May 13 00:52:06.647455 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
May 13 00:52:06.647460 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
May 13 00:52:06.647464 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
May 13 00:52:06.647469 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
May 13 00:52:06.647473 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
May 13 00:52:06.647478 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
May 13 00:52:06.647483 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
May 13 00:52:06.647487 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
May 13 00:52:06.647493 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
May 13 00:52:06.647498 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
May 13 00:52:06.647502 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
May 13 00:52:06.647507 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
May 13 00:52:06.647511 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
May 13 00:52:06.647516 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
May 13 00:52:06.647521 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
May 13 00:52:06.647525 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
May 13 00:52:06.647530 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
May 13 00:52:06.647534 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
May 13 00:52:06.647540 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
May 13 00:52:06.647544 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
May 13 00:52:06.647549 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
May 13 00:52:06.647554 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
May 13 00:52:06.647558 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
May 13 00:52:06.647563 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
May 13 00:52:06.647567 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
May 13 00:52:06.647572 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
May 13 00:52:06.647576 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
May 13 00:52:06.647581 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
May 13 00:52:06.647586 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
May 13 00:52:06.647591 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
May 13 00:52:06.647596 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
May 13 00:52:06.647600 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
May 13 00:52:06.647605 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
May 13 00:52:06.647609 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
May 13 00:52:06.647614 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
May 13 00:52:06.647618 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
May 13 00:52:06.647623 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
May 13 00:52:06.647628 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
May 13 00:52:06.647633 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
May 13 00:52:06.647638 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
May 13 00:52:06.647642 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
May 13 00:52:06.647647 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
May 13 00:52:06.647651 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
May 13 00:52:06.647656 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
May 13 00:52:06.647665 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
May 13 00:52:06.647669 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
May 13 00:52:06.647674 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
May 13 00:52:06.647679 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
May 13 00:52:06.647684 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
May 13 00:52:06.647690 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
May 13 00:52:06.647695 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
May 13 00:52:06.647700 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
May 13 00:52:06.647705 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
May 13 00:52:06.647710 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
May 13 00:52:06.647715 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
May 13 00:52:06.647719 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
May 13 00:52:06.647725 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
May 13 00:52:06.647730 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
May 13 00:52:06.647735 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
May 13 00:52:06.647740 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
May 13 00:52:06.647745 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
May 13 00:52:06.647750 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
May 13 00:52:06.647754 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
May 13 00:52:06.647759 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
May 13 00:52:06.647764 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
May 13 00:52:06.647769 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
May 13 00:52:06.647775 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
May 13 00:52:06.647780 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
May 13 00:52:06.647785 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
May 13 00:52:06.647789 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
May 13 00:52:06.647794 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
May 13 00:52:06.647799 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
May 13 00:52:06.647804 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
May 13 00:52:06.647809 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
May 13 00:52:06.647814 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
May 13 00:52:06.647819 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
May 13 00:52:06.647824 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
May 13 00:52:06.647829 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
May 13 00:52:06.647834 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
May 13 00:52:06.647839 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
May 13 00:52:06.647844 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
May 13 00:52:06.647849 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
May 13 00:52:06.647854 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
May 13 00:52:06.647858 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
May 13 00:52:06.647863 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
May 13 00:52:06.647869 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
May 13 00:52:06.647874 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
May 13 00:52:06.647879 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
May 13 00:52:06.647884 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
May 13 00:52:06.647888 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
May 13 00:52:06.647894 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
May 13 00:52:06.647899 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
May 13 00:52:06.647903 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
May 13 00:52:06.647908 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
May 13 00:52:06.647913 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
May 13 00:52:06.647919 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
May 13 00:52:06.647924 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
May 13 00:52:06.647929 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
May 13 00:52:06.647933 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
May 13 00:52:06.647938 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
May 13 00:52:06.647943 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
May 13 00:52:06.647948 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
May 13 00:52:06.647953 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
May 13 00:52:06.647958 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
May 13 00:52:06.647963 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
May 13 00:52:06.647969 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
May 13 00:52:06.647973 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
May 13 00:52:06.647978 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
May 13 00:52:06.647983 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
May 13 00:52:06.647988 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
May 13 00:52:06.647993 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
May 13 00:52:06.647998 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 13 00:52:06.648003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 13 00:52:06.648008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
May 13 00:52:06.648016 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
May 13 00:52:06.648023 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
May 13 00:52:06.648028 kernel: Zone ranges:
May 13 00:52:06.648033 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:52:06.648038 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
May 13 00:52:06.648043 kernel: Normal empty
May 13 00:52:06.648048 kernel: Movable zone start for each node
May 13 00:52:06.648053 kernel: Early memory node ranges
May 13 00:52:06.648058 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
May 13 00:52:06.648063 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
May 13 00:52:06.648069 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
May 13 00:52:06.648074 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
May 13 00:52:06.648079 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:52:06.648084 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
May 13 00:52:06.648089 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
May 13 00:52:06.648094 kernel: ACPI: PM-Timer IO Port: 0x1008
May 13 00:52:06.648099 kernel: system APIC only can use physical flat
May 13 00:52:06.648104 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
May 13 00:52:06.648109 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
May 13 00:52:06.648114 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
May 13 00:52:06.648119 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
May 13 00:52:06.648124 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
May 13 00:52:06.648129 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
May 13 00:52:06.648134 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
May 13 00:52:06.648139 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
May 13 00:52:06.648144 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
May 13 00:52:06.648149 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
May 13 00:52:06.648154 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
May 13 00:52:06.648159 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
May 13 00:52:06.648165 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
May 13 00:52:06.648170 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
May 13 00:52:06.648175 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
May 13 00:52:06.648180 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
May 13 00:52:06.648184 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
May 13 00:52:06.648190 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
May 13 00:52:06.648194 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
May 13 00:52:06.648199 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
May 13 00:52:06.648204 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
May 13 00:52:06.648209 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
May 13 00:52:06.648215 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
May 13 00:52:06.648220 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
May 13 00:52:06.648225 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
May 13 00:52:06.648230 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
May 13 00:52:06.648235 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
May 13 00:52:06.648240 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
May 13 00:52:06.648245 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
May 13 00:52:06.648250 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
May 13 00:52:06.648254 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
May 13 00:52:06.648260 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
May 13 00:52:06.648265 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
May 13 00:52:06.648270 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
May 13 00:52:06.648275 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
May 13 00:52:06.648280 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
May 13 00:52:06.648285 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
May 13 00:52:06.648290 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
May 13 00:52:06.648295 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
May 13 00:52:06.648300 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
May 13 00:52:06.648306 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
May 13 00:52:06.648310 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
May 13 00:52:06.648315 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
May 13 00:52:06.648320 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
May 13 00:52:06.648325 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
May 13 00:52:06.648330 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
May 13 00:52:06.648335 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
May 13 00:52:06.648340 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
May 13 00:52:06.648345 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
May 13 00:52:06.648350 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
May 13 00:52:06.648356 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
May 13 00:52:06.648361 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
May 13 00:52:06.648365 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
May 13 00:52:06.648370 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
May 13 00:52:06.648375 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
May 13 00:52:06.648380 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
May 13 00:52:06.648385 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
May 13 00:52:06.648390 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
May 13 00:52:06.648395 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
May 13 00:52:06.648401 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
May 13 00:52:06.648406 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
May 13 00:52:06.648411 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
May 13 00:52:06.648416 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
May 13 00:52:06.648420 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
May 13 00:52:06.648425 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
May 13 00:52:06.648430 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
May 13 00:52:06.648435 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
May 13 00:52:06.648457 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
May 13 00:52:06.648462 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
May 13 00:52:06.648468 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
May 13 00:52:06.648473 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
May 13 00:52:06.648478 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
May 13 00:52:06.648483 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
May 13 00:52:06.648488 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
May 13 00:52:06.648493 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
May 13 00:52:06.648498 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
May 13 00:52:06.648503 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
May 13 00:52:06.648508 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
May 13 00:52:06.648514 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
May 13 00:52:06.648518 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
May 13 00:52:06.648523 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
May 13 00:52:06.648528 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
May 13 00:52:06.648533 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
May 13 00:52:06.648538 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
May 13 00:52:06.648543 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
May 13 00:52:06.648548 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
May 13 00:52:06.648553 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
May 13 00:52:06.648558 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
May 13 00:52:06.648564 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
May 13 00:52:06.648569 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
May 13 00:52:06.648574 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
May 13 00:52:06.648579 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
May 13 00:52:06.648583 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
May 13 00:52:06.648588 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
May 13 00:52:06.648593 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
May 13 00:52:06.648598 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
May 13 00:52:06.648603 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
May 13 00:52:06.648609 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
May 13 00:52:06.648614 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
May 13 00:52:06.648619 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
May 13 00:52:06.648624 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
May 13 00:52:06.648629 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
May 13 00:52:06.648634 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
May 13 00:52:06.648638 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
May 13 00:52:06.648643 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
May 13 00:52:06.648648 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
May 13 00:52:06.648653 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
May 13 00:52:06.648659 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
May 13 00:52:06.648664 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
May 13 00:52:06.648669 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
May 13 00:52:06.648674 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
May 13 00:52:06.648679 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
May 13 00:52:06.648684 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
May 13 00:52:06.648688 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
May 13 00:52:06.648693 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
May 13 00:52:06.648698 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
May 13 00:52:06.648704 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
May 13 00:52:06.648709 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
May 13 00:52:06.648714 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
May 13 00:52:06.648719 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
May 13 00:52:06.648724 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
May 13 00:52:06.648729 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
May 13 00:52:06.648734 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
May 13 00:52:06.648739 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
May 13 00:52:06.648744 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
May 13 00:52:06.648749 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
May 13 00:52:06.648755 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
May 13 00:52:06.648760 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
May 13 00:52:06.648764 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
May 13 00:52:06.648770 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
May 13 00:52:06.648775 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:52:06.648780 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
May 13 00:52:06.648785 kernel: TSC deadline timer available
May 13 00:52:06.648790 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
May 13 00:52:06.648795 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
May 13 00:52:06.648801 kernel: Booting paravirtualized kernel on VMware hypervisor
May 13 00:52:06.648806 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:52:06.648811 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
May 13 00:52:06.648816 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
May 13 00:52:06.648821 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
May 13 00:52:06.648826 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
May 13 00:52:06.648831 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
May 13 00:52:06.648836 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
May 13 00:52:06.648841 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
May 13 00:52:06.648846 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
May 13 00:52:06.648851 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
May 13 00:52:06.648856 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
May 13 00:52:06.648867 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
May 13 00:52:06.648873 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
May 13 00:52:06.648878 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
May 13 00:52:06.648884 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
May 13 00:52:06.648889 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
May 13 00:52:06.648895 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
May 13 00:52:06.648900 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
May 13 00:52:06.648905 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
May 13 00:52:06.648910 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
May 13 00:52:06.648915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
May 13 00:52:06.648921 kernel: Policy zone: DMA32
May 13 00:52:06.648927 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:52:06.648932 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:52:06.648938 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 13 00:52:06.648944 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
May 13 00:52:06.648949 kernel: printk: log_buf_len min size: 262144 bytes
May 13 00:52:06.648955 kernel: printk: log_buf_len: 1048576 bytes
May 13 00:52:06.648960 kernel: printk: early log buf free: 239728(91%)
May 13 00:52:06.648966 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:52:06.648971 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 13 00:52:06.648977 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:52:06.648982 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 155976K reserved, 0K cma-reserved)
May 13 00:52:06.648988 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
May 13 00:52:06.648994 kernel: ftrace: allocating 34584 entries in 136 pages
May 13 00:52:06.648999 kernel: ftrace: allocated 136 pages with 2 groups
May 13 00:52:06.649006 kernel: rcu: Hierarchical RCU implementation.
May 13 00:52:06.649011 kernel: rcu: RCU event tracing is enabled.
May 13 00:52:06.649017 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
May 13 00:52:06.649023 kernel: Rude variant of Tasks RCU enabled.
May 13 00:52:06.649028 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:52:06.649034 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:52:06.649039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
May 13 00:52:06.649045 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
May 13 00:52:06.649050 kernel: random: crng init done
May 13 00:52:06.649055 kernel: Console: colour VGA+ 80x25
May 13 00:52:06.649060 kernel: printk: console [tty0] enabled
May 13 00:52:06.649066 kernel: printk: console [ttyS0] enabled
May 13 00:52:06.649071 kernel: ACPI: Core revision 20210730
May 13 00:52:06.649095 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
May 13 00:52:06.649101 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:52:06.649106 kernel: x2apic enabled
May 13 00:52:06.649112 kernel: Switched APIC routing to physical x2apic.
May 13 00:52:06.649117 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:52:06.649123 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
May 13 00:52:06.649129 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
May 13 00:52:06.649134 kernel: Disabled fast string operations
May 13 00:52:06.649155 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 13 00:52:06.649161 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 13 00:52:06.649167 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:52:06.649173 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
May 13 00:52:06.649178 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
May 13 00:52:06.649184 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
May 13 00:52:06.649189 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
May 13 00:52:06.649194 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
May 13 00:52:06.649200 kernel: RETBleed: Mitigation: Enhanced IBRS
May 13 00:52:06.649206 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:52:06.649211 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 13 00:52:06.649217 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 13 00:52:06.649222 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 13 00:52:06.649228 kernel: GDS: Unknown: Dependent on hypervisor status
May 13 00:52:06.649233 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:52:06.649238 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:52:06.649244 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:52:06.649249 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:52:06.649255 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 13 00:52:06.649261 kernel: Freeing SMP alternatives memory: 32K
May 13 00:52:06.649266 kernel: pid_max: default: 131072 minimum: 1024
May 13 00:52:06.649272 kernel: LSM: Security Framework initializing
May 13 00:52:06.649277 kernel: SELinux: Initializing.
May 13 00:52:06.649282 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 13 00:52:06.649288 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 13 00:52:06.649293 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
May 13 00:52:06.649299 kernel: Performance Events: Skylake events, core PMU driver.
May 13 00:52:06.649305 kernel: core: CPUID marked event: 'cpu cycles' unavailable
May 13 00:52:06.649310 kernel: core: CPUID marked event: 'instructions' unavailable
May 13 00:52:06.649316 kernel: core: CPUID marked event: 'bus cycles' unavailable
May 13 00:52:06.649321 kernel: core: CPUID marked event: 'cache references' unavailable
May 13 00:52:06.649326 kernel: core: CPUID marked event: 'cache misses' unavailable
May 13 00:52:06.649331 kernel: core: CPUID marked event: 'branch instructions' unavailable
May 13 00:52:06.649337 kernel: core: CPUID marked event: 'branch misses' unavailable
May 13 00:52:06.649342 kernel: ... version:                1
May 13 00:52:06.649347 kernel: ... bit width:              48
May 13 00:52:06.649354 kernel: ... generic registers:      4
May 13 00:52:06.649359 kernel: ... value mask:             0000ffffffffffff
May 13 00:52:06.649364 kernel: ... max period:             000000007fffffff
May 13 00:52:06.649370 kernel: ... fixed-purpose events:   0
May 13 00:52:06.649375 kernel: ... event mask:             000000000000000f
May 13 00:52:06.649380 kernel: signal: max sigframe size: 1776
May 13 00:52:06.649386 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:52:06.649391 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 13 00:52:06.649396 kernel: smp: Bringing up secondary CPUs ...
May 13 00:52:06.649403 kernel: x86: Booting SMP configuration:
May 13 00:52:06.649408 kernel: ....
node #0, CPUs: #1 May 13 00:52:06.649413 kernel: Disabled fast string operations May 13 00:52:06.649418 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 May 13 00:52:06.649424 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 May 13 00:52:06.649429 kernel: smp: Brought up 1 node, 2 CPUs May 13 00:52:06.649434 kernel: smpboot: Max logical packages: 128 May 13 00:52:06.649446 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) May 13 00:52:06.649452 kernel: devtmpfs: initialized May 13 00:52:06.649457 kernel: x86/mm: Memory block size: 128MB May 13 00:52:06.649463 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) May 13 00:52:06.649469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:52:06.649475 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 13 00:52:06.649480 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:52:06.649485 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:52:06.649491 kernel: audit: initializing netlink subsys (disabled) May 13 00:52:06.649496 kernel: audit: type=2000 audit(1747097525.057:1): state=initialized audit_enabled=0 res=1 May 13 00:52:06.649501 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:52:06.649507 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 00:52:06.649513 kernel: cpuidle: using governor menu May 13 00:52:06.649518 kernel: Simple Boot Flag at 0x36 set to 0x80 May 13 00:52:06.649524 kernel: ACPI: bus type PCI registered May 13 00:52:06.649529 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:52:06.649535 kernel: dca service started, version 1.12.1 May 13 00:52:06.649541 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) May 13 00:52:06.649546 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 May 13 00:52:06.649552 kernel: PCI: Using configuration type 1 for base access May 13 00:52:06.649557 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 13 00:52:06.649564 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:52:06.649569 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:52:06.649574 kernel: ACPI: Added _OSI(Module Device) May 13 00:52:06.649580 kernel: ACPI: Added _OSI(Processor Device) May 13 00:52:06.649585 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:52:06.649590 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:52:06.649596 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 00:52:06.649601 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 00:52:06.649606 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 00:52:06.649613 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:52:06.649618 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored May 13 00:52:06.649624 kernel: ACPI: Interpreter enabled May 13 00:52:06.649629 kernel: ACPI: PM: (supports S0 S1 S5) May 13 00:52:06.649634 kernel: ACPI: Using IOAPIC for interrupt routing May 13 00:52:06.649640 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 00:52:06.649645 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F May 13 00:52:06.649650 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) May 13 00:52:06.649719 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:52:06.649768 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] May 13 00:52:06.649812 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] May 13 00:52:06.649820 kernel: PCI host bridge to bus 0000:00 May 13 00:52:06.649865 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 
00:52:06.649905 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] May 13 00:52:06.649944 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 13 00:52:06.649986 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 00:52:06.650024 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] May 13 00:52:06.650062 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] May 13 00:52:06.650114 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 May 13 00:52:06.650164 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 May 13 00:52:06.650212 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 May 13 00:52:06.650264 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a May 13 00:52:06.650309 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] May 13 00:52:06.650373 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 00:52:06.650431 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 00:52:06.659474 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 00:52:06.659528 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 00:52:06.659582 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 May 13 00:52:06.659634 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI May 13 00:52:06.659681 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB May 13 00:52:06.659731 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 May 13 00:52:06.659778 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] May 13 00:52:06.659823 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] May 13 00:52:06.659872 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 May 13 00:52:06.659920 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] May 13 00:52:06.659965 kernel: pci 0000:00:0f.0: reg 0x14: [mem 
0xe8000000-0xefffffff pref] May 13 00:52:06.660009 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] May 13 00:52:06.660053 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] May 13 00:52:06.660097 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 00:52:06.660145 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 May 13 00:52:06.660193 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.660279 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold May 13 00:52:06.660340 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.660390 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold May 13 00:52:06.660466 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.660524 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold May 13 00:52:06.660577 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.660625 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold May 13 00:52:06.660674 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.660720 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold May 13 00:52:06.660772 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.660818 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold May 13 00:52:06.660868 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.660916 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold May 13 00:52:06.660965 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.661010 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold May 13 00:52:06.661059 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.661106 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold May 13 00:52:06.661155 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 
0x060400 May 13 00:52:06.661203 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold May 13 00:52:06.661251 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.661297 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold May 13 00:52:06.661345 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.661391 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold May 13 00:52:06.663084 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663153 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold May 13 00:52:06.663210 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663258 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold May 13 00:52:06.663309 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663356 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold May 13 00:52:06.663406 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663511 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold May 13 00:52:06.663563 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663609 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold May 13 00:52:06.663659 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663705 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold May 13 00:52:06.663754 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663803 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold May 13 00:52:06.663852 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663897 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold May 13 00:52:06.663944 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.663990 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold May 13 00:52:06.664060 kernel: pci 
0000:00:17.5: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664108 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold May 13 00:52:06.664161 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664207 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold May 13 00:52:06.664274 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664319 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold May 13 00:52:06.664367 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664412 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold May 13 00:52:06.664470 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664517 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold May 13 00:52:06.664565 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664612 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold May 13 00:52:06.664661 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664708 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold May 13 00:52:06.664759 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664805 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold May 13 00:52:06.664853 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664900 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold May 13 00:52:06.664950 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.664996 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold May 13 00:52:06.665052 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 May 13 00:52:06.665098 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold May 13 00:52:06.665147 kernel: pci_bus 0000:01: extended config space not accessible May 13 00:52:06.665195 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 
00:52:06.665243 kernel: pci_bus 0000:02: extended config space not accessible May 13 00:52:06.665252 kernel: acpiphp: Slot [32] registered May 13 00:52:06.665258 kernel: acpiphp: Slot [33] registered May 13 00:52:06.665265 kernel: acpiphp: Slot [34] registered May 13 00:52:06.665271 kernel: acpiphp: Slot [35] registered May 13 00:52:06.665276 kernel: acpiphp: Slot [36] registered May 13 00:52:06.665282 kernel: acpiphp: Slot [37] registered May 13 00:52:06.665288 kernel: acpiphp: Slot [38] registered May 13 00:52:06.665293 kernel: acpiphp: Slot [39] registered May 13 00:52:06.665299 kernel: acpiphp: Slot [40] registered May 13 00:52:06.665304 kernel: acpiphp: Slot [41] registered May 13 00:52:06.665310 kernel: acpiphp: Slot [42] registered May 13 00:52:06.665316 kernel: acpiphp: Slot [43] registered May 13 00:52:06.665322 kernel: acpiphp: Slot [44] registered May 13 00:52:06.665328 kernel: acpiphp: Slot [45] registered May 13 00:52:06.665333 kernel: acpiphp: Slot [46] registered May 13 00:52:06.665339 kernel: acpiphp: Slot [47] registered May 13 00:52:06.665344 kernel: acpiphp: Slot [48] registered May 13 00:52:06.665350 kernel: acpiphp: Slot [49] registered May 13 00:52:06.665356 kernel: acpiphp: Slot [50] registered May 13 00:52:06.665361 kernel: acpiphp: Slot [51] registered May 13 00:52:06.665367 kernel: acpiphp: Slot [52] registered May 13 00:52:06.665373 kernel: acpiphp: Slot [53] registered May 13 00:52:06.665379 kernel: acpiphp: Slot [54] registered May 13 00:52:06.665385 kernel: acpiphp: Slot [55] registered May 13 00:52:06.665391 kernel: acpiphp: Slot [56] registered May 13 00:52:06.665396 kernel: acpiphp: Slot [57] registered May 13 00:52:06.665402 kernel: acpiphp: Slot [58] registered May 13 00:52:06.665407 kernel: acpiphp: Slot [59] registered May 13 00:52:06.665413 kernel: acpiphp: Slot [60] registered May 13 00:52:06.665418 kernel: acpiphp: Slot [61] registered May 13 00:52:06.665425 kernel: acpiphp: Slot [62] registered May 13 00:52:06.665430 kernel: 
acpiphp: Slot [63] registered May 13 00:52:06.665483 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) May 13 00:52:06.665530 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:52:06.665575 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:52:06.665621 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:52:06.665666 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) May 13 00:52:06.665711 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) May 13 00:52:06.665758 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) May 13 00:52:06.665803 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) May 13 00:52:06.665848 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) May 13 00:52:06.665899 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 May 13 00:52:06.665946 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] May 13 00:52:06.665993 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] May 13 00:52:06.666039 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:52:06.666087 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold May 13 00:52:06.666134 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:52:06.666180 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 00:52:06.666225 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:52:06.666270 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:52:06.666316 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:52:06.666361 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] May 13 00:52:06.666406 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:52:06.666465 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:52:06.666514 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:52:06.673365 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:52:06.673427 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:52:06.673491 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:52:06.673546 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:52:06.673594 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:52:06.673643 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:52:06.673689 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:52:06.673733 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:52:06.673778 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:52:06.673824 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:52:06.673872 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:52:06.673916 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:52:06.673963 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:52:06.674007 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:52:06.674087 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:52:06.674133 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:52:06.674178 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:52:06.674223 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:52:06.674278 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 May 13 00:52:06.674326 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] May 13 00:52:06.674372 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] May 13 00:52:06.674419 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] May 13 00:52:06.674492 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] May 13 00:52:06.674540 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] May 13 00:52:06.674587 kernel: pci 0000:0b:00.0: supports D1 D2 May 13 00:52:06.674638 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 13 00:52:06.674685 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' May 13 00:52:06.674731 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:52:06.674778 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:52:06.674823 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:52:06.674871 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:52:06.674916 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:52:06.674961 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:52:06.675009 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:52:06.675061 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:52:06.675106 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:52:06.675153 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:52:06.675200 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:52:06.675248 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:52:06.675294 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:52:06.675342 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:52:06.675389 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:52:06.675435 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:52:06.675491 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:52:06.675540 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:52:06.675586 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:52:06.675633 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:52:06.675679 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:52:06.675726 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 00:52:06.675774 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:52:06.675821 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:52:06.675867 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:52:06.675912 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:52:06.675959 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:52:06.676006 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:52:06.676052 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:52:06.676097 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:52:06.676146 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:52:06.676192 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:52:06.676238 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:52:06.676285 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:52:06.676332 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:52:06.676377 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:52:06.676423 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:52:06.676705 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:52:06.676760 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:52:06.676808 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:52:06.676855 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:52:06.676902 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:52:06.676950 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:52:06.676996 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:52:06.677043 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] May 13 00:52:06.677093 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:52:06.677139 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:52:06.677187 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:52:06.677234 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:52:06.677279 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:52:06.677327 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:52:06.677372 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:52:06.677418 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:52:06.677476 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:52:06.677523 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:52:06.677569 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:52:06.677614 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:52:06.677661 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:52:06.677708 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:52:06.677754 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:52:06.677800 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:52:06.677850 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:52:06.677897 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:52:06.677942 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:52:06.677989 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:52:06.678044 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:52:06.678093 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:52:06.678141 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] May 13 
00:52:06.678190 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:52:06.678235 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:52:06.678282 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:52:06.678329 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:52:06.678375 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:52:06.678423 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:52:06.678477 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:52:06.678523 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:52:06.678573 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:52:06.678619 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:52:06.678664 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:52:06.678672 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 May 13 00:52:06.678678 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 May 13 00:52:06.678684 kernel: ACPI: PCI: Interrupt link LNKB disabled May 13 00:52:06.678690 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 00:52:06.678696 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 May 13 00:52:06.678702 kernel: iommu: Default domain type: Translated May 13 00:52:06.678709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 00:52:06.678778 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device May 13 00:52:06.679099 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 00:52:06.679154 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible May 13 00:52:06.679163 kernel: vgaarb: loaded May 13 00:52:06.679170 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 00:52:06.679176 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 00:52:06.679182 kernel: PTP clock support registered May 13 00:52:06.679188 kernel: PCI: Using ACPI for IRQ routing May 13 00:52:06.679196 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 00:52:06.679202 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] May 13 00:52:06.679208 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] May 13 00:52:06.679213 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 May 13 00:52:06.679219 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter May 13 00:52:06.679225 kernel: clocksource: Switched to clocksource tsc-early May 13 00:52:06.679231 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:52:06.679237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:52:06.679243 kernel: pnp: PnP ACPI init May 13 00:52:06.684161 kernel: system 00:00: [io 0x1000-0x103f] has been reserved May 13 00:52:06.684219 kernel: system 00:00: [io 0x1040-0x104f] has been reserved May 13 00:52:06.684262 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved May 13 00:52:06.684308 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved May 13 00:52:06.684352 kernel: pnp 00:06: [dma 2] May 13 00:52:06.684398 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved May 13 00:52:06.684567 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved May 13 00:52:06.684610 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved May 13 00:52:06.684618 kernel: pnp: PnP ACPI: found 8 devices May 13 00:52:06.684624 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 00:52:06.684630 kernel: NET: Registered PF_INET protocol family May 13 00:52:06.684636 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:52:06.684642 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
May 13 00:52:06.684648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:52:06.684655 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 00:52:06.684661 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) May 13 00:52:06.684667 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 13 00:52:06.684689 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:52:06.684695 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 13 00:52:06.684701 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:52:06.684722 kernel: NET: Registered PF_XDP protocol family May 13 00:52:06.684770 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 13 00:52:06.684818 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 13 00:52:06.684867 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 13 00:52:06.684912 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 13 00:52:06.684957 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 13 00:52:06.685004 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 May 13 00:52:06.685049 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 May 13 00:52:06.685096 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 May 13 00:52:06.685141 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 May 13 00:52:06.685359 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 May 13 00:52:06.685415 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 May 13 00:52:06.685471 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 May 13 00:52:06.685517 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 May 13 00:52:06.685565 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 May 13 00:52:06.685611 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 May 13 00:52:06.685656 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 May 13 00:52:06.685701 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 May 13 00:52:06.686025 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 May 13 00:52:06.686076 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 May 13 00:52:06.686125 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 May 13 00:52:06.686172 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 May 13 00:52:06.686234 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 May 13 00:52:06.686281 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 May 13 00:52:06.686326 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:52:06.686371 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:52:06.686418 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:52:06.686484 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.686531 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.686575 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.686620 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.686664 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.686709 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.686753 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.686800 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.686844 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.686888 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] May 13 00:52:06.686933 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.686978 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687022 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687066 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687111 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687158 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687201 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687247 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687291 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687335 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687379 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687423 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687498 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687562 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687606 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687651 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687695 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] May 13 00:52:06.687740 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687785 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.687829 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:52:06.687873 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688078 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688133 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688180 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688225 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688271 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688315 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688360 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688405 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688462 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688510 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688555 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688598 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688642 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688686 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688730 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688774 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688819 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688864 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688911 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] May 13 00:52:06.688955 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.688999 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689048 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689092 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689136 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689181 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689225 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689269 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689315 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689358 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689403 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689456 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689502 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689547 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689628 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689675 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.689721 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.689985 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.690039 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.690087 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.690139 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.690185 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] May 13 00:52:06.690231 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.690277 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] May 13 00:52:06.690324 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.690369 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] May 13 00:52:06.690415 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.690796 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] May 13 00:52:06.690849 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.690896 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] May 13 00:52:06.690942 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.690987 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] May 13 00:52:06.691032 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] May 13 00:52:06.691079 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 13 00:52:06.691411 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] May 13 00:52:06.691470 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] May 13 00:52:06.691753 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] May 13 00:52:06.691808 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:52:06.691860 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] May 13 00:52:06.691917 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] May 13 00:52:06.691966 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] May 13 00:52:06.692012 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] May 13 00:52:06.692099 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:52:06.692146 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] May 13 00:52:06.692193 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] May 13 00:52:06.692241 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] May 13 00:52:06.692286 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:52:06.692333 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] May 13 00:52:06.692379 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] May 13 00:52:06.692423 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] May 13 00:52:06.692694 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:52:06.692746 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] May 13 00:52:06.692794 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] May 13 00:52:06.692841 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:52:06.692894 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] May 13 00:52:06.692941 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] May 13 00:52:06.692996 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:52:06.693046 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] May 13 00:52:06.693091 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] May 13 00:52:06.693137 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:52:06.693184 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] May 13 00:52:06.693228 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] May 13 00:52:06.693272 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:52:06.693317 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] May 13 00:52:06.693362 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] May 13 00:52:06.693406 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:52:06.693463 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] May 13 
00:52:06.693510 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] May 13 00:52:06.693555 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] May 13 00:52:06.693602 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] May 13 00:52:06.693648 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:52:06.693710 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] May 13 00:52:06.693766 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] May 13 00:52:06.693813 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] May 13 00:52:06.693858 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:52:06.693904 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] May 13 00:52:06.693949 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] May 13 00:52:06.693994 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] May 13 00:52:06.694061 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:52:06.694126 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] May 13 00:52:06.694171 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] May 13 00:52:06.694216 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:52:06.694261 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] May 13 00:52:06.694307 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] May 13 00:52:06.694352 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:52:06.694398 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] May 13 00:52:06.694452 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] May 13 00:52:06.694502 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:52:06.694570 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] May 13 00:52:06.694630 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] May 13 
00:52:06.694676 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:52:06.694721 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] May 13 00:52:06.694767 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] May 13 00:52:06.694812 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:52:06.694858 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] May 13 00:52:06.694903 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] May 13 00:52:06.694949 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] May 13 00:52:06.694997 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:52:06.695048 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] May 13 00:52:06.695094 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] May 13 00:52:06.695140 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] May 13 00:52:06.695185 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:52:06.695232 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] May 13 00:52:06.695277 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] May 13 00:52:06.695323 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] May 13 00:52:06.695368 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:52:06.695413 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] May 13 00:52:06.695468 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] May 13 00:52:06.695514 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:52:06.695559 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] May 13 00:52:06.695604 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] May 13 00:52:06.695649 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:52:06.695693 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] May 13 00:52:06.695738 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] May 13 00:52:06.695783 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:52:06.695829 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] May 13 00:52:06.695877 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] May 13 00:52:06.695923 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:52:06.695968 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] May 13 00:52:06.696013 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] May 13 00:52:06.696058 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:52:06.696104 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] May 13 00:52:06.696150 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] May 13 00:52:06.696195 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] May 13 00:52:06.696240 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:52:06.696285 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] May 13 00:52:06.696334 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] May 13 00:52:06.696378 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] May 13 00:52:06.696424 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:52:06.696689 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] May 13 00:52:06.696742 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] May 13 00:52:06.696790 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:52:06.696856 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] May 13 00:52:06.697113 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] May 13 00:52:06.697168 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:52:06.697220 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] May 13 00:52:06.697283 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] May 13 00:52:06.697649 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:52:06.697921 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] May 13 00:52:06.697977 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] May 13 00:52:06.698030 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:52:06.698082 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] May 13 00:52:06.698130 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] May 13 00:52:06.698176 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:52:06.698222 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] May 13 00:52:06.698271 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] May 13 00:52:06.698316 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:52:06.698362 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:52:06.698404 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:52:06.698718 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] May 13 00:52:06.698769 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] May 13 00:52:06.698811 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] May 13 00:52:06.698856 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] May 13 00:52:06.698912 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] May 13 00:52:06.698957 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] May 13 00:52:06.698999 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] May 13 00:52:06.699041 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] May 13 00:52:06.699083 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] May 13 00:52:06.699125 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] May 13 00:52:06.699166 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] May 13 00:52:06.699215 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] May 13 00:52:06.699258 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] May 13 00:52:06.699300 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] May 13 00:52:06.699346 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] May 13 00:52:06.699388 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] May 13 00:52:06.699431 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] May 13 00:52:06.699492 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] May 13 00:52:06.699539 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] May 13 00:52:06.699580 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] May 13 00:52:06.699627 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] May 13 00:52:06.699688 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] May 13 00:52:06.699734 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] May 13 00:52:06.699777 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] May 13 00:52:06.699823 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] May 13 00:52:06.699868 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] May 13 00:52:06.699915 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] May 13 00:52:06.699958 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] May 13 00:52:06.700004 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] May 13 00:52:06.700047 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] May 13 00:52:06.700097 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] May 13 00:52:06.700141 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] May 13 00:52:06.700184 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] May 13 00:52:06.700232 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] May 13 00:52:06.700276 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] May 13 00:52:06.700338 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] May 13 00:52:06.700392 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] May 13 00:52:06.700651 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] May 13 00:52:06.700708 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] May 13 00:52:06.700758 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] May 13 00:52:06.700802 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] May 13 00:52:06.700868 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] May 13 00:52:06.701121 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] May 13 00:52:06.701176 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] May 13 00:52:06.701222 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] May 13 00:52:06.701289 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] May 13 00:52:06.701580 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] May 13 00:52:06.701636 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] May 13 00:52:06.701885 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] May 13 00:52:06.701949 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] May 13 00:52:06.701996 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] May 13 00:52:06.702076 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] May 13 00:52:06.702132 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] May 13 00:52:06.702177 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] May 13 00:52:06.702220 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] May 13 00:52:06.702266 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] May 13 00:52:06.702311 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] May 13 00:52:06.702353 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] May 13 00:52:06.702399 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] May 13 00:52:06.702452 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] May 13 00:52:06.702503 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] May 13 00:52:06.702545 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] May 13 00:52:06.702593 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] May 13 00:52:06.702638 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] May 13 00:52:06.702684 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] May 13 00:52:06.702726 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] May 13 00:52:06.702770 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] May 13 00:52:06.702812 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] May 13 00:52:06.702858 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] May 13 00:52:06.702902 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] May 13 00:52:06.702944 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] May 13 00:52:06.702990 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] May 13 00:52:06.703032 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] May 13 00:52:06.703073 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] May 13 00:52:06.703118 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] May 13 00:52:06.703162 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] May 13 00:52:06.703211 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] May 13 00:52:06.703254 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] May 13 00:52:06.703300 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] May 13 00:52:06.703342 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] May 13 00:52:06.703387 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] May 13 00:52:06.703432 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] May 13 00:52:06.703745 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] May 13 00:52:06.703792 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] May 13 00:52:06.703838 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] May 13 00:52:06.703881 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] May 13 00:52:06.703931 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 00:52:06.703942 kernel: PCI: CLS 32 bytes, default 64 May 13 00:52:06.703949 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 00:52:06.703955 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns May 13 00:52:06.703961 kernel: clocksource: Switched to clocksource tsc May 13 00:52:06.703967 kernel: Initialise system trusted keyrings May 13 00:52:06.703972 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 13 00:52:06.703979 kernel: Key type asymmetric registered May 13 00:52:06.703984 kernel: Asymmetric key parser 'x509' registered May 13 00:52:06.703990 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 00:52:06.703997 kernel: io scheduler mq-deadline registered May 13 00:52:06.704003 kernel: io scheduler kyber registered May 13 00:52:06.704009 kernel: io scheduler bfq 
registered May 13 00:52:06.704098 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 May 13 00:52:06.704145 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.704191 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 May 13 00:52:06.704238 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.704283 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 May 13 00:52:06.704330 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.704376 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 May 13 00:52:06.704421 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.704482 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 May 13 00:52:06.704529 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.704576 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 May 13 00:52:06.704624 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.704670 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 May 13 00:52:06.704715 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.704761 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 May 13 00:52:06.704809 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.705084 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 May 13 00:52:06.705178 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.705296 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 May 13 00:52:06.705358 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.705428 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 May 13 00:52:06.705755 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.706013 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 May 13 00:52:06.706068 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.706115 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 May 13 00:52:06.706162 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.706214 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 May 13 00:52:06.706260 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.706308 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 May 13 00:52:06.706354 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.706399 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 May 13 00:52:06.706692 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.706749 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 May 13 00:52:06.707042 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.707098 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 May 13 00:52:06.707146 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.707193 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 May 13 00:52:06.707244 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.707290 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 May 13 00:52:06.707603 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.707867 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 May 13 00:52:06.707924 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.707974 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 May 13 00:52:06.708022 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.708069 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 May 13 00:52:06.708115 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.708164 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 May 13 00:52:06.708210 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.708257 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 May 13 00:52:06.708303 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.708360 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 May 13 00:52:06.708417 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.708738 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 May 13 00:52:06.708791 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.708840 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 May 13 00:52:06.709180 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.709235 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 May 13 00:52:06.709285 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.709627 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 May 13 00:52:06.709682 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.709732 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 May 13 00:52:06.709780 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.709828 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 May 13 00:52:06.709879 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ May 13 00:52:06.709889 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 00:52:06.709896 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:52:06.709902 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 00:52:06.709908 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 May 13 00:52:06.709915 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 00:52:06.709921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 00:52:06.709968 kernel: rtc_cmos 00:01: registered as rtc0 May 13 00:52:06.710018 kernel: rtc_cmos 00:01: setting system clock to 2025-05-13T00:52:06 UTC (1747097526) May 13 00:52:06.710062 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram May 13 00:52:06.710070 kernel: intel_pstate: CPU model not supported May 13 00:52:06.710077 kernel: NET: Registered PF_INET6 protocol family May 13 00:52:06.710083 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 00:52:06.710091 kernel: Segment Routing with IPv6 May 13 00:52:06.710098 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:52:06.710104 kernel: NET: Registered PF_PACKET protocol family May 13 00:52:06.710111 kernel: Key type dns_resolver registered May 13 00:52:06.710117 kernel: IPI shorthand broadcast: enabled May 13 00:52:06.710123 kernel: sched_clock: Marking stable (809040793, 214081828)->(1083669384, -60546763) May 13 00:52:06.710129 kernel: registered taskstats version 1 May 13 00:52:06.710135 kernel: Loading compiled-in X.509 certificates May 13 00:52:06.710142 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 00:52:06.710148 kernel: Key type .fscrypt registered May 13 00:52:06.710154 kernel: Key type fscrypt-provisioning 
registered May 13 00:52:06.710161 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 00:52:06.710167 kernel: ima: Allocated hash algorithm: sha1 May 13 00:52:06.710173 kernel: ima: No architecture policies found May 13 00:52:06.710179 kernel: clk: Disabling unused clocks May 13 00:52:06.710185 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 00:52:06.710191 kernel: Write protecting the kernel read-only data: 28672k May 13 00:52:06.710197 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 00:52:06.710203 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 00:52:06.710209 kernel: Run /init as init process May 13 00:52:06.710217 kernel: with arguments: May 13 00:52:06.710223 kernel: /init May 13 00:52:06.710229 kernel: with environment: May 13 00:52:06.710235 kernel: HOME=/ May 13 00:52:06.710241 kernel: TERM=linux May 13 00:52:06.710247 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:52:06.710254 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 00:52:06.710262 systemd[1]: Detected virtualization vmware. May 13 00:52:06.710268 systemd[1]: Detected architecture x86-64. May 13 00:52:06.710275 systemd[1]: Running in initrd. May 13 00:52:06.710282 systemd[1]: No hostname configured, using default hostname. May 13 00:52:06.710288 systemd[1]: Hostname set to <localhost>. May 13 00:52:06.710294 systemd[1]: Initializing machine ID from random generator. May 13 00:52:06.710300 systemd[1]: Queued start job for default target initrd.target. May 13 00:52:06.710306 systemd[1]: Started systemd-ask-password-console.path. May 13 00:52:06.710313 systemd[1]: Reached target cryptsetup.target. 
May 13 00:52:06.710319 systemd[1]: Reached target paths.target. May 13 00:52:06.710326 systemd[1]: Reached target slices.target. May 13 00:52:06.710332 systemd[1]: Reached target swap.target. May 13 00:52:06.710338 systemd[1]: Reached target timers.target. May 13 00:52:06.710345 systemd[1]: Listening on iscsid.socket. May 13 00:52:06.710351 systemd[1]: Listening on iscsiuio.socket. May 13 00:52:06.710358 systemd[1]: Listening on systemd-journald-audit.socket. May 13 00:52:06.710612 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 00:52:06.710620 systemd[1]: Listening on systemd-journald.socket. May 13 00:52:06.710629 systemd[1]: Listening on systemd-networkd.socket. May 13 00:52:06.710635 systemd[1]: Listening on systemd-udevd-control.socket. May 13 00:52:06.710641 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 00:52:06.710648 systemd[1]: Reached target sockets.target. May 13 00:52:06.710654 systemd[1]: Starting kmod-static-nodes.service... May 13 00:52:06.710660 systemd[1]: Finished network-cleanup.service. May 13 00:52:06.710667 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:52:06.710673 systemd[1]: Starting systemd-journald.service... May 13 00:52:06.710679 systemd[1]: Starting systemd-modules-load.service... May 13 00:52:06.710687 systemd[1]: Starting systemd-resolved.service... May 13 00:52:06.710693 systemd[1]: Starting systemd-vconsole-setup.service... May 13 00:52:06.710699 systemd[1]: Finished kmod-static-nodes.service. May 13 00:52:06.710706 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:52:06.710712 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 00:52:06.710718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 00:52:06.710725 kernel: audit: type=1130 audit(1747097526.650:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:06.710733 systemd[1]: Finished systemd-vconsole-setup.service. May 13 00:52:06.710740 kernel: audit: type=1130 audit(1747097526.654:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.710747 systemd[1]: Starting dracut-cmdline-ask.service... May 13 00:52:06.710753 systemd[1]: Started systemd-resolved.service. May 13 00:52:06.710759 systemd[1]: Reached target nss-lookup.target. May 13 00:52:06.710766 kernel: audit: type=1130 audit(1747097526.667:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.710772 systemd[1]: Finished dracut-cmdline-ask.service. May 13 00:52:06.710778 systemd[1]: Starting dracut-cmdline.service... May 13 00:52:06.710786 kernel: audit: type=1130 audit(1747097526.678:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.710796 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:52:06.710803 kernel: Bridge firewalling registered May 13 00:52:06.710809 kernel: SCSI subsystem initialized May 13 00:52:06.710818 systemd-journald[216]: Journal started May 13 00:52:06.710853 systemd-journald[216]: Runtime Journal (/run/log/journal/95eba71d1a384a259d8386851671ce01) is 4.8M, max 38.8M, 34.0M free. May 13 00:52:06.714279 systemd[1]: Started systemd-journald.service. May 13 00:52:06.714297 kernel: audit: type=1130 audit(1747097526.710:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:06.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.650510 systemd-modules-load[217]: Inserted module 'overlay' May 13 00:52:06.665653 systemd-resolved[218]: Positive Trust Anchors: May 13 00:52:06.665660 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:52:06.665680 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:52:06.667366 systemd-resolved[218]: Defaulting to hostname 'linux'. 
May 13 00:52:06.692984 systemd-modules-load[217]: Inserted module 'br_netfilter' May 13 00:52:06.716767 dracut-cmdline[231]: dracut-dracut-053 May 13 00:52:06.716767 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA May 13 00:52:06.716767 dracut-cmdline[231]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 00:52:06.721015 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:52:06.721027 kernel: device-mapper: uevent: version 1.0.3 May 13 00:52:06.721035 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 00:52:06.726475 kernel: audit: type=1130 audit(1747097526.722:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.723128 systemd-modules-load[217]: Inserted module 'dm_multipath' May 13 00:52:06.723492 systemd[1]: Finished systemd-modules-load.service. May 13 00:52:06.726128 systemd[1]: Starting systemd-sysctl.service... May 13 00:52:06.730412 systemd[1]: Finished systemd-sysctl.service. May 13 00:52:06.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:06.733555 kernel: audit: type=1130 audit(1747097526.729:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.745455 kernel: Loading iSCSI transport class v2.0-870. May 13 00:52:06.756451 kernel: iscsi: registered transport (tcp) May 13 00:52:06.771452 kernel: iscsi: registered transport (qla4xxx) May 13 00:52:06.771469 kernel: QLogic iSCSI HBA Driver May 13 00:52:06.787391 systemd[1]: Finished dracut-cmdline.service. May 13 00:52:06.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.788001 systemd[1]: Starting dracut-pre-udev.service... May 13 00:52:06.791305 kernel: audit: type=1130 audit(1747097526.786:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:06.824457 kernel: raid6: avx2x4 gen() 48355 MB/s May 13 00:52:06.841451 kernel: raid6: avx2x4 xor() 20548 MB/s May 13 00:52:06.858453 kernel: raid6: avx2x2 gen() 53460 MB/s May 13 00:52:06.875454 kernel: raid6: avx2x2 xor() 32002 MB/s May 13 00:52:06.892450 kernel: raid6: avx2x1 gen() 45348 MB/s May 13 00:52:06.909450 kernel: raid6: avx2x1 xor() 27893 MB/s May 13 00:52:06.926453 kernel: raid6: sse2x4 gen() 21414 MB/s May 13 00:52:06.943451 kernel: raid6: sse2x4 xor() 11838 MB/s May 13 00:52:06.960449 kernel: raid6: sse2x2 gen() 21673 MB/s May 13 00:52:06.977451 kernel: raid6: sse2x2 xor() 13315 MB/s May 13 00:52:06.994459 kernel: raid6: sse2x1 gen() 18070 MB/s May 13 00:52:07.011631 kernel: raid6: sse2x1 xor() 8964 MB/s May 13 00:52:07.011673 kernel: raid6: using algorithm avx2x2 gen() 53460 MB/s May 13 00:52:07.011681 kernel: raid6: .... 
xor() 32002 MB/s, rmw enabled May 13 00:52:07.012824 kernel: raid6: using avx2x2 recovery algorithm May 13 00:52:07.021457 kernel: xor: automatically using best checksumming function avx May 13 00:52:07.080457 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 00:52:07.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:07.087479 kernel: audit: type=1130 audit(1747097527.083:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:07.086000 audit: BPF prog-id=7 op=LOAD May 13 00:52:07.086000 audit: BPF prog-id=8 op=LOAD May 13 00:52:07.084661 systemd[1]: Finished dracut-pre-udev.service. May 13 00:52:07.087345 systemd[1]: Starting systemd-udevd.service... May 13 00:52:07.095104 systemd-udevd[414]: Using default interface naming scheme 'v252'. May 13 00:52:07.097699 systemd[1]: Started systemd-udevd.service. May 13 00:52:07.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:07.100637 systemd[1]: Starting dracut-pre-trigger.service... May 13 00:52:07.106419 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation May 13 00:52:07.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:07.122407 systemd[1]: Finished dracut-pre-trigger.service. May 13 00:52:07.122914 systemd[1]: Starting systemd-udev-trigger.service... 
May 13 00:52:07.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:07.184455 systemd[1]: Finished systemd-udev-trigger.service. May 13 00:52:07.242167 kernel: VMware PVSCSI driver - version 1.0.7.0-k May 13 00:52:07.242200 kernel: vmw_pvscsi: using 64bit dma May 13 00:52:07.242208 kernel: vmw_pvscsi: max_id: 16 May 13 00:52:07.242215 kernel: vmw_pvscsi: setting ring_pages to 8 May 13 00:52:07.245452 kernel: libata version 3.00 loaded. May 13 00:52:07.255453 kernel: ata_piix 0000:00:07.1: version 2.13 May 13 00:52:07.266154 kernel: scsi host0: ata_piix May 13 00:52:07.266223 kernel: scsi host1: ata_piix May 13 00:52:07.266282 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 May 13 00:52:07.266290 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 May 13 00:52:07.266836 kernel: cryptd: max_cpu_qlen set to 1000 May 13 00:52:07.267998 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI May 13 00:52:07.270452 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 May 13 00:52:07.271436 kernel: vmw_pvscsi: enabling reqCallThreshold May 13 00:52:07.271456 kernel: vmw_pvscsi: driver-based request coalescing enabled May 13 00:52:07.271464 kernel: vmw_pvscsi: using MSI-X May 13 00:52:07.271477 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 May 13 00:52:07.271561 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2 May 13 00:52:07.271630 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps May 13 00:52:07.271707 kernel: scsi 2:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 May 13 00:52:07.435507 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 May 13 00:52:07.441455 kernel: scsi 1:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 
0 ANSI: 5 May 13 00:52:07.447450 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 May 13 00:52:07.451453 kernel: AVX2 version of gcm_enc/dec engaged. May 13 00:52:07.455468 kernel: AES CTR mode by8 optimization enabled May 13 00:52:07.462840 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) May 13 00:52:07.467923 kernel: sd 2:0:0:0: [sda] Write Protect is off May 13 00:52:07.467989 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00 May 13 00:52:07.468055 kernel: sd 2:0:0:0: [sda] Cache data unavailable May 13 00:52:07.468110 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through May 13 00:52:07.468164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:52:07.468172 kernel: sd 2:0:0:0: [sda] Attached SCSI disk May 13 00:52:07.477467 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray May 13 00:52:07.494567 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 00:52:07.494577 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 13 00:52:07.516528 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 00:52:07.520770 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 00:52:07.521016 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 00:52:07.521453 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (474) May 13 00:52:07.521730 systemd[1]: Starting disk-uuid.service... May 13 00:52:07.523853 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 00:52:07.528911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 00:52:07.551455 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:52:07.556460 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:52:08.560359 disk-uuid[548]: The operation has completed successfully. 
May 13 00:52:08.560601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 00:52:08.592220 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:52:08.592507 systemd[1]: Finished disk-uuid.service. May 13 00:52:08.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.593222 systemd[1]: Starting verity-setup.service... May 13 00:52:08.602459 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 00:52:08.643649 systemd[1]: Found device dev-mapper-usr.device. May 13 00:52:08.644326 systemd[1]: Mounting sysusr-usr.mount... May 13 00:52:08.644721 systemd[1]: Finished verity-setup.service. May 13 00:52:08.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.695457 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 00:52:08.695511 systemd[1]: Mounted sysusr-usr.mount. May 13 00:52:08.696248 systemd[1]: Starting afterburn-network-kargs.service... May 13 00:52:08.696871 systemd[1]: Starting ignition-setup.service... May 13 00:52:08.710060 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:52:08.710090 kernel: BTRFS info (device sda6): using free space tree May 13 00:52:08.710102 kernel: BTRFS info (device sda6): has skinny extents May 13 00:52:08.716452 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:52:08.723020 systemd[1]: mnt-oem.mount: Deactivated successfully. 
May 13 00:52:08.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.728705 systemd[1]: Finished ignition-setup.service. May 13 00:52:08.729324 systemd[1]: Starting ignition-fetch-offline.service... May 13 00:52:08.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.776727 systemd[1]: Finished afterburn-network-kargs.service. May 13 00:52:08.777352 systemd[1]: Starting parse-ip-for-networkd.service... May 13 00:52:08.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.826000 audit: BPF prog-id=9 op=LOAD May 13 00:52:08.827847 systemd[1]: Finished parse-ip-for-networkd.service. May 13 00:52:08.828752 systemd[1]: Starting systemd-networkd.service... May 13 00:52:08.842737 systemd-networkd[734]: lo: Link UP May 13 00:52:08.842742 systemd-networkd[734]: lo: Gained carrier May 13 00:52:08.843005 systemd-networkd[734]: Enumeration completed May 13 00:52:08.843201 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. May 13 00:52:08.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.843391 systemd[1]: Started systemd-networkd.service. May 13 00:52:08.843540 systemd[1]: Reached target network.target. May 13 00:52:08.844206 systemd[1]: Starting iscsiuio.service... 
May 13 00:52:08.847923 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated May 13 00:52:08.848064 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps May 13 00:52:08.848186 systemd[1]: Started iscsiuio.service. May 13 00:52:08.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.849281 systemd-networkd[734]: ens192: Link UP May 13 00:52:08.849285 systemd-networkd[734]: ens192: Gained carrier May 13 00:52:08.849789 systemd[1]: Starting iscsid.service... May 13 00:52:08.851509 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 00:52:08.851509 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 13 00:52:08.851509 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 00:52:08.851509 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 00:52:08.851509 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 00:52:08.851509 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 00:52:08.852488 systemd[1]: Started iscsid.service. May 13 00:52:08.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.853258 systemd[1]: Starting dracut-initqueue.service... 
May 13 00:52:08.855603 ignition[606]: Ignition 2.14.0 May 13 00:52:08.855611 ignition[606]: Stage: fetch-offline May 13 00:52:08.855645 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 00:52:08.855660 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 13 00:52:08.859233 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:52:08.859311 ignition[606]: parsed url from cmdline: "" May 13 00:52:08.859313 ignition[606]: no config URL provided May 13 00:52:08.859316 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:52:08.859320 ignition[606]: no config at "/usr/lib/ignition/user.ign" May 13 00:52:08.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.861140 systemd[1]: Finished dracut-initqueue.service. May 13 00:52:08.861294 systemd[1]: Reached target remote-fs-pre.target. May 13 00:52:08.861385 systemd[1]: Reached target remote-cryptsetup.target. May 13 00:52:08.861489 systemd[1]: Reached target remote-fs.target. May 13 00:52:08.862021 systemd[1]: Starting dracut-pre-mount.service... May 13 00:52:08.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.867044 systemd[1]: Finished dracut-pre-mount.service. 
May 13 00:52:08.868277 ignition[606]: config successfully fetched May 13 00:52:08.868406 ignition[606]: parsing config with SHA512: 393471e2d2fb0998bcd7fafd1ef2d8bf5d8e1b88bd257983545e47d39395b4a12db22a7c141c8e283558b0dfeb5ea3ca51a64df2e2842500c1c9d07a55da82b9 May 13 00:52:08.873326 unknown[606]: fetched base config from "system" May 13 00:52:08.873332 unknown[606]: fetched user config from "vmware" May 13 00:52:08.873652 ignition[606]: fetch-offline: fetch-offline passed May 13 00:52:08.873693 ignition[606]: Ignition finished successfully May 13 00:52:08.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.874530 systemd[1]: Finished ignition-fetch-offline.service. May 13 00:52:08.874713 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:52:08.875182 systemd[1]: Starting ignition-kargs.service... May 13 00:52:08.880516 ignition[754]: Ignition 2.14.0 May 13 00:52:08.880523 ignition[754]: Stage: kargs May 13 00:52:08.880585 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 00:52:08.880595 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 13 00:52:08.881887 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:52:08.883276 ignition[754]: kargs: kargs passed May 13 00:52:08.883304 ignition[754]: Ignition finished successfully May 13 00:52:08.884130 systemd[1]: Finished ignition-kargs.service. May 13 00:52:08.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:08.884793 systemd[1]: Starting ignition-disks.service... May 13 00:52:08.891418 ignition[760]: Ignition 2.14.0 May 13 00:52:08.891704 ignition[760]: Stage: disks May 13 00:52:08.891883 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 00:52:08.892049 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 13 00:52:08.893358 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:52:08.894841 ignition[760]: disks: disks passed May 13 00:52:08.894986 ignition[760]: Ignition finished successfully May 13 00:52:08.895606 systemd[1]: Finished ignition-disks.service. May 13 00:52:08.895761 systemd[1]: Reached target initrd-root-device.target. May 13 00:52:08.895853 systemd[1]: Reached target local-fs-pre.target. May 13 00:52:08.895935 systemd[1]: Reached target local-fs.target. May 13 00:52:08.896014 systemd[1]: Reached target sysinit.target. May 13 00:52:08.896092 systemd[1]: Reached target basic.target. May 13 00:52:08.896666 systemd[1]: Starting systemd-fsck-root.service... May 13 00:52:08.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.906548 systemd-fsck[768]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks May 13 00:52:08.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.907926 systemd[1]: Finished systemd-fsck-root.service. May 13 00:52:08.908496 systemd[1]: Mounting sysroot.mount... May 13 00:52:08.916015 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
May 13 00:52:08.915787 systemd[1]: Mounted sysroot.mount. May 13 00:52:08.915906 systemd[1]: Reached target initrd-root-fs.target. May 13 00:52:08.916859 systemd[1]: Mounting sysroot-usr.mount... May 13 00:52:08.917395 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 00:52:08.917586 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:52:08.917774 systemd[1]: Reached target ignition-diskful.target. May 13 00:52:08.918774 systemd[1]: Mounted sysroot-usr.mount. May 13 00:52:08.919388 systemd[1]: Starting initrd-setup-root.service... May 13 00:52:08.922335 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:52:08.925754 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory May 13 00:52:08.928046 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:52:08.930345 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:52:08.960411 systemd[1]: Finished initrd-setup-root.service. May 13 00:52:08.960981 systemd[1]: Starting ignition-mount.service... May 13 00:52:08.961448 systemd[1]: Starting sysroot-boot.service... May 13 00:52:08.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.964730 bash[819]: umount: /sysroot/usr/share/oem: not mounted. 
May 13 00:52:08.970061 ignition[820]: INFO : Ignition 2.14.0 May 13 00:52:08.970352 ignition[820]: INFO : Stage: mount May 13 00:52:08.970601 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 00:52:08.970769 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 13 00:52:08.972396 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:52:08.974059 ignition[820]: INFO : mount: mount passed May 13 00:52:08.974175 ignition[820]: INFO : Ignition finished successfully May 13 00:52:08.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:08.974783 systemd[1]: Finished ignition-mount.service. May 13 00:52:08.980735 systemd[1]: Finished sysroot-boot.service. May 13 00:52:08.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:09.658088 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 00:52:09.667412 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (829) May 13 00:52:09.667446 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 00:52:09.667458 kernel: BTRFS info (device sda6): using free space tree May 13 00:52:09.668328 kernel: BTRFS info (device sda6): has skinny extents May 13 00:52:09.674457 kernel: BTRFS info (device sda6): enabling ssd optimizations May 13 00:52:09.674413 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 00:52:09.675057 systemd[1]: Starting ignition-files.service... 
May 13 00:52:09.686664 ignition[849]: INFO : Ignition 2.14.0 May 13 00:52:09.686664 ignition[849]: INFO : Stage: files May 13 00:52:09.686949 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 00:52:09.686949 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 13 00:52:09.687933 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:52:09.689769 ignition[849]: DEBUG : files: compiled without relabeling support, skipping May 13 00:52:09.690234 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:52:09.690234 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:52:09.692310 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:52:09.692531 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:52:09.693271 unknown[849]: wrote ssh authorized keys file for user: core May 13 00:52:09.693480 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:52:09.693742 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 00:52:09.693913 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 13 00:52:09.745670 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 00:52:09.867352 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 13 00:52:09.867836 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:52:09.868081 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 13 00:52:10.386006 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 13 00:52:10.447808 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 13 00:52:10.448064 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 13 00:52:10.448064 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:52:10.448064 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 00:52:10.448064 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 00:52:10.448064 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" May 13 00:52:10.448924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition May 13 00:52:10.453376 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2626098077" May 13 00:52:10.453603 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2626098077": device or resource busy May 13 00:52:10.453808 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2626098077", trying btrfs: device or resource busy May 13 00:52:10.454045 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2626098077" May 13 00:52:10.456464 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2626098077" May 13 00:52:10.457983 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2626098077" May 13 00:52:10.458198 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): 
op(e): [finished] unmounting "/mnt/oem2626098077" May 13 00:52:10.458387 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" May 13 00:52:10.458593 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:52:10.458813 systemd[1]: mnt-oem2626098077.mount: Deactivated successfully. May 13 00:52:10.459183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 00:52:10.623693 systemd-networkd[734]: ens192: Gained IPv6LL May 13 00:52:10.884001 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK May 13 00:52:11.095994 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 00:52:11.105761 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:52:11.105948 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" May 13 00:52:11.105948 ignition[849]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" May 13 00:52:11.105948 ignition[849]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" May 13 00:52:11.105948 ignition[849]: INFO : files: op(12): [started] processing unit "prepare-helm.service" May 13 00:52:11.105948 ignition[849]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:52:11.105948 ignition[849]: INFO : files: op(12): op(13): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 00:52:11.105948 ignition[849]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" May 13 00:52:11.105948 ignition[849]: INFO : files: op(14): [started] processing unit "coreos-metadata.service" May 13 00:52:11.105948 ignition[849]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:52:11.107455 ignition[849]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:52:11.107455 ignition[849]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service" May 13 00:52:11.107455 ignition[849]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service" May 13 00:52:11.107455 ignition[849]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service" May 13 00:52:11.107455 ignition[849]: INFO : files: op(17): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:52:11.107455 ignition[849]: INFO : files: op(17): op(18): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:52:11.170578 ignition[849]: INFO : files: op(17): op(18): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:52:11.170784 ignition[849]: INFO : files: op(17): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:52:11.170784 ignition[849]: INFO : files: op(19): [started] setting preset to enabled for "vmtoolsd.service" May 13 00:52:11.170784 ignition[849]: INFO : files: op(19): [finished] setting preset to enabled for "vmtoolsd.service" May 13 00:52:11.170784 ignition[849]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:52:11.170784 ignition[849]: INFO : files: 
createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:52:11.170784 ignition[849]: INFO : files: files passed May 13 00:52:11.170784 ignition[849]: INFO : Ignition finished successfully May 13 00:52:11.172076 systemd[1]: Finished ignition-files.service. May 13 00:52:11.174170 kernel: kauditd_printk_skb: 24 callbacks suppressed May 13 00:52:11.174192 kernel: audit: type=1130 audit(1747097531.170:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.172650 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 00:52:11.172784 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 00:52:11.173166 systemd[1]: Starting ignition-quench.service... May 13 00:52:11.178726 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:52:11.178784 systemd[1]: Finished ignition-quench.service. May 13 00:52:11.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.183671 kernel: audit: type=1130 audit(1747097531.177:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:11.183689 kernel: audit: type=1131 audit(1747097531.177:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.184717 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:52:11.185236 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 00:52:11.187848 kernel: audit: type=1130 audit(1747097531.183:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.185410 systemd[1]: Reached target ignition-complete.target. May 13 00:52:11.188357 systemd[1]: Starting initrd-parse-etc.service... May 13 00:52:11.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.196603 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:52:11.201776 kernel: audit: type=1130 audit(1747097531.195:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.201793 kernel: audit: type=1131 audit(1747097531.195:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:11.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.196655 systemd[1]: Finished initrd-parse-etc.service. May 13 00:52:11.196823 systemd[1]: Reached target initrd-fs.target. May 13 00:52:11.201680 systemd[1]: Reached target initrd.target. May 13 00:52:11.201842 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 00:52:11.202382 systemd[1]: Starting dracut-pre-pivot.service... May 13 00:52:11.209549 systemd[1]: Finished dracut-pre-pivot.service. May 13 00:52:11.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.210160 systemd[1]: Starting initrd-cleanup.service... May 13 00:52:11.213139 kernel: audit: type=1130 audit(1747097531.208:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.217422 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:52:11.217486 systemd[1]: Finished initrd-cleanup.service. May 13 00:52:11.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.218046 systemd[1]: Stopped target nss-lookup.target. May 13 00:52:11.222535 kernel: audit: type=1130 audit(1747097531.216:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:11.222549 kernel: audit: type=1131 audit(1747097531.216:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.222626 systemd[1]: Stopped target remote-cryptsetup.target. May 13 00:52:11.222855 systemd[1]: Stopped target timers.target. May 13 00:52:11.223076 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:52:11.223230 systemd[1]: Stopped dracut-pre-pivot.service. May 13 00:52:11.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.223520 systemd[1]: Stopped target initrd.target. May 13 00:52:11.225940 systemd[1]: Stopped target basic.target. May 13 00:52:11.226137 systemd[1]: Stopped target ignition-complete.target. May 13 00:52:11.226348 systemd[1]: Stopped target ignition-diskful.target. May 13 00:52:11.226472 kernel: audit: type=1131 audit(1747097531.221:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.226578 systemd[1]: Stopped target initrd-root-device.target. May 13 00:52:11.226789 systemd[1]: Stopped target remote-fs.target. May 13 00:52:11.226985 systemd[1]: Stopped target remote-fs-pre.target. May 13 00:52:11.227198 systemd[1]: Stopped target sysinit.target. May 13 00:52:11.227399 systemd[1]: Stopped target local-fs.target. May 13 00:52:11.227622 systemd[1]: Stopped target local-fs-pre.target. May 13 00:52:11.227823 systemd[1]: Stopped target swap.target. 
May 13 00:52:11.228020 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:52:11.228169 systemd[1]: Stopped dracut-pre-mount.service. May 13 00:52:11.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.228445 systemd[1]: Stopped target cryptsetup.target. May 13 00:52:11.228658 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:52:11.228805 systemd[1]: Stopped dracut-initqueue.service. May 13 00:52:11.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.229075 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:52:11.229235 systemd[1]: Stopped ignition-fetch-offline.service. May 13 00:52:11.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.229529 systemd[1]: Stopped target paths.target. May 13 00:52:11.229723 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:52:11.231464 systemd[1]: Stopped systemd-ask-password-console.path. May 13 00:52:11.231678 systemd[1]: Stopped target slices.target. May 13 00:52:11.231873 systemd[1]: Stopped target sockets.target. May 13 00:52:11.232099 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:52:11.232262 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 00:52:11.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:11.232550 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:52:11.232699 systemd[1]: Stopped ignition-files.service. May 13 00:52:11.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.233353 systemd[1]: Stopping ignition-mount.service... May 13 00:52:11.233717 systemd[1]: Stopping iscsid.service... May 13 00:52:11.234067 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:52:11.234223 systemd[1]: Stopped kmod-static-nodes.service. May 13 00:52:11.234817 systemd[1]: Stopping sysroot-boot.service... May 13 00:52:11.235045 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:52:11.235217 systemd[1]: Stopped systemd-udev-trigger.service. May 13 00:52:11.235505 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:52:11.235628 iscsid[739]: iscsid shutting down. May 13 00:52:11.235838 systemd[1]: Stopped dracut-pre-trigger.service. May 13 00:52:11.237460 systemd[1]: iscsid.service: Deactivated successfully. May 13 00:52:11.237656 systemd[1]: Stopped iscsid.service. May 13 00:52:11.237910 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:52:11.238056 systemd[1]: Closed iscsid.socket. May 13 00:52:11.238658 systemd[1]: Stopping iscsiuio.service... May 13 00:52:11.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:52:11.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.240481 ignition[888]: INFO : Ignition 2.14.0 May 13 00:52:11.240481 ignition[888]: INFO : Stage: umount May 13 00:52:11.240857 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 00:52:11.241064 systemd[1]: Stopped iscsiuio.service. May 13 00:52:11.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.241385 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:52:11.241549 systemd[1]: Closed iscsiuio.socket. May 13 00:52:11.241681 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 00:52:11.241681 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed May 13 00:52:11.243651 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" May 13 00:52:11.246040 ignition[888]: INFO : umount: umount passed May 13 00:52:11.246040 ignition[888]: INFO : Ignition finished successfully May 13 00:52:11.248229 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:52:11.248508 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:52:11.248555 systemd[1]: Stopped ignition-mount.service. May 13 00:52:11.248714 systemd[1]: Stopped target network.target. May 13 00:52:11.248795 systemd[1]: ignition-disks.service: Deactivated successfully. 
May 13 00:52:11.248818 systemd[1]: Stopped ignition-disks.service. May 13 00:52:11.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.248915 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:52:11.248934 systemd[1]: Stopped ignition-kargs.service. May 13 00:52:11.249029 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:52:11.249049 systemd[1]: Stopped ignition-setup.service. May 13 00:52:11.249196 systemd[1]: Stopping systemd-networkd.service... May 13 00:52:11.249316 systemd[1]: Stopping systemd-resolved.service... May 13 00:52:11.256646 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:52:11.256720 systemd[1]: Stopped systemd-resolved.service. May 13 00:52:11.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.259045 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:52:11.259103 systemd[1]: Stopped systemd-networkd.service. 
May 13 00:52:11.258000 audit: BPF prog-id=6 op=UNLOAD May 13 00:52:11.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.259821 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:52:11.259842 systemd[1]: Closed systemd-networkd.socket. May 13 00:52:11.259000 audit: BPF prog-id=9 op=UNLOAD May 13 00:52:11.260715 systemd[1]: Stopping network-cleanup.service... May 13 00:52:11.260940 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:52:11.260968 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 00:52:11.261386 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. May 13 00:52:11.261412 systemd[1]: Stopped afterburn-network-kargs.service. May 13 00:52:11.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.261819 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:52:11.261842 systemd[1]: Stopped systemd-sysctl.service. May 13 00:52:11.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.262282 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:52:11.262306 systemd[1]: Stopped systemd-modules-load.service. 
May 13 00:52:11.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.264829 systemd[1]: Stopping systemd-udevd.service... May 13 00:52:11.265851 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:52:11.268043 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:52:11.268258 systemd[1]: Stopped systemd-udevd.service. May 13 00:52:11.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.268861 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:52:11.269051 systemd[1]: Stopped network-cleanup.service. May 13 00:52:11.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.269334 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:52:11.269354 systemd[1]: Closed systemd-udevd-control.socket. May 13 00:52:11.269740 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:52:11.269760 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 00:52:11.270096 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:52:11.270120 systemd[1]: Stopped dracut-pre-udev.service. May 13 00:52:11.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:52:11.270510 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:52:11.270533 systemd[1]: Stopped dracut-cmdline.service. 
May 13 00:52:11.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.270878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:52:11.270899 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 00:52:11.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.271748 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 00:52:11.272032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:52:11.272062 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 00:52:11.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.274924 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:52:11.275117 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 00:52:11.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.351781 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:52:11.351857 systemd[1]: Stopped sysroot-boot.service.
May 13 00:52:11.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.352194 systemd[1]: Reached target initrd-switch-root.target.
May 13 00:52:11.352339 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:52:11.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.352370 systemd[1]: Stopped initrd-setup-root.service.
May 13 00:52:11.353078 systemd[1]: Starting initrd-switch-root.service...
May 13 00:52:11.360120 systemd[1]: Switching root.
May 13 00:52:11.376187 systemd-journald[216]: Journal stopped
May 13 00:52:13.725891 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
May 13 00:52:13.725909 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 00:52:13.725918 kernel: SELinux: Class anon_inode not defined in policy.
May 13 00:52:13.725925 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 00:52:13.725930 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:52:13.725936 kernel: SELinux: policy capability open_perms=1
May 13 00:52:13.725942 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:52:13.725948 kernel: SELinux: policy capability always_check_network=0
May 13 00:52:13.725954 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:52:13.725959 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:52:13.725964 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:52:13.725969 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:52:13.725976 systemd[1]: Successfully loaded SELinux policy in 101.309ms.
May 13 00:52:13.725983 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.691ms.
May 13 00:52:13.725991 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:52:13.725998 systemd[1]: Detected virtualization vmware.
May 13 00:52:13.726005 systemd[1]: Detected architecture x86-64.
May 13 00:52:13.726011 systemd[1]: Detected first boot.
May 13 00:52:13.726020 systemd[1]: Initializing machine ID from random generator.
May 13 00:52:13.726030 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 00:52:13.726037 systemd[1]: Populated /etc with preset unit settings.
May 13 00:52:13.726044 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:52:13.726051 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:52:13.726058 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:52:13.726066 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 00:52:13.726072 systemd[1]: Stopped initrd-switch-root.service.
May 13 00:52:13.726078 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 00:52:13.726085 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 00:52:13.726092 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 00:52:13.726178 systemd[1]: Created slice system-getty.slice.
May 13 00:52:13.726191 systemd[1]: Created slice system-modprobe.slice.
May 13 00:52:13.726202 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 00:52:13.726212 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 13 00:52:13.726221 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 13 00:52:13.726227 systemd[1]: Created slice user.slice.
May 13 00:52:13.726233 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:52:13.726240 systemd[1]: Started systemd-ask-password-wall.path.
May 13 00:52:13.726247 systemd[1]: Set up automount boot.automount.
May 13 00:52:13.726253 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 13 00:52:13.726261 systemd[1]: Stopped target initrd-switch-root.target.
May 13 00:52:13.726268 systemd[1]: Stopped target initrd-fs.target.
May 13 00:52:13.726275 systemd[1]: Stopped target initrd-root-fs.target.
May 13 00:52:13.726282 systemd[1]: Reached target integritysetup.target.
May 13 00:52:13.726289 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:52:13.726296 systemd[1]: Reached target remote-fs.target.
May 13 00:52:13.726302 systemd[1]: Reached target slices.target.
May 13 00:52:13.726309 systemd[1]: Reached target swap.target.
May 13 00:52:13.726316 systemd[1]: Reached target torcx.target.
May 13 00:52:13.726323 systemd[1]: Reached target veritysetup.target.
May 13 00:52:13.726330 systemd[1]: Listening on systemd-coredump.socket.
May 13 00:52:13.726337 systemd[1]: Listening on systemd-initctl.socket.
May 13 00:52:13.726343 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:52:13.726350 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:52:13.726357 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:52:13.726365 systemd[1]: Listening on systemd-userdbd.socket.
May 13 00:52:13.726372 systemd[1]: Mounting dev-hugepages.mount...
May 13 00:52:13.726378 systemd[1]: Mounting dev-mqueue.mount...
May 13 00:52:13.726385 systemd[1]: Mounting media.mount...
May 13 00:52:13.726392 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:13.726399 systemd[1]: Mounting sys-kernel-debug.mount...
May 13 00:52:13.726405 systemd[1]: Mounting sys-kernel-tracing.mount...
May 13 00:52:13.726413 systemd[1]: Mounting tmp.mount...
May 13 00:52:13.726420 systemd[1]: Starting flatcar-tmpfiles.service...
May 13 00:52:13.726427 systemd[1]: Starting ignition-delete-config.service...
May 13 00:52:13.726433 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:52:13.726451 systemd[1]: Starting modprobe@configfs.service...
May 13 00:52:13.726461 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:52:13.726468 systemd[1]: Starting modprobe@drm.service...
May 13 00:52:13.726474 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:52:13.726481 systemd[1]: Starting modprobe@fuse.service...
May 13 00:52:13.726489 systemd[1]: Starting modprobe@loop.service...
May 13 00:52:13.726497 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:52:13.726504 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 00:52:13.726511 systemd[1]: Stopped systemd-fsck-root.service.
May 13 00:52:13.726518 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 00:52:13.726525 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 00:52:13.726531 systemd[1]: Stopped systemd-journald.service.
May 13 00:52:13.726538 systemd[1]: Starting systemd-journald.service...
May 13 00:52:13.726545 systemd[1]: Starting systemd-modules-load.service...
May 13 00:52:13.726553 systemd[1]: Starting systemd-network-generator.service...
May 13 00:52:13.726560 systemd[1]: Starting systemd-remount-fs.service...
May 13 00:52:13.726566 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:52:13.726573 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 00:52:13.726580 systemd[1]: Stopped verity-setup.service.
May 13 00:52:13.726587 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:13.726594 systemd[1]: Mounted dev-hugepages.mount.
May 13 00:52:13.726601 systemd[1]: Mounted dev-mqueue.mount.
May 13 00:52:13.726608 systemd[1]: Mounted media.mount.
May 13 00:52:13.726615 systemd[1]: Mounted sys-kernel-debug.mount.
May 13 00:52:13.726622 systemd[1]: Mounted sys-kernel-tracing.mount.
May 13 00:52:13.726628 systemd[1]: Mounted tmp.mount.
May 13 00:52:13.726635 kernel: fuse: init (API version 7.34)
May 13 00:52:13.726641 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:52:13.726648 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:52:13.726655 systemd[1]: Finished modprobe@configfs.service.
May 13 00:52:13.726664 systemd-journald[1021]: Journal started
May 13 00:52:13.726693 systemd-journald[1021]: Runtime Journal (/run/log/journal/09030253ec1f46f0a9490243866d4878) is 4.8M, max 38.8M, 34.0M free.
May 13 00:52:11.734000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:52:11.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 00:52:11.778000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 00:52:11.778000 audit: BPF prog-id=10 op=LOAD
May 13 00:52:11.778000 audit: BPF prog-id=10 op=UNLOAD
May 13 00:52:11.778000 audit: BPF prog-id=11 op=LOAD
May 13 00:52:11.778000 audit: BPF prog-id=11 op=UNLOAD
May 13 00:52:11.889000 audit[921]: AVC avc: denied { associate } for pid=921 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 13 00:52:11.889000 audit[921]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=904 pid=921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:52:11.889000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 00:52:11.890000 audit[921]: AVC avc: denied { associate } for pid=921 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 13 00:52:11.890000 audit[921]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=904 pid=921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:52:11.890000 audit: CWD cwd="/"
May 13 00:52:11.890000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:11.890000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:11.890000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 00:52:13.622000 audit: BPF prog-id=12 op=LOAD
May 13 00:52:13.622000 audit: BPF prog-id=3 op=UNLOAD
May 13 00:52:13.622000 audit: BPF prog-id=13 op=LOAD
May 13 00:52:13.622000 audit: BPF prog-id=14 op=LOAD
May 13 00:52:13.622000 audit: BPF prog-id=4 op=UNLOAD
May 13 00:52:13.622000 audit: BPF prog-id=5 op=UNLOAD
May 13 00:52:13.623000 audit: BPF prog-id=15 op=LOAD
May 13 00:52:13.623000 audit: BPF prog-id=12 op=UNLOAD
May 13 00:52:13.623000 audit: BPF prog-id=16 op=LOAD
May 13 00:52:13.623000 audit: BPF prog-id=17 op=LOAD
May 13 00:52:13.623000 audit: BPF prog-id=13 op=UNLOAD
May 13 00:52:13.623000 audit: BPF prog-id=14 op=UNLOAD
May 13 00:52:13.624000 audit: BPF prog-id=18 op=LOAD
May 13 00:52:13.624000 audit: BPF prog-id=15 op=UNLOAD
May 13 00:52:13.624000 audit: BPF prog-id=19 op=LOAD
May 13 00:52:13.624000 audit: BPF prog-id=20 op=LOAD
May 13 00:52:13.624000 audit: BPF prog-id=16 op=UNLOAD
May 13 00:52:13.624000 audit: BPF prog-id=17 op=UNLOAD
May 13 00:52:13.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.633000 audit: BPF prog-id=18 op=UNLOAD
May 13 00:52:13.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.694000 audit: BPF prog-id=21 op=LOAD
May 13 00:52:13.694000 audit: BPF prog-id=22 op=LOAD
May 13 00:52:13.694000 audit: BPF prog-id=23 op=LOAD
May 13 00:52:13.695000 audit: BPF prog-id=19 op=UNLOAD
May 13 00:52:13.695000 audit: BPF prog-id=20 op=UNLOAD
May 13 00:52:13.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.722000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 13 00:52:13.722000 audit[1021]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffedeef2c30 a2=4000 a3=7ffedeef2ccc items=0 ppid=1 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:52:13.722000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 13 00:52:13.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.887785 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:52:13.622544 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:52:11.888215 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 13 00:52:13.622552 systemd[1]: Unnecessary job was removed for dev-sda6.device.
May 13 00:52:11.888227 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 13 00:52:13.625932 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 00:52:11.888247 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 13 00:52:11.888252 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 13 00:52:11.888270 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 13 00:52:13.727974 systemd[1]: Started systemd-journald.service.
May 13 00:52:11.888277 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 13 00:52:11.888391 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 13 00:52:13.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.888412 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 13 00:52:13.728147 jq[988]: true
May 13 00:52:11.888419 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 13 00:52:11.889959 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 13 00:52:11.889981 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 13 00:52:11.889992 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 13 00:52:13.728346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:52:13.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:11.890001 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 13 00:52:13.728422 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:52:11.890010 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 13 00:52:13.728768 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:52:11.890018 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 13 00:52:13.728849 systemd[1]: Finished modprobe@drm.service.
May 13 00:52:13.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.407113 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:13Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:52:13.407264 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:13Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:52:13.407329 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:13Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:52:13.407432 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:13Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 13 00:52:13.729156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:52:13.407481 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:13Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 13 00:52:13.407524 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2025-05-13T00:52:13Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 13 00:52:13.729230 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:52:13.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.729516 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:52:13.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.756670 systemd-journald[1021]: Time spent on flushing to /var/log/journal/09030253ec1f46f0a9490243866d4878 is 42.513ms for 1986 entries.
May 13 00:52:13.756670 systemd-journald[1021]: System Journal (/var/log/journal/09030253ec1f46f0a9490243866d4878) is 8.0M, max 584.8M, 576.8M free.
May 13 00:52:13.855595 systemd-journald[1021]: Received client request to flush runtime journal.
May 13 00:52:13.855647 kernel: loop: module loaded
May 13 00:52:13.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.855865 jq[1033]: true
May 13 00:52:13.729601 systemd[1]: Finished modprobe@fuse.service.
May 13 00:52:13.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:13.729832 systemd[1]: Finished systemd-modules-load.service.
May 13 00:52:13.730043 systemd[1]: Finished systemd-network-generator.service.
May 13 00:52:13.730413 systemd[1]: Finished systemd-remount-fs.service.
May 13 00:52:13.731393 systemd[1]: Reached target network-pre.target.
May 13 00:52:13.732335 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 13 00:52:13.733480 systemd[1]: Mounting sys-kernel-config.mount...
May 13 00:52:13.735191 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:52:13.736910 systemd[1]: Starting systemd-hwdb-update.service...
May 13 00:52:13.737939 systemd[1]: Starting systemd-journal-flush.service...
May 13 00:52:13.738071 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:52:13.738765 systemd[1]: Starting systemd-random-seed.service...
May 13 00:52:13.739521 systemd[1]: Starting systemd-sysctl.service...
May 13 00:52:13.740471 systemd[1]: Finished flatcar-tmpfiles.service.
May 13 00:52:13.741150 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 13 00:52:13.741768 systemd[1]: Mounted sys-kernel-config.mount.
May 13 00:52:13.743266 systemd[1]: Starting systemd-sysusers.service...
May 13 00:52:13.751164 systemd[1]: Finished systemd-random-seed.service.
May 13 00:52:13.751325 systemd[1]: Reached target first-boot-complete.target.
May 13 00:52:13.762498 systemd[1]: Finished systemd-sysctl.service.
May 13 00:52:13.769771 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:52:13.770971 systemd[1]: Finished modprobe@loop.service.
May 13 00:52:13.771156 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:52:13.790559 systemd[1]: Finished systemd-sysusers.service.
May 13 00:52:13.851119 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:52:13.852109 systemd[1]: Starting systemd-udev-settle.service...
May 13 00:52:13.856333 systemd[1]: Finished systemd-journal-flush.service.
May 13 00:52:13.860408 udevadm[1052]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 00:52:14.012701 ignition[1041]: Ignition 2.14.0
May 13 00:52:14.013137 ignition[1041]: deleting config from guestinfo properties
May 13 00:52:14.098682 ignition[1041]: Successfully deleted config
May 13 00:52:14.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.099589 systemd[1]: Finished ignition-delete-config.service.
May 13 00:52:14.406067 systemd[1]: Finished systemd-hwdb-update.service.
May 13 00:52:14.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.405000 audit: BPF prog-id=24 op=LOAD
May 13 00:52:14.405000 audit: BPF prog-id=25 op=LOAD
May 13 00:52:14.405000 audit: BPF prog-id=7 op=UNLOAD
May 13 00:52:14.405000 audit: BPF prog-id=8 op=UNLOAD
May 13 00:52:14.407198 systemd[1]: Starting systemd-udevd.service...
May 13 00:52:14.418921 systemd-udevd[1053]: Using default interface naming scheme 'v252'.
May 13 00:52:14.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.440000 audit: BPF prog-id=26 op=LOAD
May 13 00:52:14.441169 systemd[1]: Started systemd-udevd.service.
May 13 00:52:14.442492 systemd[1]: Starting systemd-networkd.service...
May 13 00:52:14.450000 audit: BPF prog-id=27 op=LOAD
May 13 00:52:14.450000 audit: BPF prog-id=28 op=LOAD
May 13 00:52:14.450000 audit: BPF prog-id=29 op=LOAD
May 13 00:52:14.452705 systemd[1]: Starting systemd-userdbd.service...
May 13 00:52:14.469731 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
May 13 00:52:14.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.476921 systemd[1]: Started systemd-userdbd.service.
May 13 00:52:14.516554 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 00:52:14.523456 kernel: ACPI: button: Power Button [PWRF]
May 13 00:52:14.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.537253 systemd-networkd[1061]: lo: Link UP
May 13 00:52:14.537258 systemd-networkd[1061]: lo: Gained carrier
May 13 00:52:14.537536 systemd-networkd[1061]: Enumeration completed
May 13 00:52:14.537595 systemd[1]: Started systemd-networkd.service.
May 13 00:52:14.538079 systemd-networkd[1061]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
May 13 00:52:14.541125 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
May 13 00:52:14.541255 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
May 13 00:52:14.542386 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
May 13 00:52:14.542640 systemd-networkd[1061]: ens192: Link UP
May 13 00:52:14.542801 systemd-networkd[1061]: ens192: Gained carrier
May 13 00:52:14.569528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:52:14.592000 audit[1056]: AVC avc: denied { confidentiality } for pid=1056 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 13 00:52:14.592000 audit[1056]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=562e11d74790 a1=338ac a2=7fb3b7d96bc5 a3=5 items=110 ppid=1053 pid=1056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:52:14.592000 audit: CWD cwd="/"
May 13 00:52:14.592000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=1 name=(null) inode=25076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=2 name=(null) inode=25076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=3 name=(null) inode=25077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=4 name=(null) inode=25076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=5 name=(null) inode=25078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=6 name=(null) inode=25076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=7 name=(null) inode=25079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=8 name=(null) inode=25079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=9 name=(null) inode=25080 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=10 name=(null) inode=25079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=11 name=(null) inode=25081 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=12 name=(null) inode=25079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=13 name=(null) inode=25082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=14 name=(null) inode=25079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=15 name=(null) inode=25083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=16 name=(null) inode=25079 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=17 name=(null) inode=25084 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=18 name=(null) inode=25076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=19 name=(null) inode=25085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=20 name=(null) inode=25085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=21 name=(null) inode=25086 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=22 name=(null) inode=25085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=23 name=(null) inode=25087 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=24 name=(null) inode=25085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=25 name=(null) inode=25088 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=26 name=(null) inode=25085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=27 name=(null) inode=25089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=28 name=(null) inode=25085 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=29 name=(null) inode=25090 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=30 name=(null) inode=25076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=31 name=(null) inode=25091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=32 name=(null) inode=25091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=33 name=(null) inode=25092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=34 name=(null) inode=25091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=35 name=(null) inode=25093 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=36 name=(null) inode=25091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=37 name=(null) inode=25094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=38 name=(null) inode=25091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=39 name=(null) inode=25095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=40 name=(null) inode=25091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=41 name=(null) inode=25096 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=42 name=(null) inode=25076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=43 name=(null) inode=25097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=44 name=(null) inode=25097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=45 name=(null) inode=25098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=46 name=(null) inode=25097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=47 name=(null) inode=25099 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=48 name=(null) inode=25097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=49 name=(null) inode=25100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=50 name=(null) inode=25097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=51 name=(null) inode=25101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=52 name=(null) inode=25097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=53 name=(null) inode=25102 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=55 name=(null) inode=25103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=56 name=(null) inode=25103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=57 name=(null) inode=25104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=58 name=(null) inode=25103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=59 name=(null) inode=25105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=60 name=(null) inode=25103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=61 name=(null) inode=25106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=62 name=(null) inode=25106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.597829 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16
May 13 00:52:14.598152 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
May 13 00:52:14.598231 kernel: Guest personality initialized and is active
May 13 00:52:14.592000 audit: PATH item=63 name=(null) inode=25107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=64 name=(null) inode=25106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=65 name=(null) inode=25108 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=66 name=(null) inode=25106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=67 name=(null) inode=25109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=68 name=(null) inode=25106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=69 name=(null) inode=25110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=70 name=(null) inode=25106 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=71 name=(null) inode=25111 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=72 name=(null) inode=25103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=73 name=(null) inode=25112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=74 name=(null) inode=25112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=75 name=(null) inode=25113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=76 name=(null) inode=25112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=77 name=(null) inode=25114 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=78 name=(null) inode=25112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=79 name=(null) inode=25115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=80 name=(null) inode=25112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=81 name=(null) inode=25116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=82 name=(null) inode=25112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=83 name=(null) inode=25117 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=84 name=(null) inode=25103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=85 name=(null) inode=25118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=86 name=(null) inode=25118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=87 name=(null) inode=25119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=88 name=(null) inode=25118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=89 name=(null) inode=25120 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=90 name=(null) inode=25118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=91 name=(null) inode=25121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=92 name=(null) inode=25118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=93 name=(null) inode=25122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=94 name=(null) inode=25118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=95 name=(null) inode=25123 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=96 name=(null) inode=25103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=97 name=(null) inode=25124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=98 name=(null) inode=25124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=99 name=(null) inode=25125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=100 name=(null) inode=25124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=101 name=(null) inode=25126 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=102 name=(null) inode=25124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=103 name=(null) inode=25127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=104 name=(null) inode=25124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=105 name=(null) inode=25128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=106 name=(null) inode=25124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=107 name=(null) inode=25129 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PATH item=109 name=(null) inode=25130 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:52:14.592000 audit: PROCTITLE proctitle="(udev-worker)"
May 13 00:52:14.604457 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
May 13 00:52:14.607454 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 00:52:14.607496 kernel: Initialized host personality
May 13 00:52:14.623454 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
May 13 00:52:14.642495 kernel: mousedev: PS/2 mouse device common for all mice
May 13 00:52:14.644173 (udev-worker)[1063]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
May 13 00:52:14.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.654699 systemd[1]: Finished systemd-udev-settle.service.
May 13 00:52:14.655655 systemd[1]: Starting lvm2-activation-early.service...
May 13 00:52:14.673846 lvm[1087]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:52:14.699010 systemd[1]: Finished lvm2-activation-early.service.
May 13 00:52:14.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.699198 systemd[1]: Reached target cryptsetup.target.
May 13 00:52:14.700120 systemd[1]: Starting lvm2-activation.service...
May 13 00:52:14.702623 lvm[1088]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:52:14.717965 systemd[1]: Finished lvm2-activation.service.
May 13 00:52:14.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.718149 systemd[1]: Reached target local-fs-pre.target.
May 13 00:52:14.718251 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:52:14.718267 systemd[1]: Reached target local-fs.target.
May 13 00:52:14.718358 systemd[1]: Reached target machines.target.
May 13 00:52:14.719295 systemd[1]: Starting ldconfig.service...
May 13 00:52:14.719847 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:52:14.719890 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:52:14.720653 systemd[1]: Starting systemd-boot-update.service...
May 13 00:52:14.721316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 13 00:52:14.722228 systemd[1]: Starting systemd-machine-id-commit.service...
May 13 00:52:14.723036 systemd[1]: Starting systemd-sysext.service...
May 13 00:52:14.726720 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1090 (bootctl)
May 13 00:52:14.727382 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 13 00:52:14.740649 systemd[1]: Unmounting usr-share-oem.mount...
May 13 00:52:14.747650 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 13 00:52:14.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:14.755134 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 13 00:52:14.755254 systemd[1]: Unmounted usr-share-oem.mount.
May 13 00:52:14.775452 kernel: loop0: detected capacity change from 0 to 218376
May 13 00:52:15.053454 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:52:15.088131 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:52:15.088645 systemd[1]: Finished systemd-machine-id-commit.service.
May 13 00:52:15.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.105456 kernel: loop1: detected capacity change from 0 to 218376
May 13 00:52:15.117563 systemd-fsck[1099]: fsck.fat 4.2 (2021-01-31)
May 13 00:52:15.117563 systemd-fsck[1099]: /dev/sda1: 790 files, 120692/258078 clusters
May 13 00:52:15.119404 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 13 00:52:15.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.120744 systemd[1]: Mounting boot.mount...
May 13 00:52:15.132248 (sd-sysext)[1102]: Using extensions 'kubernetes'.
May 13 00:52:15.132476 (sd-sysext)[1102]: Merged extensions into '/usr'.
May 13 00:52:15.140609 systemd[1]: Mounted boot.mount.
May 13 00:52:15.148793 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.150330 systemd[1]: Mounting usr-share-oem.mount...
May 13 00:52:15.151389 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:52:15.154152 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:52:15.154841 systemd[1]: Starting modprobe@loop.service...
May 13 00:52:15.154965 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:52:15.155041 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:52:15.155119 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.156707 systemd[1]: Mounted usr-share-oem.mount.
May 13 00:52:15.156970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:52:15.157047 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:52:15.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.157347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:52:15.157415 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:52:15.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.157714 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:52:15.157777 systemd[1]: Finished modprobe@loop.service.
May 13 00:52:15.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.158076 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:52:15.158140 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:52:15.158940 systemd[1]: Finished systemd-sysext.service.
May 13 00:52:15.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.160219 systemd[1]: Starting ensure-sysext.service...
May 13 00:52:15.161220 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 13 00:52:15.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.164609 systemd[1]: Finished systemd-boot-update.service.
May 13 00:52:15.169180 systemd[1]: Reloading.
May 13 00:52:15.180885 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 13 00:52:15.189069 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:52:15.196855 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2025-05-13T00:52:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:52:15.196872 /usr/lib/systemd/system-generators/torcx-generator[1130]: time="2025-05-13T00:52:15Z" level=info msg="torcx already run"
May 13 00:52:15.203539 systemd-tmpfiles[1111]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:52:15.263092 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:52:15.263104 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:52:15.274897 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:52:15.307000 audit: BPF prog-id=30 op=LOAD
May 13 00:52:15.307000 audit: BPF prog-id=26 op=UNLOAD
May 13 00:52:15.308000 audit: BPF prog-id=31 op=LOAD
May 13 00:52:15.308000 audit: BPF prog-id=27 op=UNLOAD
May 13 00:52:15.308000 audit: BPF prog-id=32 op=LOAD
May 13 00:52:15.308000 audit: BPF prog-id=33 op=LOAD
May 13 00:52:15.308000 audit: BPF prog-id=28 op=UNLOAD
May 13 00:52:15.308000 audit: BPF prog-id=29 op=UNLOAD
May 13 00:52:15.309000 audit: BPF prog-id=34 op=LOAD
May 13 00:52:15.309000 audit: BPF prog-id=35 op=LOAD
May 13 00:52:15.309000 audit: BPF prog-id=24 op=UNLOAD
May 13 00:52:15.309000 audit: BPF prog-id=25 op=UNLOAD
May 13 00:52:15.310000 audit: BPF prog-id=36 op=LOAD
May 13 00:52:15.310000 audit: BPF prog-id=21 op=UNLOAD
May 13 00:52:15.310000 audit: BPF prog-id=37 op=LOAD
May 13 00:52:15.310000 audit: BPF prog-id=38 op=LOAD
May 13 00:52:15.310000 audit: BPF prog-id=22 op=UNLOAD
May 13 00:52:15.310000 audit: BPF prog-id=23 op=UNLOAD
May 13 00:52:15.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.325237 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.326029 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:52:15.326807 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:52:15.327545 systemd[1]: Starting modprobe@loop.service...
May 13 00:52:15.327678 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:52:15.327756 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:52:15.327831 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.328281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:52:15.328367 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:52:15.328667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:52:15.328733 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:52:15.329020 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:52:15.329078 systemd[1]: Finished modprobe@loop.service.
May 13 00:52:15.329290 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:52:15.329345 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:52:15.331559 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.332437 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:52:15.333736 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:52:15.335189 systemd[1]: Starting modprobe@loop.service...
May 13 00:52:15.335430 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:52:15.335567 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:52:15.335634 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.336082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:52:15.336891 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:52:15.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.337258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:52:15.337482 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:52:15.337819 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:52:15.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.338035 systemd[1]: Finished modprobe@loop.service.
May 13 00:52:15.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.338357 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:52:15.338413 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:52:15.340779 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.342146 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:52:15.343399 systemd[1]: Starting modprobe@drm.service...
May 13 00:52:15.344711 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:52:15.345979 systemd[1]: Starting modprobe@loop.service...
May 13 00:52:15.346194 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 13 00:52:15.346364 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:52:15.347880 systemd[1]: Starting systemd-networkd-wait-online.service...
May 13 00:52:15.348113 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:52:15.349016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:52:15.349149 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:52:15.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.349857 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:52:15.349998 systemd[1]: Finished modprobe@drm.service.
May 13 00:52:15.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.350584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:52:15.350715 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:52:15.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.351165 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:52:15.351323 systemd[1]: Finished modprobe@loop.service.
May 13 00:52:15.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.352564 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 13 00:52:15.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.353503 systemd[1]: Finished ensure-sysext.service.
May 13 00:52:15.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.355051 systemd[1]: Starting audit-rules.service...
May 13 00:52:15.355809 systemd[1]: Starting clean-ca-certificates.service...
May 13 00:52:15.357000 audit: BPF prog-id=39 op=LOAD
May 13 00:52:15.360000 audit: BPF prog-id=40 op=LOAD
May 13 00:52:15.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.369000 audit[1209]: SYSTEM_BOOT pid=1209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.358505 systemd[1]: Starting systemd-journal-catalog-update.service...
May 13 00:52:15.358624 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:52:15.358661 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:52:15.361015 systemd[1]: Starting systemd-resolved.service...
May 13 00:52:15.362757 systemd[1]: Starting systemd-timesyncd.service...
May 13 00:52:15.363599 systemd[1]: Starting systemd-update-utmp.service...
May 13 00:52:15.363996 systemd[1]: Finished clean-ca-certificates.service.
May 13 00:52:15.364239 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:52:15.373454 systemd[1]: Finished systemd-update-utmp.service.
May 13 00:52:15.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.392336 systemd[1]: Finished systemd-journal-catalog-update.service.
May 13 00:52:15.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:52:15.443118 systemd[1]: Started systemd-timesyncd.service.
May 13 00:52:15.443317 systemd[1]: Reached target time-set.target.
May 13 00:52:15.462000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 13 00:52:15.462000 audit[1224]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd51be70b0 a2=420 a3=0 items=0 ppid=1203 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:52:15.462000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 13 00:52:15.464345 augenrules[1224]: No rules
May 13 00:52:15.464793 systemd-resolved[1207]: Positive Trust Anchors:
May 13 00:52:15.464860 systemd[1]: Finished audit-rules.service.
May 13 00:52:15.465010 systemd-resolved[1207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:52:15.465086 systemd-resolved[1207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:52:15.493907 systemd-resolved[1207]: Defaulting to hostname 'linux'.
May 13 00:52:15.495613 systemd[1]: Started systemd-resolved.service.
May 13 00:52:15.495758 systemd[1]: Reached target network.target.
May 13 00:52:15.495849 systemd[1]: Reached target nss-lookup.target.
May 13 00:52:15.508876 ldconfig[1089]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 00:52:15.523837 systemd[1]: Finished ldconfig.service.
May 13 00:52:15.524868 systemd[1]: Starting systemd-update-done.service...
May 13 00:52:15.530561 systemd[1]: Finished systemd-update-done.service.
May 13 00:52:15.530707 systemd[1]: Reached target sysinit.target.
May 13 00:52:15.530843 systemd[1]: Started motdgen.path.
May 13 00:52:15.530943 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 13 00:52:15.531125 systemd[1]: Started logrotate.timer.
May 13 00:52:15.531259 systemd[1]: Started mdadm.timer.
May 13 00:52:15.531374 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 13 00:52:15.531472 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:52:15.531492 systemd[1]: Reached target paths.target.
May 13 00:52:15.531571 systemd[1]: Reached target timers.target.
May 13 00:52:15.531802 systemd[1]: Listening on dbus.socket.
May 13 00:52:15.532555 systemd[1]: Starting docker.socket...
May 13 00:52:15.534344 systemd[1]: Listening on sshd.socket.
May 13 00:52:15.534530 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:52:15.534792 systemd[1]: Listening on docker.socket.
May 13 00:52:15.534915 systemd[1]: Reached target sockets.target.
May 13 00:52:15.535003 systemd[1]: Reached target basic.target.
May 13 00:52:15.535140 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:52:15.535159 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 13 00:52:15.535773 systemd[1]: Starting containerd.service...
May 13 00:52:15.537034 systemd[1]: Starting dbus.service...
May 13 00:52:15.537735 systemd[1]: Starting enable-oem-cloudinit.service...
May 13 00:52:15.540359 jq[1234]: false
May 13 00:52:15.540058 systemd[1]: Starting extend-filesystems.service...
May 13 00:52:15.540191 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 13 00:52:15.540914 systemd[1]: Starting motdgen.service...
May 13 00:52:15.541647 systemd[1]: Starting prepare-helm.service...
May 13 00:52:15.542398 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 13 00:52:15.543273 systemd[1]: Starting sshd-keygen.service...
May 13 00:52:15.545608 systemd[1]: Starting systemd-logind.service...
May 13 00:52:15.545845 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 13 00:52:15.545882 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:52:15.546224 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:52:15.546681 systemd[1]: Starting update-engine.service...
May 13 00:52:15.549211 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 13 00:52:15.550354 systemd[1]: Starting vmtoolsd.service...
May 13 00:52:15.551261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:52:15.551369 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 13 00:52:15.560267 jq[1243]: true
May 13 00:52:15.564921 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:52:15.565032 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 13 00:52:15.571474 tar[1247]: linux-amd64/LICENSE
May 13 00:52:15.571474 tar[1247]: linux-amd64/helm
May 13 00:52:15.575178 systemd[1]: Started vmtoolsd.service.
May 13 00:52:15.577647 extend-filesystems[1235]: Found loop1
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda1
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda2
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda3
May 13 00:52:15.578146 extend-filesystems[1235]: Found usr
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda4
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda6
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda7
May 13 00:52:15.578146 extend-filesystems[1235]: Found sda9
May 13 00:52:15.578146 extend-filesystems[1235]: Checking size of /dev/sda9
May 13 00:52:15.584362 dbus-daemon[1233]: [system] SELinux support is enabled
May 13 00:52:15.584467 systemd[1]: Started dbus.service.
May 13 00:52:15.586266 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:52:15.586288 systemd[1]: Reached target system-config.target.
May 13 00:52:15.586406 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:52:15.586419 systemd[1]: Reached target user-config.target.
May 13 00:52:15.587573 jq[1257]: true
May 13 00:53:36.041768 systemd-timesyncd[1208]: Contacted time server 159.203.82.102:123 (0.flatcar.pool.ntp.org).
May 13 00:53:36.041914 systemd-timesyncd[1208]: Initial clock synchronization to Tue 2025-05-13 00:53:36.041604 UTC.
May 13 00:53:36.045415 systemd-resolved[1207]: Clock change detected. Flushing caches.
May 13 00:53:36.055308 extend-filesystems[1235]: Old size kept for /dev/sda9
May 13 00:53:36.055308 extend-filesystems[1235]: Found sr0
May 13 00:53:36.055531 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:53:36.055648 systemd[1]: Finished extend-filesystems.service.
May 13 00:53:36.056542 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:53:36.056629 systemd[1]: Finished motdgen.service.
May 13 00:53:36.081203 env[1248]: time="2025-05-13T00:53:36.080124403Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 13 00:53:36.083189 kernel: NET: Registered PF_VSOCK protocol family
May 13 00:53:36.085084 update_engine[1241]: I0513 00:53:36.083801 1241 main.cc:92] Flatcar Update Engine starting
May 13 00:53:36.087421 update_engine[1241]: I0513 00:53:36.087368 1241 update_check_scheduler.cc:74] Next update check in 5m50s
May 13 00:53:36.090069 systemd[1]: Started update-engine.service.
May 13 00:53:36.090612 bash[1288]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:53:36.091478 systemd[1]: Started locksmithd.service.
May 13 00:53:36.093353 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 13 00:53:36.122739 systemd-logind[1240]: Watching system buttons on /dev/input/event1 (Power Button)
May 13 00:53:36.122758 systemd-logind[1240]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 00:53:36.123994 systemd-logind[1240]: New seat seat0.
May 13 00:53:36.129868 systemd[1]: Started systemd-logind.service.
May 13 00:53:36.135685 env[1248]: time="2025-05-13T00:53:36.135657053Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:53:36.139647 env[1248]: time="2025-05-13T00:53:36.139431385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142424089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142441818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142568559Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142578934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142586904Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142592192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142633912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142761521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142826968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:53:36.142944 env[1248]: time="2025-05-13T00:53:36.142837199Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:53:36.143220 env[1248]: time="2025-05-13T00:53:36.142863350Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 13 00:53:36.143220 env[1248]: time="2025-05-13T00:53:36.142870453Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144374826Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144390249Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144398925Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144417834Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144425998Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144433837Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144440901Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144448666Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144456633Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144463894Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144474926Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144486164Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144538950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:53:36.145984 env[1248]: time="2025-05-13T00:53:36.144591492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:53:36.145655 systemd[1]: Started containerd.service.
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144727979Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144748088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144756731Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144784603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144792687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144799558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144806337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144815926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144827050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144835809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144842276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144849729Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144920739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144930475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..."
type=io.containerd.grpc.v1 May 13 00:53:36.146244 env[1248]: time="2025-05-13T00:53:36.144937348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:53:36.146490 env[1248]: time="2025-05-13T00:53:36.144943438Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:53:36.146490 env[1248]: time="2025-05-13T00:53:36.144951404Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:53:36.146490 env[1248]: time="2025-05-13T00:53:36.144957246Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:53:36.146490 env[1248]: time="2025-05-13T00:53:36.144967150Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:53:36.146490 env[1248]: time="2025-05-13T00:53:36.144993393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 00:53:36.146568 env[1248]: time="2025-05-13T00:53:36.145109684Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:53:36.146568 env[1248]: time="2025-05-13T00:53:36.145141471Z" level=info msg="Connect containerd service" May 13 00:53:36.146568 env[1248]: time="2025-05-13T00:53:36.145160205Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:53:36.146568 env[1248]: time="2025-05-13T00:53:36.145439998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:53:36.146568 env[1248]: time="2025-05-13T00:53:36.145564840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:53:36.146568 env[1248]: time="2025-05-13T00:53:36.145586934Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 13 00:53:36.146568 env[1248]: time="2025-05-13T00:53:36.145612082Z" level=info msg="containerd successfully booted in 0.066152s" May 13 00:53:36.150054 env[1248]: time="2025-05-13T00:53:36.150006510Z" level=info msg="Start subscribing containerd event" May 13 00:53:36.150084 env[1248]: time="2025-05-13T00:53:36.150068201Z" level=info msg="Start recovering state" May 13 00:53:36.150117 env[1248]: time="2025-05-13T00:53:36.150107012Z" level=info msg="Start event monitor" May 13 00:53:36.150143 env[1248]: time="2025-05-13T00:53:36.150118716Z" level=info msg="Start snapshots syncer" May 13 00:53:36.150143 env[1248]: time="2025-05-13T00:53:36.150126093Z" level=info msg="Start cni network conf syncer for default" May 13 00:53:36.150143 env[1248]: time="2025-05-13T00:53:36.150133324Z" level=info msg="Start streaming server" May 13 00:53:36.191473 systemd-networkd[1061]: ens192: Gained IPv6LL May 13 00:53:36.192869 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:53:36.193166 systemd[1]: Reached target network-online.target. May 13 00:53:36.194347 systemd[1]: Starting kubelet.service... May 13 00:53:36.616115 tar[1247]: linux-amd64/README.md May 13 00:53:36.622086 systemd[1]: Finished prepare-helm.service. May 13 00:53:36.637625 locksmithd[1293]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:53:37.213061 sshd_keygen[1258]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:53:37.227564 systemd[1]: Finished sshd-keygen.service. May 13 00:53:37.228696 systemd[1]: Starting issuegen.service... May 13 00:53:37.231662 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:53:37.231753 systemd[1]: Finished issuegen.service. May 13 00:53:37.232767 systemd[1]: Starting systemd-user-sessions.service... May 13 00:53:37.243233 systemd[1]: Finished systemd-user-sessions.service. May 13 00:53:37.244142 systemd[1]: Started getty@tty1.service. 
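The containerd startup sequence above ends with a "successfully booted in 0.066152s" message. As an illustrative aside (the sample line is copied from this log; the parsing pipeline itself is a hypothetical convenience, not something the host ran), the reported boot duration can be pulled out of such a line with standard tools:

```shell
# Extract containerd's reported boot duration from a log line like the one
# above. The line is copied verbatim from this log; the sed expression is an
# illustrative sketch.
line='time="2025-05-13T00:53:36.145612082Z" level=info msg="containerd successfully booted in 0.066152s"'
duration=$(printf '%s\n' "$line" | sed -n 's/.*successfully booted in \([0-9.]*s\)".*/\1/p')
echo "$duration"
```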
May 13 00:53:37.244910 systemd[1]: Started serial-getty@ttyS0.service. May 13 00:53:37.245104 systemd[1]: Reached target getty.target. May 13 00:53:37.568516 systemd[1]: Started kubelet.service. May 13 00:53:37.568931 systemd[1]: Reached target multi-user.target. May 13 00:53:37.570059 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:53:37.575501 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:53:37.575608 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:53:37.575837 systemd[1]: Startup finished in 843ms (kernel) + 5.007s (initrd) + 5.579s (userspace) = 11.430s. May 13 00:53:37.607407 login[1360]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 00:53:37.608747 login[1361]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 00:53:37.617124 systemd[1]: Created slice user-500.slice. May 13 00:53:37.618075 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:53:37.622136 systemd-logind[1240]: New session 1 of user core. May 13 00:53:37.625377 systemd-logind[1240]: New session 2 of user core. May 13 00:53:37.628122 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:53:37.629228 systemd[1]: Starting user@500.service... May 13 00:53:37.632268 (systemd)[1367]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:53:37.776987 systemd[1367]: Queued start job for default target default.target. May 13 00:53:37.777732 systemd[1367]: Reached target paths.target. May 13 00:53:37.777750 systemd[1367]: Reached target sockets.target. May 13 00:53:37.777759 systemd[1367]: Reached target timers.target. May 13 00:53:37.777767 systemd[1367]: Reached target basic.target. May 13 00:53:37.777832 systemd[1]: Started user@500.service. May 13 00:53:37.778636 systemd[1]: Started session-1.scope. May 13 00:53:37.778736 systemd[1367]: Reached target default.target. 
May 13 00:53:37.778763 systemd[1367]: Startup finished in 142ms. May 13 00:53:37.779166 systemd[1]: Started session-2.scope. May 13 00:53:38.604843 kubelet[1364]: E0513 00:53:38.604812 1364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:53:38.606037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:53:38.606119 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:53:48.856668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:53:48.856802 systemd[1]: Stopped kubelet.service. May 13 00:53:48.857906 systemd[1]: Starting kubelet.service... May 13 00:53:49.191101 systemd[1]: Started kubelet.service. May 13 00:53:49.263943 kubelet[1396]: E0513 00:53:49.263917 1396 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:53:49.266043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:53:49.266157 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:53:51.123587 systemd[1]: Created slice system-sshd.slice. May 13 00:53:51.124400 systemd[1]: Started sshd@0-139.178.70.106:22-82.193.122.91:36939.service. 
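The kubelet failure above (exit status 1, "open /var/lib/kubelet/config.yaml: no such file or directory") recurs throughout the rest of this log: that file is normally written by `kubeadm init` or `kubeadm join`, which evidently has not run yet on this node. A minimal pre-flight check along these lines (hypothetical, not taken from the host; `KUBELET_CONFIG` is an illustrative override) would surface the problem before systemd starts cycling restarts:

```shell
# Hypothetical pre-flight check for the failure seen in the log: kubelet
# exits with status 1 when its config file is absent. KUBELET_CONFIG is
# overridable so the check can be exercised anywhere.
KUBELET_CONFIG="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ -f "$KUBELET_CONFIG" ]; then
  echo "kubelet config present: $KUBELET_CONFIG"
else
  echo "kubelet config missing: $KUBELET_CONFIG (expected from 'kubeadm init' or 'kubeadm join')"
fi
```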
May 13 00:53:52.390293 sshd[1402]: Invalid user webmaster from 82.193.122.91 port 36939 May 13 00:53:52.392585 sshd[1402]: pam_faillock(sshd:auth): User unknown May 13 00:53:52.393035 sshd[1402]: pam_unix(sshd:auth): check pass; user unknown May 13 00:53:52.393100 sshd[1402]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=82.193.122.91 May 13 00:53:52.393421 sshd[1402]: pam_faillock(sshd:auth): User unknown May 13 00:53:53.975631 sshd[1402]: Failed password for invalid user webmaster from 82.193.122.91 port 36939 ssh2 May 13 00:53:54.286482 sshd[1404]: pam_faillock(sshd:auth): User unknown May 13 00:53:54.292287 sshd[1402]: Postponed keyboard-interactive for invalid user webmaster from 82.193.122.91 port 36939 ssh2 [preauth] May 13 00:53:54.513313 sshd[1404]: pam_unix(sshd:auth): check pass; user unknown May 13 00:53:54.513674 sshd[1404]: pam_faillock(sshd:auth): User unknown May 13 00:53:56.703183 sshd[1402]: PAM: Permission denied for illegal user webmaster from 82.193.122.91 May 13 00:53:56.703512 sshd[1402]: Failed keyboard-interactive/pam for invalid user webmaster from 82.193.122.91 port 36939 ssh2 May 13 00:53:57.009977 sshd[1402]: Connection closed by invalid user webmaster 82.193.122.91 port 36939 [preauth] May 13 00:53:57.010681 systemd[1]: sshd@0-139.178.70.106:22-82.193.122.91:36939.service: Deactivated successfully. May 13 00:53:59.516711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:53:59.516830 systemd[1]: Stopped kubelet.service. May 13 00:53:59.517815 systemd[1]: Starting kubelet.service... May 13 00:53:59.776493 systemd[1]: Started kubelet.service. 
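The block above records a password-guessing attempt against the invalid user "webmaster" from 82.193.122.91, which sshd rejects and closes pre-auth. As a hedged illustration (the sample lines are abbreviated copies from this log; the pipeline is a sketch, not part of the boot sequence), offending source addresses can be tallied from sshd messages like these:

```shell
# Count failed-auth source IPs from sshd log lines. The sample lines are
# shortened copies of entries above; the grep/awk pipeline is illustrative.
cat <<'EOF' | grep -o 'from [0-9.]*' | awk '{print $2}' | sort | uniq -c
sshd[1402]: Invalid user webmaster from 82.193.122.91 port 36939
sshd[1402]: Failed password for invalid user webmaster from 82.193.122.91 port 36939 ssh2
sshd[1402]: PAM: Permission denied for illegal user webmaster from 82.193.122.91
EOF
```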
May 13 00:53:59.798443 kubelet[1410]: E0513 00:53:59.798409 1410 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:53:59.799513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:53:59.799585 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:54:03.788125 systemd[1]: Started sshd@1-139.178.70.106:22-116.141.105.6:35636.service. May 13 00:54:06.159156 systemd[1]: Started sshd@2-139.178.70.106:22-147.75.109.163:52600.service. May 13 00:54:06.193734 sshd[1420]: Accepted publickey for core from 147.75.109.163 port 52600 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:54:06.194504 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:54:06.197575 systemd[1]: Started session-3.scope. May 13 00:54:06.198374 systemd-logind[1240]: New session 3 of user core. May 13 00:54:06.246177 systemd[1]: Started sshd@3-139.178.70.106:22-147.75.109.163:52614.service. May 13 00:54:06.281962 sshd[1425]: Accepted publickey for core from 147.75.109.163 port 52614 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:54:06.282769 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:54:06.285903 systemd[1]: Started session-4.scope. May 13 00:54:06.286079 systemd-logind[1240]: New session 4 of user core. May 13 00:54:06.337008 sshd[1425]: pam_unix(sshd:session): session closed for user core May 13 00:54:06.339300 systemd[1]: Started sshd@4-139.178.70.106:22-147.75.109.163:52618.service. May 13 00:54:06.339596 systemd[1]: sshd@3-139.178.70.106:22-147.75.109.163:52614.service: Deactivated successfully. 
May 13 00:54:06.340212 systemd-logind[1240]: Session 4 logged out. Waiting for processes to exit. May 13 00:54:06.340242 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:54:06.340977 systemd-logind[1240]: Removed session 4. May 13 00:54:06.373136 sshd[1430]: Accepted publickey for core from 147.75.109.163 port 52618 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:54:06.374029 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:54:06.376868 systemd-logind[1240]: New session 5 of user core. May 13 00:54:06.377400 systemd[1]: Started session-5.scope. May 13 00:54:06.424416 sshd[1430]: pam_unix(sshd:session): session closed for user core May 13 00:54:06.426542 systemd[1]: sshd@4-139.178.70.106:22-147.75.109.163:52618.service: Deactivated successfully. May 13 00:54:06.426857 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:54:06.427198 systemd-logind[1240]: Session 5 logged out. Waiting for processes to exit. May 13 00:54:06.427820 systemd[1]: Started sshd@5-139.178.70.106:22-147.75.109.163:52622.service. May 13 00:54:06.428426 systemd-logind[1240]: Removed session 5. May 13 00:54:06.461365 sshd[1437]: Accepted publickey for core from 147.75.109.163 port 52622 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:54:06.462290 sshd[1437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:54:06.464983 systemd-logind[1240]: New session 6 of user core. May 13 00:54:06.465740 systemd[1]: Started session-6.scope. May 13 00:54:06.488635 sshd[1417]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.141.105.6 user=lp May 13 00:54:06.515206 sshd[1437]: pam_unix(sshd:session): session closed for user core May 13 00:54:06.516901 systemd[1]: Started sshd@6-139.178.70.106:22-147.75.109.163:52630.service. 
May 13 00:54:06.517454 systemd[1]: sshd@5-139.178.70.106:22-147.75.109.163:52622.service: Deactivated successfully. May 13 00:54:06.517866 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:54:06.518220 systemd-logind[1240]: Session 6 logged out. Waiting for processes to exit. May 13 00:54:06.518857 systemd-logind[1240]: Removed session 6. May 13 00:54:06.550073 sshd[1442]: Accepted publickey for core from 147.75.109.163 port 52630 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:54:06.550760 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:54:06.553199 systemd-logind[1240]: New session 7 of user core. May 13 00:54:06.553656 systemd[1]: Started session-7.scope. May 13 00:54:06.611443 sudo[1446]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:54:06.611573 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:54:06.625689 systemd[1]: Starting docker.service... 
May 13 00:54:06.650804 env[1456]: time="2025-05-13T00:54:06.650778666Z" level=info msg="Starting up" May 13 00:54:06.651706 env[1456]: time="2025-05-13T00:54:06.651694945Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:54:06.651763 env[1456]: time="2025-05-13T00:54:06.651752770Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:54:06.651816 env[1456]: time="2025-05-13T00:54:06.651805379Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:54:06.651858 env[1456]: time="2025-05-13T00:54:06.651849593Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:54:06.652739 env[1456]: time="2025-05-13T00:54:06.652729476Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:54:06.652796 env[1456]: time="2025-05-13T00:54:06.652787536Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:54:06.652842 env[1456]: time="2025-05-13T00:54:06.652831560Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:54:06.652885 env[1456]: time="2025-05-13T00:54:06.652875886Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:54:06.655692 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1848437465-merged.mount: Deactivated successfully. May 13 00:54:06.703911 env[1456]: time="2025-05-13T00:54:06.703431975Z" level=info msg="Loading containers: start." May 13 00:54:06.780360 kernel: Initializing XFRM netlink socket May 13 00:54:06.846438 env[1456]: time="2025-05-13T00:54:06.846412370Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 13 00:54:06.890177 systemd-networkd[1061]: docker0: Link UP May 13 00:54:06.900067 env[1456]: time="2025-05-13T00:54:06.900044265Z" level=info msg="Loading containers: done." May 13 00:54:06.906176 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck589240664-merged.mount: Deactivated successfully. May 13 00:54:06.929822 env[1456]: time="2025-05-13T00:54:06.929800825Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:54:06.930037 env[1456]: time="2025-05-13T00:54:06.930027084Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:54:06.930148 env[1456]: time="2025-05-13T00:54:06.930139468Z" level=info msg="Daemon has completed initialization" May 13 00:54:06.992655 env[1456]: time="2025-05-13T00:54:06.992208584Z" level=info msg="API listen on /run/docker.sock" May 13 00:54:06.992290 systemd[1]: Started docker.service. May 13 00:54:08.031016 env[1248]: time="2025-05-13T00:54:08.030991513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 00:54:08.656507 sshd[1417]: Failed password for lp from 116.141.105.6 port 35636 ssh2 May 13 00:54:08.702943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463563241.mount: Deactivated successfully. May 13 00:54:10.050058 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 00:54:10.050189 systemd[1]: Stopped kubelet.service. May 13 00:54:10.051212 systemd[1]: Starting kubelet.service... 
May 13 00:54:10.053365 env[1248]: time="2025-05-13T00:54:10.053327695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:10.055541 env[1248]: time="2025-05-13T00:54:10.055516440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:10.061362 env[1248]: time="2025-05-13T00:54:10.061316247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:10.069374 env[1248]: time="2025-05-13T00:54:10.069336461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:10.069674 env[1248]: time="2025-05-13T00:54:10.069659000Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 13 00:54:10.070499 env[1248]: time="2025-05-13T00:54:10.070485913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 00:54:10.238870 systemd[1]: Started kubelet.service. 
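Each completed pull above is logged as `PullImage "<tag>" returns image reference "sha256:<digest>"`, pinning the tag to a content digest. A hypothetical one-liner (the sample line is copied from this log, escapes included; the sed expression is illustrative) to map a tag to its returned digest from such a line:

```shell
# Map a pulled image tag to the returned reference digest from a containerd
# log entry. The line is copied from the log above (with its literal \"
# escapes); the sed expression is an illustrative sketch.
line='msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""'
printf '%s\n' "$line" | sed -n 's/.*PullImage \\"\([^\\]*\)\\" returns image reference \\"\(sha256:[0-9a-f]*\)\\".*/\1 -> \2/p'
```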
May 13 00:54:10.261236 kubelet[1583]: E0513 00:54:10.261195 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:54:10.262285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:54:10.262381 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:54:11.678814 env[1248]: time="2025-05-13T00:54:11.678751535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:11.680084 env[1248]: time="2025-05-13T00:54:11.680058912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:11.681290 env[1248]: time="2025-05-13T00:54:11.681270187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:11.682910 env[1248]: time="2025-05-13T00:54:11.682895108Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:11.683333 env[1248]: time="2025-05-13T00:54:11.683312221Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 13 00:54:11.683722 env[1248]: 
time="2025-05-13T00:54:11.683711124Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 00:54:12.719451 sshd[1417]: PAM: Permission denied for lp from 116.141.105.6 May 13 00:54:13.066051 env[1248]: time="2025-05-13T00:54:13.065962429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:13.077583 env[1248]: time="2025-05-13T00:54:13.077553559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:13.080377 env[1248]: time="2025-05-13T00:54:13.080348516Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:13.084951 env[1248]: time="2025-05-13T00:54:13.084933331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:13.085677 env[1248]: time="2025-05-13T00:54:13.085659566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 13 00:54:13.086567 env[1248]: time="2025-05-13T00:54:13.086552224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 00:54:13.390363 sshd[1417]: Connection closed by authenticating user lp 116.141.105.6 port 35636 [preauth] May 13 00:54:13.391407 systemd[1]: sshd@1-139.178.70.106:22-116.141.105.6:35636.service: Deactivated successfully. 
May 13 00:54:14.474314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913442135.mount: Deactivated successfully. May 13 00:54:15.219728 env[1248]: time="2025-05-13T00:54:15.219691251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:15.247244 env[1248]: time="2025-05-13T00:54:15.247210357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:15.274420 env[1248]: time="2025-05-13T00:54:15.274396664Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:15.308526 env[1248]: time="2025-05-13T00:54:15.308495111Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:15.309086 env[1248]: time="2025-05-13T00:54:15.309057064Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 00:54:15.309510 env[1248]: time="2025-05-13T00:54:15.309493308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 00:54:16.254480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550053545.mount: Deactivated successfully. 
May 13 00:54:17.443751 env[1248]: time="2025-05-13T00:54:17.443703140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:17.470299 env[1248]: time="2025-05-13T00:54:17.470265011Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:17.484909 env[1248]: time="2025-05-13T00:54:17.484882559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:17.511844 env[1248]: time="2025-05-13T00:54:17.511788156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:17.512494 env[1248]: time="2025-05-13T00:54:17.512468089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 13 00:54:17.513356 env[1248]: time="2025-05-13T00:54:17.513330453Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:54:18.641853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480283539.mount: Deactivated successfully. 
May 13 00:54:18.673903 env[1248]: time="2025-05-13T00:54:18.673868667Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:18.681980 env[1248]: time="2025-05-13T00:54:18.681953496Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:18.689388 env[1248]: time="2025-05-13T00:54:18.689366407Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:18.694995 env[1248]: time="2025-05-13T00:54:18.694967563Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:18.695352 env[1248]: time="2025-05-13T00:54:18.695322506Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 00:54:18.695859 env[1248]: time="2025-05-13T00:54:18.695843661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 00:54:19.573595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300200841.mount: Deactivated successfully. May 13 00:54:20.512889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 00:54:20.513021 systemd[1]: Stopped kubelet.service. May 13 00:54:20.514080 systemd[1]: Starting kubelet.service... May 13 00:54:21.151321 update_engine[1241]: I0513 00:54:21.151104 1241 update_attempter.cc:509] Updating boot flags... May 13 00:54:22.188970 systemd[1]: Started kubelet.service. 
May 13 00:54:22.249545 kubelet[1611]: E0513 00:54:22.249512 1611 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:54:22.250927 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:54:22.251061 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:54:24.523414 env[1248]: time="2025-05-13T00:54:24.523385632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:24.541425 env[1248]: time="2025-05-13T00:54:24.541398475Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:24.555536 env[1248]: time="2025-05-13T00:54:24.555518087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:24.567413 env[1248]: time="2025-05-13T00:54:24.567391086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:24.568121 env[1248]: time="2025-05-13T00:54:24.568098488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 13 00:54:27.950020 systemd[1]: Stopped kubelet.service. May 13 00:54:27.951544 systemd[1]: Starting kubelet.service... 
May 13 00:54:27.969770 systemd[1]: Reloading. May 13 00:54:28.016542 /usr/lib/systemd/system-generators/torcx-generator[1661]: time="2025-05-13T00:54:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:54:28.016750 /usr/lib/systemd/system-generators/torcx-generator[1661]: time="2025-05-13T00:54:28Z" level=info msg="torcx already run" May 13 00:54:28.082836 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:54:28.082849 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:54:28.095756 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:54:28.155127 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:54:28.155183 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:54:28.155417 systemd[1]: Stopped kubelet.service. May 13 00:54:28.157190 systemd[1]: Starting kubelet.service... May 13 00:54:29.898447 systemd[1]: Started kubelet.service. May 13 00:54:30.130898 kubelet[1724]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:54:30.130898 kubelet[1724]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 13 00:54:30.130898 kubelet[1724]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:54:30.131136 kubelet[1724]: I0513 00:54:30.130952 1724 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:54:30.550984 kubelet[1724]: I0513 00:54:30.550960 1724 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:54:30.551102 kubelet[1724]: I0513 00:54:30.551090 1724 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:54:30.551320 kubelet[1724]: I0513 00:54:30.551311 1724 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:54:30.577599 kubelet[1724]: E0513 00:54:30.577570 1724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 13 00:54:30.578455 kubelet[1724]: I0513 00:54:30.578436 1724 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:54:30.585182 kubelet[1724]: E0513 00:54:30.585151 1724 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:54:30.585182 kubelet[1724]: I0513 00:54:30.585180 1724 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 13 00:54:30.587364 kubelet[1724]: I0513 00:54:30.587323 1724 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:54:30.588139 kubelet[1724]: I0513 00:54:30.588112 1724 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:54:30.588271 kubelet[1724]: I0513 00:54:30.588138 1724 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVer
sion":2} May 13 00:54:30.588344 kubelet[1724]: I0513 00:54:30.588277 1724 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:54:30.588344 kubelet[1724]: I0513 00:54:30.588285 1724 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:54:30.588387 kubelet[1724]: I0513 00:54:30.588371 1724 state_mem.go:36] "Initialized new in-memory state store" May 13 00:54:30.591478 kubelet[1724]: I0513 00:54:30.591461 1724 kubelet.go:446] "Attempting to sync node with API server" May 13 00:54:30.591478 kubelet[1724]: I0513 00:54:30.591477 1724 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:54:30.591544 kubelet[1724]: I0513 00:54:30.591496 1724 kubelet.go:352] "Adding apiserver pod source" May 13 00:54:30.591577 kubelet[1724]: I0513 00:54:30.591566 1724 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:54:30.597676 kubelet[1724]: W0513 00:54:30.597621 1724 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 13 00:54:30.597728 kubelet[1724]: E0513 00:54:30.597676 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 13 00:54:30.599722 kubelet[1724]: I0513 00:54:30.599710 1724 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:54:30.600052 kubelet[1724]: I0513 00:54:30.600042 1724 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 
00:54:30.602240 kubelet[1724]: W0513 00:54:30.602230 1724 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:54:30.606864 kubelet[1724]: I0513 00:54:30.606848 1724 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:54:30.606961 kubelet[1724]: I0513 00:54:30.606952 1724 server.go:1287] "Started kubelet" May 13 00:54:30.607129 kubelet[1724]: W0513 00:54:30.607105 1724 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 13 00:54:30.607192 kubelet[1724]: E0513 00:54:30.607180 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 13 00:54:30.610457 kubelet[1724]: I0513 00:54:30.610436 1724 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:54:30.611149 kubelet[1724]: I0513 00:54:30.611134 1724 server.go:490] "Adding debug handlers to kubelet server" May 13 00:54:30.613347 kubelet[1724]: I0513 00:54:30.613303 1724 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:54:30.613559 kubelet[1724]: I0513 00:54:30.613550 1724 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:54:30.621041 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 13 00:54:30.621106 kubelet[1724]: E0513 00:54:30.621065 1724 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:54:30.621400 kubelet[1724]: I0513 00:54:30.621391 1724 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:54:30.624290 kubelet[1724]: I0513 00:54:30.624279 1724 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:54:30.624918 kubelet[1724]: E0513 00:54:30.621436 1724 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.106:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ef0132c7051fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:54:30.606934524 +0000 UTC m=+0.705438933,LastTimestamp:2025-05-13 00:54:30.606934524 +0000 UTC m=+0.705438933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:54:30.625599 kubelet[1724]: E0513 00:54:30.625549 1724 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:54:30.625599 kubelet[1724]: I0513 00:54:30.625570 1724 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:54:30.625699 kubelet[1724]: I0513 00:54:30.625689 1724 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:54:30.625730 kubelet[1724]: I0513 00:54:30.625714 1724 reconciler.go:26] "Reconciler: start to sync state" May 13 00:54:30.626019 kubelet[1724]: W0513 00:54:30.625997 1724 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 13 00:54:30.626056 kubelet[1724]: E0513 00:54:30.626025 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 13 00:54:30.629047 kubelet[1724]: I0513 00:54:30.629025 1724 factory.go:221] Registration of the containerd container factory successfully May 13 00:54:30.629047 kubelet[1724]: I0513 00:54:30.629044 1724 factory.go:221] Registration of the systemd container factory successfully May 13 00:54:30.629142 kubelet[1724]: I0513 00:54:30.629078 1724 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:54:30.631650 kubelet[1724]: E0513 00:54:30.631631 1724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="200ms" May 13 00:54:30.653152 kubelet[1724]: I0513 00:54:30.653133 1724 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:54:30.653240 kubelet[1724]: I0513 00:54:30.653230 1724 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:54:30.653306 kubelet[1724]: I0513 00:54:30.653298 1724 state_mem.go:36] "Initialized new in-memory state store" May 13 00:54:30.695863 kubelet[1724]: I0513 00:54:30.695842 1724 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 13 00:54:30.782935 kubelet[1724]: I0513 00:54:30.697063 1724 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:54:30.782935 kubelet[1724]: I0513 00:54:30.697086 1724 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:54:30.782935 kubelet[1724]: I0513 00:54:30.697101 1724 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 00:54:30.782935 kubelet[1724]: I0513 00:54:30.697106 1724 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:54:30.782935 kubelet[1724]: E0513 00:54:30.697151 1724 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:54:30.782935 kubelet[1724]: W0513 00:54:30.697581 1724 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused May 13 00:54:30.782935 kubelet[1724]: E0513 00:54:30.697603 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError" May 13 00:54:30.782935 kubelet[1724]: E0513 00:54:30.726450 1724 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:54:30.792270 kubelet[1724]: I0513 00:54:30.792249 1724 policy_none.go:49] "None policy: Start" May 13 00:54:30.792388 kubelet[1724]: I0513 00:54:30.792374 1724 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:54:30.792464 kubelet[1724]: I0513 
00:54:30.792455 1724 state_mem.go:35] "Initializing new in-memory state store" May 13 00:54:30.797791 kubelet[1724]: E0513 00:54:30.797768 1724 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:54:30.827182 kubelet[1724]: E0513 00:54:30.827147 1724 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:54:30.832674 kubelet[1724]: E0513 00:54:30.832648 1724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="400ms" May 13 00:54:30.848875 systemd[1]: Created slice kubepods.slice. May 13 00:54:30.851865 systemd[1]: Created slice kubepods-burstable.slice. May 13 00:54:30.854540 systemd[1]: Created slice kubepods-besteffort.slice. May 13 00:54:30.862795 kubelet[1724]: I0513 00:54:30.862781 1724 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:54:30.862953 kubelet[1724]: I0513 00:54:30.862946 1724 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:54:30.863044 kubelet[1724]: I0513 00:54:30.863023 1724 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:54:30.865921 kubelet[1724]: I0513 00:54:30.865848 1724 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:54:30.867444 kubelet[1724]: E0513 00:54:30.867428 1724 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:54:30.867546 kubelet[1724]: E0513 00:54:30.867535 1724 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:54:30.964505 kubelet[1724]: I0513 00:54:30.964490 1724 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:54:30.964923 kubelet[1724]: E0513 00:54:30.964910 1724 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 13 00:54:31.002724 systemd[1]: Created slice kubepods-burstable-podb145083ecbdcfe417353aa3fdc20f4ef.slice. May 13 00:54:31.018530 kubelet[1724]: E0513 00:54:31.018509 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:54:31.019977 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 13 00:54:31.021590 kubelet[1724]: E0513 00:54:31.021563 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:54:31.027437 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
May 13 00:54:31.027889 kubelet[1724]: I0513 00:54:31.027864 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b145083ecbdcfe417353aa3fdc20f4ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b145083ecbdcfe417353aa3fdc20f4ef\") " pod="kube-system/kube-apiserver-localhost" May 13 00:54:31.027889 kubelet[1724]: I0513 00:54:31.027886 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b145083ecbdcfe417353aa3fdc20f4ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b145083ecbdcfe417353aa3fdc20f4ef\") " pod="kube-system/kube-apiserver-localhost" May 13 00:54:31.027966 kubelet[1724]: I0513 00:54:31.027899 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:54:31.027966 kubelet[1724]: I0513 00:54:31.027909 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:54:31.027966 kubelet[1724]: I0513 00:54:31.027920 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 00:54:31.027966 kubelet[1724]: I0513 00:54:31.027930 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b145083ecbdcfe417353aa3fdc20f4ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b145083ecbdcfe417353aa3fdc20f4ef\") " pod="kube-system/kube-apiserver-localhost" May 13 00:54:31.027966 kubelet[1724]: I0513 00:54:31.027941 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:54:31.028078 kubelet[1724]: I0513 00:54:31.027960 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:54:31.028078 kubelet[1724]: I0513 00:54:31.027971 1724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:54:31.029100 kubelet[1724]: E0513 00:54:31.029089 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:54:31.167482 kubelet[1724]: I0513 00:54:31.167415 1724 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:54:31.170280 
kubelet[1724]: E0513 00:54:31.170169 1724 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 13 00:54:31.233953 kubelet[1724]: E0513 00:54:31.233926 1724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="800ms" May 13 00:54:31.319834 env[1248]: time="2025-05-13T00:54:31.319795369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b145083ecbdcfe417353aa3fdc20f4ef,Namespace:kube-system,Attempt:0,}" May 13 00:54:31.322747 env[1248]: time="2025-05-13T00:54:31.322669974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 00:54:31.330805 env[1248]: time="2025-05-13T00:54:31.330718363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 00:54:31.571862 kubelet[1724]: I0513 00:54:31.571799 1724 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:54:31.572373 kubelet[1724]: E0513 00:54:31.572328 1724 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost" May 13 00:54:31.925196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706832899.mount: Deactivated successfully. 
May 13 00:54:31.957015 kubelet[1724]: W0513 00:54:31.956951 1724 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused
May 13 00:54:31.957015 kubelet[1724]: E0513 00:54:31.956995 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError"
May 13 00:54:31.957286 env[1248]: time="2025-05-13T00:54:31.957266885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:31.983227 env[1248]: time="2025-05-13T00:54:31.983189458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:31.988639 env[1248]: time="2025-05-13T00:54:31.988618239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:31.995706 kubelet[1724]: W0513 00:54:31.995654 1724 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused
May 13 00:54:31.995706 kubelet[1724]: E0513 00:54:31.995685 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError"
May 13 00:54:31.996449 env[1248]: time="2025-05-13T00:54:31.996429720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.019316 env[1248]: time="2025-05-13T00:54:32.019288802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.026753 env[1248]: time="2025-05-13T00:54:32.026715616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.034248 env[1248]: time="2025-05-13T00:54:32.034223093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.034626 kubelet[1724]: E0513 00:54:32.034598 1724 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.106:6443: connect: connection refused" interval="1.6s"
May 13 00:54:32.039202 env[1248]: time="2025-05-13T00:54:32.039180294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.054750 env[1248]: time="2025-05-13T00:54:32.054718595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.064063 kubelet[1724]: W0513 00:54:32.064019 1724 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused
May 13 00:54:32.064063 kubelet[1724]: E0513 00:54:32.064044 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError"
May 13 00:54:32.072792 env[1248]: time="2025-05-13T00:54:32.072771675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.081919 env[1248]: time="2025-05-13T00:54:32.081887699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.083120 env[1248]: time="2025-05-13T00:54:32.083103608Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:54:32.106244 env[1248]: time="2025-05-13T00:54:32.106050379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:54:32.106244 env[1248]: time="2025-05-13T00:54:32.106075889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:54:32.106244 env[1248]: time="2025-05-13T00:54:32.106083163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:54:32.106244 env[1248]: time="2025-05-13T00:54:32.106164679Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b6947a5f06e9e19bedee010d302b8c0df56be44cc92bacbf3f5bb8687cca644f pid=1771 runtime=io.containerd.runc.v2
May 13 00:54:32.130504 env[1248]: time="2025-05-13T00:54:32.110519390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:54:32.130504 env[1248]: time="2025-05-13T00:54:32.110575407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:54:32.130504 env[1248]: time="2025-05-13T00:54:32.110607950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:54:32.130504 env[1248]: time="2025-05-13T00:54:32.110754265Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/299ee1bd292d2649c36172c5d0c6988aa73f3e83dc44e01c9b5670fcb6ef2b0a pid=1779 runtime=io.containerd.runc.v2
May 13 00:54:32.120075 systemd[1]: Started cri-containerd-b6947a5f06e9e19bedee010d302b8c0df56be44cc92bacbf3f5bb8687cca644f.scope.
May 13 00:54:32.129456 systemd[1]: Started cri-containerd-299ee1bd292d2649c36172c5d0c6988aa73f3e83dc44e01c9b5670fcb6ef2b0a.scope.
May 13 00:54:32.161743 env[1248]: time="2025-05-13T00:54:32.161712727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6947a5f06e9e19bedee010d302b8c0df56be44cc92bacbf3f5bb8687cca644f\""
May 13 00:54:32.163882 env[1248]: time="2025-05-13T00:54:32.163852623Z" level=info msg="CreateContainer within sandbox \"b6947a5f06e9e19bedee010d302b8c0df56be44cc92bacbf3f5bb8687cca644f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 13 00:54:32.170125 env[1248]: time="2025-05-13T00:54:32.170078934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:54:32.170265 env[1248]: time="2025-05-13T00:54:32.170105920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:54:32.170265 env[1248]: time="2025-05-13T00:54:32.170124709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:54:32.170265 env[1248]: time="2025-05-13T00:54:32.170217795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ea5454afa2f2f39037df6f95c93b3914554316152151e54adf9ff4971266e4 pid=1841 runtime=io.containerd.runc.v2
May 13 00:54:32.178135 systemd[1]: Started cri-containerd-52ea5454afa2f2f39037df6f95c93b3914554316152151e54adf9ff4971266e4.scope.
May 13 00:54:32.188010 env[1248]: time="2025-05-13T00:54:32.187985823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"299ee1bd292d2649c36172c5d0c6988aa73f3e83dc44e01c9b5670fcb6ef2b0a\""
May 13 00:54:32.189406 env[1248]: time="2025-05-13T00:54:32.189391404Z" level=info msg="CreateContainer within sandbox \"299ee1bd292d2649c36172c5d0c6988aa73f3e83dc44e01c9b5670fcb6ef2b0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 13 00:54:32.198029 kubelet[1724]: W0513 00:54:32.197967 1724 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.106:6443: connect: connection refused
May 13 00:54:32.198029 kubelet[1724]: E0513 00:54:32.198007 1724 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError"
May 13 00:54:32.213689 env[1248]: time="2025-05-13T00:54:32.213664211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b145083ecbdcfe417353aa3fdc20f4ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"52ea5454afa2f2f39037df6f95c93b3914554316152151e54adf9ff4971266e4\""
May 13 00:54:32.219924 env[1248]: time="2025-05-13T00:54:32.219905414Z" level=info msg="CreateContainer within sandbox \"52ea5454afa2f2f39037df6f95c93b3914554316152151e54adf9ff4971266e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 13 00:54:32.300628 env[1248]: time="2025-05-13T00:54:32.300595521Z" level=info msg="CreateContainer within sandbox \"b6947a5f06e9e19bedee010d302b8c0df56be44cc92bacbf3f5bb8687cca644f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de7173d8be1b4022482b6ee6cfa9b492bb1bb9e1ab8150e94812830eede49303\""
May 13 00:54:32.301882 env[1248]: time="2025-05-13T00:54:32.301865641Z" level=info msg="StartContainer for \"de7173d8be1b4022482b6ee6cfa9b492bb1bb9e1ab8150e94812830eede49303\""
May 13 00:54:32.308590 env[1248]: time="2025-05-13T00:54:32.308549994Z" level=info msg="CreateContainer within sandbox \"299ee1bd292d2649c36172c5d0c6988aa73f3e83dc44e01c9b5670fcb6ef2b0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ff84d758761d3839713dc5d2bce641116754b89e3ea665d5b3256d2ec8d774bf\""
May 13 00:54:32.309115 env[1248]: time="2025-05-13T00:54:32.309100947Z" level=info msg="StartContainer for \"ff84d758761d3839713dc5d2bce641116754b89e3ea665d5b3256d2ec8d774bf\""
May 13 00:54:32.311373 env[1248]: time="2025-05-13T00:54:32.311321331Z" level=info msg="CreateContainer within sandbox \"52ea5454afa2f2f39037df6f95c93b3914554316152151e54adf9ff4971266e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca0dc2800d7ea43a4054288e08dce7be8e428f007ba40a02578463cd1adffd6a\""
May 13 00:54:32.311845 env[1248]: time="2025-05-13T00:54:32.311830262Z" level=info msg="StartContainer for \"ca0dc2800d7ea43a4054288e08dce7be8e428f007ba40a02578463cd1adffd6a\""
May 13 00:54:32.318475 systemd[1]: Started cri-containerd-de7173d8be1b4022482b6ee6cfa9b492bb1bb9e1ab8150e94812830eede49303.scope.
May 13 00:54:32.330296 systemd[1]: Started cri-containerd-ff84d758761d3839713dc5d2bce641116754b89e3ea665d5b3256d2ec8d774bf.scope.
May 13 00:54:32.353712 systemd[1]: Started cri-containerd-ca0dc2800d7ea43a4054288e08dce7be8e428f007ba40a02578463cd1adffd6a.scope.
May 13 00:54:32.374404 kubelet[1724]: I0513 00:54:32.374144 1724 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:54:32.374404 kubelet[1724]: E0513 00:54:32.374372 1724 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.106:6443/api/v1/nodes\": dial tcp 139.178.70.106:6443: connect: connection refused" node="localhost"
May 13 00:54:32.399223 env[1248]: time="2025-05-13T00:54:32.399196799Z" level=info msg="StartContainer for \"ff84d758761d3839713dc5d2bce641116754b89e3ea665d5b3256d2ec8d774bf\" returns successfully"
May 13 00:54:32.399433 env[1248]: time="2025-05-13T00:54:32.399260699Z" level=info msg="StartContainer for \"de7173d8be1b4022482b6ee6cfa9b492bb1bb9e1ab8150e94812830eede49303\" returns successfully"
May 13 00:54:32.413492 env[1248]: time="2025-05-13T00:54:32.413460756Z" level=info msg="StartContainer for \"ca0dc2800d7ea43a4054288e08dce7be8e428f007ba40a02578463cd1adffd6a\" returns successfully"
May 13 00:54:32.681993 kubelet[1724]: E0513 00:54:32.681964 1724 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.106:6443: connect: connection refused" logger="UnhandledError"
May 13 00:54:32.701932 kubelet[1724]: E0513 00:54:32.701917 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:54:32.703139 kubelet[1724]: E0513 00:54:32.703130 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:54:32.704134 kubelet[1724]: E0513 00:54:32.704126 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:54:33.705732 kubelet[1724]: E0513 00:54:33.705716 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:54:33.706251 kubelet[1724]: E0513 00:54:33.706241 1724 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
May 13 00:54:33.975894 kubelet[1724]: I0513 00:54:33.975826 1724 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:54:34.409118 kubelet[1724]: E0513 00:54:34.409090 1724 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 13 00:54:34.523848 kubelet[1724]: I0513 00:54:34.523828 1724 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 13 00:54:34.528738 kubelet[1724]: I0513 00:54:34.528717 1724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 00:54:34.562050 kubelet[1724]: E0513 00:54:34.562025 1724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
May 13 00:54:34.562184 kubelet[1724]: I0513 00:54:34.562176 1724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:34.563388 kubelet[1724]: E0513 00:54:34.563377 1724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:34.563463 kubelet[1724]: I0513 00:54:34.563456 1724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:54:34.569545 kubelet[1724]: E0513 00:54:34.569521 1724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 13 00:54:34.600962 kubelet[1724]: I0513 00:54:34.600943 1724 apiserver.go:52] "Watching apiserver"
May 13 00:54:34.625914 kubelet[1724]: I0513 00:54:34.625886 1724 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:54:34.705678 kubelet[1724]: I0513 00:54:34.705621 1724 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:54:34.706904 kubelet[1724]: E0513 00:54:34.706892 1724 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
May 13 00:54:36.221915 systemd[1]: Reloading.
May 13 00:54:36.295081 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2025-05-13T00:54:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 13 00:54:36.295981 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2025-05-13T00:54:36Z" level=info msg="torcx already run"
May 13 00:54:36.338194 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:54:36.338210 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:54:36.350681 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:54:36.430375 systemd[1]: Stopping kubelet.service...
May 13 00:54:36.445667 systemd[1]: kubelet.service: Deactivated successfully.
May 13 00:54:36.445930 systemd[1]: Stopped kubelet.service.
May 13 00:54:36.448196 systemd[1]: Starting kubelet.service...
May 13 00:54:37.179215 systemd[1]: Started kubelet.service.
May 13 00:54:37.244313 kubelet[2088]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:54:37.244535 kubelet[2088]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 00:54:37.244576 kubelet[2088]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:54:37.244667 kubelet[2088]: I0513 00:54:37.244647 2088 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:54:37.245458 sudo[2098]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 13 00:54:37.245598 sudo[2098]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 13 00:54:37.252660 kubelet[2088]: I0513 00:54:37.252640 2088 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 00:54:37.252777 kubelet[2088]: I0513 00:54:37.252767 2088 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:54:37.253715 kubelet[2088]: I0513 00:54:37.252997 2088 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 00:54:37.256374 kubelet[2088]: I0513 00:54:37.255483 2088 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 13 00:54:37.260002 kubelet[2088]: I0513 00:54:37.259986 2088 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:54:37.262619 kubelet[2088]: E0513 00:54:37.262603 2088 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 13 00:54:37.262692 kubelet[2088]: I0513 00:54:37.262683 2088 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 13 00:54:37.264256 kubelet[2088]: I0513 00:54:37.264209 2088 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:54:37.264471 kubelet[2088]: I0513 00:54:37.264454 2088 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:54:37.264624 kubelet[2088]: I0513 00:54:37.264517 2088 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 00:54:37.264717 kubelet[2088]: I0513 00:54:37.264709 2088 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:54:37.264799 kubelet[2088]: I0513 00:54:37.264792 2088 container_manager_linux.go:304] "Creating device plugin manager"
May 13 00:54:37.268583 kubelet[2088]: I0513 00:54:37.268573 2088 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:54:37.268755 kubelet[2088]: I0513 00:54:37.268747 2088 kubelet.go:446] "Attempting to sync node with API server"
May 13 00:54:37.268808 kubelet[2088]: I0513 00:54:37.268801 2088 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:54:37.268861 kubelet[2088]: I0513 00:54:37.268854 2088 kubelet.go:352] "Adding apiserver pod source"
May 13 00:54:37.268912 kubelet[2088]: I0513 00:54:37.268905 2088 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:54:37.269378 kubelet[2088]: I0513 00:54:37.269355 2088 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 13 00:54:37.271051 kubelet[2088]: I0513 00:54:37.270870 2088 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:54:37.271136 kubelet[2088]: I0513 00:54:37.271125 2088 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 00:54:37.271168 kubelet[2088]: I0513 00:54:37.271144 2088 server.go:1287] "Started kubelet"
May 13 00:54:37.278616 kubelet[2088]: I0513 00:54:37.278579 2088 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:54:37.278816 kubelet[2088]: I0513 00:54:37.278807 2088 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:54:37.283759 kubelet[2088]: I0513 00:54:37.283747 2088 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:54:37.284532 kubelet[2088]: I0513 00:54:37.284525 2088 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 00:54:37.284638 kubelet[2088]: I0513 00:54:37.284624 2088 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:54:37.292968 kubelet[2088]: I0513 00:54:37.292951 2088 server.go:490] "Adding debug handlers to kubelet server"
May 13 00:54:37.300354 kubelet[2088]: I0513 00:54:37.297376 2088 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 00:54:37.308192 kubelet[2088]: I0513 00:54:37.307439 2088 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:54:37.308192 kubelet[2088]: I0513 00:54:37.307546 2088 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:54:37.316412 kubelet[2088]: I0513 00:54:37.316381 2088 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:54:37.317396 kubelet[2088]: I0513 00:54:37.317384 2088 factory.go:221] Registration of the containerd container factory successfully
May 13 00:54:37.317396 kubelet[2088]: I0513 00:54:37.317393 2088 factory.go:221] Registration of the systemd container factory successfully
May 13 00:54:37.324850 kubelet[2088]: E0513 00:54:37.324830 2088 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:54:37.332436 kubelet[2088]: I0513 00:54:37.332327 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:54:37.335145 kubelet[2088]: I0513 00:54:37.333090 2088 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:54:37.335145 kubelet[2088]: I0513 00:54:37.333103 2088 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 00:54:37.335145 kubelet[2088]: I0513 00:54:37.333115 2088 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 00:54:37.335145 kubelet[2088]: I0513 00:54:37.333119 2088 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 00:54:37.342627 kubelet[2088]: E0513 00:54:37.342606 2088 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:54:37.368935 kubelet[2088]: I0513 00:54:37.368910 2088 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 00:54:37.368935 kubelet[2088]: I0513 00:54:37.368929 2088 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 00:54:37.369047 kubelet[2088]: I0513 00:54:37.368944 2088 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:54:37.369047 kubelet[2088]: I0513 00:54:37.369027 2088 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 00:54:37.369047 kubelet[2088]: I0513 00:54:37.369033 2088 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 00:54:37.369047 kubelet[2088]: I0513 00:54:37.369044 2088 policy_none.go:49] "None policy: Start"
May 13 00:54:37.369119 kubelet[2088]: I0513 00:54:37.369050 2088 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 00:54:37.369119 kubelet[2088]: I0513 00:54:37.369056 2088 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:54:37.369119 kubelet[2088]: I0513 00:54:37.369111 2088 state_mem.go:75] "Updated machine memory state"
May 13 00:54:37.371099 kubelet[2088]: I0513 00:54:37.371089 2088 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:54:37.371229 kubelet[2088]: I0513 00:54:37.371222 2088 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 00:54:37.371293 kubelet[2088]: I0513 00:54:37.371276 2088 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:54:37.372165 kubelet[2088]: I0513 00:54:37.372158 2088 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:54:37.381157 kubelet[2088]: E0513 00:54:37.381141 2088 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 13 00:54:37.443128 kubelet[2088]: I0513 00:54:37.443075 2088 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:37.448955 kubelet[2088]: I0513 00:54:37.447898 2088 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 13 00:54:37.448955 kubelet[2088]: I0513 00:54:37.448071 2088 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 13 00:54:37.488498 kubelet[2088]: I0513 00:54:37.488484 2088 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 13 00:54:37.492034 kubelet[2088]: I0513 00:54:37.492017 2088 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 13 00:54:37.492104 kubelet[2088]: I0513 00:54:37.492058 2088 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 13 00:54:37.608566 kubelet[2088]: I0513 00:54:37.608549 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:37.608566 kubelet[2088]: I0513 00:54:37.608566 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:37.608679 kubelet[2088]: I0513 00:54:37.608578 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:54:37.608679 kubelet[2088]: I0513 00:54:37.608587 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b145083ecbdcfe417353aa3fdc20f4ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b145083ecbdcfe417353aa3fdc20f4ef\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:54:37.608679 kubelet[2088]: I0513 00:54:37.608597 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b145083ecbdcfe417353aa3fdc20f4ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b145083ecbdcfe417353aa3fdc20f4ef\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:54:37.608679 kubelet[2088]: I0513 00:54:37.608605 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:37.608679 kubelet[2088]: I0513 00:54:37.608617 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:37.608784 kubelet[2088]: I0513 00:54:37.608627 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:37.608784 kubelet[2088]: I0513 00:54:37.608636 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b145083ecbdcfe417353aa3fdc20f4ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b145083ecbdcfe417353aa3fdc20f4ef\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:54:37.749122 sudo[2098]: pam_unix(sudo:session): session closed for user root
May 13 00:54:38.276288 kubelet[2088]: I0513 00:54:38.276264 2088 apiserver.go:52] "Watching apiserver"
May 13 00:54:38.307596 kubelet[2088]: I0513 00:54:38.307579 2088 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:54:38.360628 kubelet[2088]: I0513 00:54:38.358952 2088 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:38.362706 kubelet[2088]: E0513 00:54:38.362681 2088 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:54:38.393822 kubelet[2088]: I0513 00:54:38.393786 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.393777186 podStartE2EDuration="1.393777186s" podCreationTimestamp="2025-05-13 00:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:54:38.393516735 +0000 UTC m=+1.202348758" watchObservedRunningTime="2025-05-13 00:54:38.393777186 +0000 UTC m=+1.202609206"
May 13 00:54:38.401270 kubelet[2088]: I0513 00:54:38.401241 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4012305440000001 podStartE2EDuration="1.401230544s" podCreationTimestamp="2025-05-13 00:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:54:38.397702921 +0000 UTC m=+1.206534941" watchObservedRunningTime="2025-05-13 00:54:38.401230544 +0000 UTC m=+1.210062558"
May 13 00:54:38.409806 kubelet[2088]: I0513 00:54:38.409763 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.409751422 podStartE2EDuration="1.409751422s" podCreationTimestamp="2025-05-13 00:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:54:38.402807787 +0000 UTC m=+1.211639807" watchObservedRunningTime="2025-05-13 00:54:38.409751422 +0000 UTC m=+1.218583437"
May 13 00:54:39.013381 sudo[1446]: pam_unix(sudo:session): session closed for user root
May 13 00:54:39.018749 sshd[1442]: pam_unix(sshd:session): session closed for user core
May 13 00:54:39.020569 systemd[1]: sshd@6-139.178.70.106:22-147.75.109.163:52630.service: Deactivated successfully.
May 13 00:54:39.021145 systemd[1]: session-7.scope: Deactivated successfully.
May 13 00:54:39.021231 systemd[1]: session-7.scope: Consumed 3.425s CPU time.
May 13 00:54:39.021936 systemd-logind[1240]: Session 7 logged out. Waiting for processes to exit.
May 13 00:54:39.022619 systemd-logind[1240]: Removed session 7.
May 13 00:54:41.771224 kubelet[2088]: I0513 00:54:41.771197 2088 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 00:54:41.771487 env[1248]: time="2025-05-13T00:54:41.771426461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 00:54:41.771693 kubelet[2088]: I0513 00:54:41.771682 2088 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 00:54:42.686315 systemd[1]: Created slice kubepods-burstable-pod39810f2a_680c_4d2b_855f_0328f1e5a87f.slice.
May 13 00:54:42.690618 systemd[1]: Created slice kubepods-besteffort-podecf8f95a_07c9_409d_82df_3c204ccd5427.slice.
May 13 00:54:42.741293 kubelet[2088]: I0513 00:54:42.741265 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ecf8f95a-07c9-409d-82df-3c204ccd5427-xtables-lock\") pod \"kube-proxy-ksjrp\" (UID: \"ecf8f95a-07c9-409d-82df-3c204ccd5427\") " pod="kube-system/kube-proxy-ksjrp"
May 13 00:54:42.741459 kubelet[2088]: I0513 00:54:42.741447 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-bpf-maps\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx"
May 13 00:54:42.741530 kubelet[2088]: I0513 00:54:42.741521 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-etc-cni-netd\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx"
May 13 00:54:42.741590 kubelet[2088]: I0513 00:54:42.741582 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-net\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx"
May 13 00:54:42.741659 kubelet[2088]: I0513 00:54:42.741651 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-kernel\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx"
May 13 00:54:42.741721 kubelet[2088]: I0513 00:54:42.741714 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-cgroup\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx"
May 13 00:54:42.741780 kubelet[2088]: I0513 00:54:42.741772 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-hubble-tls\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx"
May 13 00:54:42.741843 kubelet[2088]: I0513 00:54:42.741836 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-hostproc\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx"
May 13 00:54:42.741901
kubelet[2088]: I0513 00:54:42.741893 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ct9n\" (UniqueName: \"kubernetes.io/projected/ecf8f95a-07c9-409d-82df-3c204ccd5427-kube-api-access-8ct9n\") pod \"kube-proxy-ksjrp\" (UID: \"ecf8f95a-07c9-409d-82df-3c204ccd5427\") " pod="kube-system/kube-proxy-ksjrp" May 13 00:54:42.741960 kubelet[2088]: I0513 00:54:42.741953 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkbq7\" (UniqueName: \"kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-kube-api-access-fkbq7\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx" May 13 00:54:42.742017 kubelet[2088]: I0513 00:54:42.742009 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-xtables-lock\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx" May 13 00:54:42.742075 kubelet[2088]: I0513 00:54:42.742068 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39810f2a-680c-4d2b-855f-0328f1e5a87f-clustermesh-secrets\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx" May 13 00:54:42.742137 kubelet[2088]: I0513 00:54:42.742129 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ecf8f95a-07c9-409d-82df-3c204ccd5427-kube-proxy\") pod \"kube-proxy-ksjrp\" (UID: \"ecf8f95a-07c9-409d-82df-3c204ccd5427\") " pod="kube-system/kube-proxy-ksjrp" May 13 00:54:42.742197 kubelet[2088]: I0513 00:54:42.742189 2088 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-run\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx" May 13 00:54:42.742258 kubelet[2088]: I0513 00:54:42.742251 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cni-path\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx" May 13 00:54:42.742323 kubelet[2088]: I0513 00:54:42.742315 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-lib-modules\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx" May 13 00:54:42.742394 kubelet[2088]: I0513 00:54:42.742387 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-config-path\") pod \"cilium-k5ctx\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " pod="kube-system/cilium-k5ctx" May 13 00:54:42.742453 kubelet[2088]: I0513 00:54:42.742445 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ecf8f95a-07c9-409d-82df-3c204ccd5427-lib-modules\") pod \"kube-proxy-ksjrp\" (UID: \"ecf8f95a-07c9-409d-82df-3c204ccd5427\") " pod="kube-system/kube-proxy-ksjrp" May 13 00:54:42.774555 systemd[1]: Created slice kubepods-besteffort-pod69e2b727_2454_4e5b_8099_d8c5c48f0796.slice. 
May 13 00:54:42.843548 kubelet[2088]: I0513 00:54:42.843521 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2nw\" (UniqueName: \"kubernetes.io/projected/69e2b727-2454-4e5b-8099-d8c5c48f0796-kube-api-access-qq2nw\") pod \"cilium-operator-6c4d7847fc-wptmt\" (UID: \"69e2b727-2454-4e5b-8099-d8c5c48f0796\") " pod="kube-system/cilium-operator-6c4d7847fc-wptmt" May 13 00:54:42.843774 kubelet[2088]: I0513 00:54:42.843559 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69e2b727-2454-4e5b-8099-d8c5c48f0796-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-wptmt\" (UID: \"69e2b727-2454-4e5b-8099-d8c5c48f0796\") " pod="kube-system/cilium-operator-6c4d7847fc-wptmt" May 13 00:54:42.844095 kubelet[2088]: I0513 00:54:42.844080 2088 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 13 00:54:42.990497 env[1248]: time="2025-05-13T00:54:42.990248577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5ctx,Uid:39810f2a-680c-4d2b-855f-0328f1e5a87f,Namespace:kube-system,Attempt:0,}" May 13 00:54:42.998028 env[1248]: time="2025-05-13T00:54:42.998001516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksjrp,Uid:ecf8f95a-07c9-409d-82df-3c204ccd5427,Namespace:kube-system,Attempt:0,}" May 13 00:54:43.077548 env[1248]: time="2025-05-13T00:54:43.077515144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wptmt,Uid:69e2b727-2454-4e5b-8099-d8c5c48f0796,Namespace:kube-system,Attempt:0,}" May 13 00:54:43.101929 env[1248]: time="2025-05-13T00:54:43.101871431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:54:43.101929 env[1248]: time="2025-05-13T00:54:43.101908152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:54:43.102103 env[1248]: time="2025-05-13T00:54:43.102064808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:54:43.102256 env[1248]: time="2025-05-13T00:54:43.102222154Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4 pid=2171 runtime=io.containerd.runc.v2 May 13 00:54:43.123202 systemd[1]: Started cri-containerd-8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4.scope. May 13 00:54:43.144122 env[1248]: time="2025-05-13T00:54:43.144093488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5ctx,Uid:39810f2a-680c-4d2b-855f-0328f1e5a87f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\"" May 13 00:54:43.150285 env[1248]: time="2025-05-13T00:54:43.150240941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:54:43.150392 env[1248]: time="2025-05-13T00:54:43.150293353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:54:43.150392 env[1248]: time="2025-05-13T00:54:43.150310495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:54:43.150498 env[1248]: time="2025-05-13T00:54:43.150470354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f05cb7ef5b226c08115a3a7713ccb98c8079d9b388af820d8b89803d17872e01 pid=2213 runtime=io.containerd.runc.v2 May 13 00:54:43.156666 env[1248]: time="2025-05-13T00:54:43.156451455Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:54:43.160326 systemd[1]: Started cri-containerd-f05cb7ef5b226c08115a3a7713ccb98c8079d9b388af820d8b89803d17872e01.scope. May 13 00:54:43.174975 env[1248]: time="2025-05-13T00:54:43.174949791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ksjrp,Uid:ecf8f95a-07c9-409d-82df-3c204ccd5427,Namespace:kube-system,Attempt:0,} returns sandbox id \"f05cb7ef5b226c08115a3a7713ccb98c8079d9b388af820d8b89803d17872e01\"" May 13 00:54:43.177584 env[1248]: time="2025-05-13T00:54:43.177539413Z" level=info msg="CreateContainer within sandbox \"f05cb7ef5b226c08115a3a7713ccb98c8079d9b388af820d8b89803d17872e01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:54:43.306190 env[1248]: time="2025-05-13T00:54:43.306093223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:54:43.306287 env[1248]: time="2025-05-13T00:54:43.306121046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:54:43.306287 env[1248]: time="2025-05-13T00:54:43.306133362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:54:43.306662 env[1248]: time="2025-05-13T00:54:43.306639117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1 pid=2255 runtime=io.containerd.runc.v2 May 13 00:54:43.317045 systemd[1]: Started cri-containerd-4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1.scope. May 13 00:54:43.350527 env[1248]: time="2025-05-13T00:54:43.350492382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-wptmt,Uid:69e2b727-2454-4e5b-8099-d8c5c48f0796,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1\"" May 13 00:54:43.421742 env[1248]: time="2025-05-13T00:54:43.421709986Z" level=info msg="CreateContainer within sandbox \"f05cb7ef5b226c08115a3a7713ccb98c8079d9b388af820d8b89803d17872e01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8e836bdc1601f532e4322e8454781900777d3c2f643875adc99c7fe34239dbf\"" May 13 00:54:43.426800 env[1248]: time="2025-05-13T00:54:43.423633544Z" level=info msg="StartContainer for \"e8e836bdc1601f532e4322e8454781900777d3c2f643875adc99c7fe34239dbf\"" May 13 00:54:43.437126 systemd[1]: Started cri-containerd-e8e836bdc1601f532e4322e8454781900777d3c2f643875adc99c7fe34239dbf.scope. 
May 13 00:54:43.482448 env[1248]: time="2025-05-13T00:54:43.482404980Z" level=info msg="StartContainer for \"e8e836bdc1601f532e4322e8454781900777d3c2f643875adc99c7fe34239dbf\" returns successfully" May 13 00:54:45.133963 kubelet[2088]: I0513 00:54:45.133819 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ksjrp" podStartSLOduration=3.133804344 podStartE2EDuration="3.133804344s" podCreationTimestamp="2025-05-13 00:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:54:44.37837894 +0000 UTC m=+7.187210960" watchObservedRunningTime="2025-05-13 00:54:45.133804344 +0000 UTC m=+7.942636364" May 13 00:54:48.366321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414597376.mount: Deactivated successfully. May 13 00:54:52.421934 env[1248]: time="2025-05-13T00:54:52.421897579Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:52.424151 env[1248]: time="2025-05-13T00:54:52.424119139Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:52.425218 env[1248]: time="2025-05-13T00:54:52.425198281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:52.425862 env[1248]: time="2025-05-13T00:54:52.425832551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:54:52.440693 env[1248]: time="2025-05-13T00:54:52.440663706Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:54:52.442113 env[1248]: time="2025-05-13T00:54:52.442088132Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:54:52.454190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3782232328.mount: Deactivated successfully. May 13 00:54:52.459668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704148368.mount: Deactivated successfully. May 13 00:54:52.466241 env[1248]: time="2025-05-13T00:54:52.466199739Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\"" May 13 00:54:52.466887 env[1248]: time="2025-05-13T00:54:52.466873269Z" level=info msg="StartContainer for \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\"" May 13 00:54:52.486716 systemd[1]: Started cri-containerd-432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c.scope. May 13 00:54:52.525983 env[1248]: time="2025-05-13T00:54:52.525947100Z" level=info msg="StartContainer for \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\" returns successfully" May 13 00:54:52.535161 systemd[1]: cri-containerd-432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c.scope: Deactivated successfully. 
May 13 00:54:53.104195 env[1248]: time="2025-05-13T00:54:53.104159575Z" level=info msg="shim disconnected" id=432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c May 13 00:54:53.104404 env[1248]: time="2025-05-13T00:54:53.104386555Z" level=warning msg="cleaning up after shim disconnected" id=432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c namespace=k8s.io May 13 00:54:53.104471 env[1248]: time="2025-05-13T00:54:53.104457896Z" level=info msg="cleaning up dead shim" May 13 00:54:53.110307 env[1248]: time="2025-05-13T00:54:53.110279942Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:54:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2502 runtime=io.containerd.runc.v2\n" May 13 00:54:53.436303 env[1248]: time="2025-05-13T00:54:53.436226153Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:54:53.451286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c-rootfs.mount: Deactivated successfully. May 13 00:54:53.477405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3489731685.mount: Deactivated successfully. May 13 00:54:53.483999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146029402.mount: Deactivated successfully. 
May 13 00:54:53.493022 env[1248]: time="2025-05-13T00:54:53.492988425Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\"" May 13 00:54:53.494551 env[1248]: time="2025-05-13T00:54:53.494305434Z" level=info msg="StartContainer for \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\"" May 13 00:54:53.507715 systemd[1]: Started cri-containerd-13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82.scope. May 13 00:54:53.531986 env[1248]: time="2025-05-13T00:54:53.531948744Z" level=info msg="StartContainer for \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\" returns successfully" May 13 00:54:53.541994 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:54:53.542185 systemd[1]: Stopped systemd-sysctl.service. May 13 00:54:53.542635 systemd[1]: Stopping systemd-sysctl.service... May 13 00:54:53.544757 systemd[1]: Starting systemd-sysctl.service... May 13 00:54:53.550550 systemd[1]: cri-containerd-13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82.scope: Deactivated successfully. 
May 13 00:54:53.583970 env[1248]: time="2025-05-13T00:54:53.583925204Z" level=info msg="shim disconnected" id=13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82 May 13 00:54:53.588612 env[1248]: time="2025-05-13T00:54:53.584188498Z" level=warning msg="cleaning up after shim disconnected" id=13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82 namespace=k8s.io May 13 00:54:53.588612 env[1248]: time="2025-05-13T00:54:53.584202854Z" level=info msg="cleaning up dead shim" May 13 00:54:53.593480 env[1248]: time="2025-05-13T00:54:53.593451169Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:54:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n" May 13 00:54:53.613544 systemd[1]: Finished systemd-sysctl.service. May 13 00:54:54.444156 env[1248]: time="2025-05-13T00:54:54.444086954Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:54:54.472196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833196550.mount: Deactivated successfully. May 13 00:54:54.476155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370684844.mount: Deactivated successfully. May 13 00:54:54.486152 env[1248]: time="2025-05-13T00:54:54.486116565Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\"" May 13 00:54:54.486646 env[1248]: time="2025-05-13T00:54:54.486630840Z" level=info msg="StartContainer for \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\"" May 13 00:54:54.519308 systemd[1]: Started cri-containerd-1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66.scope. 
May 13 00:54:54.554746 env[1248]: time="2025-05-13T00:54:54.554718269Z" level=info msg="StartContainer for \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\" returns successfully" May 13 00:54:54.599661 systemd[1]: cri-containerd-1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66.scope: Deactivated successfully. May 13 00:54:54.607842 env[1248]: time="2025-05-13T00:54:54.607808012Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:54.608446 env[1248]: time="2025-05-13T00:54:54.608429498Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:54.610005 env[1248]: time="2025-05-13T00:54:54.609969805Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 00:54:54.610562 env[1248]: time="2025-05-13T00:54:54.610541050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:54:54.614493 env[1248]: time="2025-05-13T00:54:54.614462158Z" level=info msg="CreateContainer within sandbox \"4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:54:55.004167 env[1248]: time="2025-05-13T00:54:55.004127405Z" level=info msg="CreateContainer within sandbox 
\"4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\"" May 13 00:54:55.005658 env[1248]: time="2025-05-13T00:54:55.004764395Z" level=info msg="StartContainer for \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\"" May 13 00:54:55.008694 env[1248]: time="2025-05-13T00:54:55.008650341Z" level=info msg="shim disconnected" id=1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66 May 13 00:54:55.008694 env[1248]: time="2025-05-13T00:54:55.008687940Z" level=warning msg="cleaning up after shim disconnected" id=1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66 namespace=k8s.io May 13 00:54:55.008836 env[1248]: time="2025-05-13T00:54:55.008700420Z" level=info msg="cleaning up dead shim" May 13 00:54:55.018507 env[1248]: time="2025-05-13T00:54:55.018475637Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:54:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2630 runtime=io.containerd.runc.v2\n" May 13 00:54:55.021214 systemd[1]: Started cri-containerd-c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c.scope. May 13 00:54:55.048082 env[1248]: time="2025-05-13T00:54:55.048048943Z" level=info msg="StartContainer for \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\" returns successfully" May 13 00:54:55.440112 env[1248]: time="2025-05-13T00:54:55.440086365Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:54:55.499376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778228820.mount: Deactivated successfully. 
May 13 00:54:55.548532 env[1248]: time="2025-05-13T00:54:55.548501644Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\"" May 13 00:54:55.549153 env[1248]: time="2025-05-13T00:54:55.549134897Z" level=info msg="StartContainer for \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\"" May 13 00:54:55.561887 systemd[1]: Started cri-containerd-8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e.scope. May 13 00:54:55.625657 env[1248]: time="2025-05-13T00:54:55.625617532Z" level=info msg="StartContainer for \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\" returns successfully" May 13 00:54:55.631310 systemd[1]: cri-containerd-8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e.scope: Deactivated successfully. May 13 00:54:56.274199 kubelet[2088]: I0513 00:54:55.836275 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-wptmt" podStartSLOduration=2.547640408 podStartE2EDuration="13.807118714s" podCreationTimestamp="2025-05-13 00:54:42 +0000 UTC" firstStartedPulling="2025-05-13 00:54:43.351243306 +0000 UTC m=+6.160075314" lastFinishedPulling="2025-05-13 00:54:54.610721602 +0000 UTC m=+17.419553620" observedRunningTime="2025-05-13 00:54:55.698256547 +0000 UTC m=+18.507088566" watchObservedRunningTime="2025-05-13 00:54:55.807118714 +0000 UTC m=+18.615950728" May 13 00:54:56.292634 env[1248]: time="2025-05-13T00:54:56.292587008Z" level=info msg="shim disconnected" id=8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e May 13 00:54:56.292634 env[1248]: time="2025-05-13T00:54:56.292631870Z" level=warning msg="cleaning up after shim disconnected" id=8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e namespace=k8s.io 
May 13 00:54:56.292762 env[1248]: time="2025-05-13T00:54:56.292638086Z" level=info msg="cleaning up dead shim" May 13 00:54:56.305179 env[1248]: time="2025-05-13T00:54:56.305153359Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:54:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2717 runtime=io.containerd.runc.v2\n" May 13 00:54:56.445565 env[1248]: time="2025-05-13T00:54:56.445539532Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:54:56.450629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e-rootfs.mount: Deactivated successfully. May 13 00:54:56.512385 env[1248]: time="2025-05-13T00:54:56.512326612Z" level=info msg="CreateContainer within sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\"" May 13 00:54:56.512958 env[1248]: time="2025-05-13T00:54:56.512941129Z" level=info msg="StartContainer for \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\"" May 13 00:54:56.524244 systemd[1]: Started cri-containerd-42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7.scope. May 13 00:54:56.589831 env[1248]: time="2025-05-13T00:54:56.589804168Z" level=info msg="StartContainer for \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\" returns successfully" May 13 00:54:56.895840 kubelet[2088]: I0513 00:54:56.885619 2088 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 00:54:57.046998 systemd[1]: Created slice kubepods-burstable-podb8ba2e91_2118_4e9f_a677_104ab66d3a53.slice. 
May 13 00:54:57.050307 systemd[1]: Created slice kubepods-burstable-pod4fe3a399_b84a_477d_9c35_b22879554d2f.slice. May 13 00:54:57.096747 kubelet[2088]: I0513 00:54:57.096721 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48kjx\" (UniqueName: \"kubernetes.io/projected/b8ba2e91-2118-4e9f-a677-104ab66d3a53-kube-api-access-48kjx\") pod \"coredns-668d6bf9bc-htr9q\" (UID: \"b8ba2e91-2118-4e9f-a677-104ab66d3a53\") " pod="kube-system/coredns-668d6bf9bc-htr9q" May 13 00:54:57.096899 kubelet[2088]: I0513 00:54:57.096887 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fe3a399-b84a-477d-9c35-b22879554d2f-config-volume\") pod \"coredns-668d6bf9bc-qdx8d\" (UID: \"4fe3a399-b84a-477d-9c35-b22879554d2f\") " pod="kube-system/coredns-668d6bf9bc-qdx8d" May 13 00:54:57.096968 kubelet[2088]: I0513 00:54:57.096958 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5xr\" (UniqueName: \"kubernetes.io/projected/4fe3a399-b84a-477d-9c35-b22879554d2f-kube-api-access-8q5xr\") pod \"coredns-668d6bf9bc-qdx8d\" (UID: \"4fe3a399-b84a-477d-9c35-b22879554d2f\") " pod="kube-system/coredns-668d6bf9bc-qdx8d" May 13 00:54:57.097037 kubelet[2088]: I0513 00:54:57.097027 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8ba2e91-2118-4e9f-a677-104ab66d3a53-config-volume\") pod \"coredns-668d6bf9bc-htr9q\" (UID: \"b8ba2e91-2118-4e9f-a677-104ab66d3a53\") " pod="kube-system/coredns-668d6bf9bc-htr9q" May 13 00:54:57.356089 env[1248]: time="2025-05-13T00:54:57.355556895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-htr9q,Uid:b8ba2e91-2118-4e9f-a677-104ab66d3a53,Namespace:kube-system,Attempt:0,}" May 13 00:54:57.356089 
env[1248]: time="2025-05-13T00:54:57.355923350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qdx8d,Uid:4fe3a399-b84a-477d-9c35-b22879554d2f,Namespace:kube-system,Attempt:0,}" May 13 00:54:57.463547 kubelet[2088]: I0513 00:54:57.463509 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k5ctx" podStartSLOduration=6.168328271 podStartE2EDuration="15.463487263s" podCreationTimestamp="2025-05-13 00:54:42 +0000 UTC" firstStartedPulling="2025-05-13 00:54:43.144872612 +0000 UTC m=+5.953704621" lastFinishedPulling="2025-05-13 00:54:52.440031595 +0000 UTC m=+15.248863613" observedRunningTime="2025-05-13 00:54:57.463195068 +0000 UTC m=+20.272027083" watchObservedRunningTime="2025-05-13 00:54:57.463487263 +0000 UTC m=+20.272319278" May 13 00:54:58.628363 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! May 13 00:54:58.901359 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
May 13 00:55:00.515636 systemd-networkd[1061]: cilium_host: Link UP May 13 00:55:00.518134 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 00:55:00.518165 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 00:55:00.515731 systemd-networkd[1061]: cilium_net: Link UP May 13 00:55:00.517277 systemd-networkd[1061]: cilium_net: Gained carrier May 13 00:55:00.517394 systemd-networkd[1061]: cilium_host: Gained carrier May 13 00:55:00.617845 systemd-networkd[1061]: cilium_vxlan: Link UP May 13 00:55:00.617850 systemd-networkd[1061]: cilium_vxlan: Gained carrier May 13 00:55:00.950449 systemd-networkd[1061]: cilium_net: Gained IPv6LL May 13 00:55:01.246442 systemd-networkd[1061]: cilium_host: Gained IPv6LL May 13 00:55:01.495361 kernel: NET: Registered PF_ALG protocol family May 13 00:55:01.822429 systemd-networkd[1061]: cilium_vxlan: Gained IPv6LL May 13 00:55:02.094002 systemd-networkd[1061]: lxc_health: Link UP May 13 00:55:02.106632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:55:02.104273 systemd-networkd[1061]: lxc_health: Gained carrier May 13 00:55:02.452331 systemd-networkd[1061]: lxc1df037bcb6a1: Link UP May 13 00:55:02.457394 kernel: eth0: renamed from tmp7e1b2 May 13 00:55:02.463383 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1df037bcb6a1: link becomes ready May 13 00:55:02.463306 systemd-networkd[1061]: lxc1df037bcb6a1: Gained carrier May 13 00:55:02.472734 systemd-networkd[1061]: lxc876c50224c8f: Link UP May 13 00:55:02.481393 kernel: eth0: renamed from tmp9a47f May 13 00:55:02.484638 systemd-networkd[1061]: lxc876c50224c8f: Gained carrier May 13 00:55:02.487259 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc876c50224c8f: link becomes ready May 13 00:55:03.870467 systemd-networkd[1061]: lxc1df037bcb6a1: Gained IPv6LL May 13 00:55:04.062450 systemd-networkd[1061]: lxc_health: Gained IPv6LL May 13 00:55:04.126457 systemd-networkd[1061]: lxc876c50224c8f: Gained IPv6LL May 13 
00:55:05.077396 env[1248]: time="2025-05-13T00:55:05.077323657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:55:05.077396 env[1248]: time="2025-05-13T00:55:05.077396219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:55:05.077663 env[1248]: time="2025-05-13T00:55:05.077415725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:55:05.077663 env[1248]: time="2025-05-13T00:55:05.077529106Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a47fcd186e1d08ed17f452e1c48167d273dd652281c3d46411cb201782a8786 pid=3272 runtime=io.containerd.runc.v2 May 13 00:55:05.088461 env[1248]: time="2025-05-13T00:55:05.087505962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:55:05.088550 env[1248]: time="2025-05-13T00:55:05.088471343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:55:05.088550 env[1248]: time="2025-05-13T00:55:05.088492278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:55:05.088703 env[1248]: time="2025-05-13T00:55:05.088675900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e1b2bf1032f432765fba0fdecb6ea0df141ccf2b3071b8d16d5a403dd543a5f pid=3280 runtime=io.containerd.runc.v2 May 13 00:55:05.097671 systemd[1]: Started cri-containerd-9a47fcd186e1d08ed17f452e1c48167d273dd652281c3d46411cb201782a8786.scope. 
May 13 00:55:05.100360 systemd[1]: run-containerd-runc-k8s.io-9a47fcd186e1d08ed17f452e1c48167d273dd652281c3d46411cb201782a8786-runc.MglOIk.mount: Deactivated successfully. May 13 00:55:05.114429 systemd[1]: Started cri-containerd-7e1b2bf1032f432765fba0fdecb6ea0df141ccf2b3071b8d16d5a403dd543a5f.scope. May 13 00:55:05.128192 systemd-resolved[1207]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:55:05.137020 systemd-resolved[1207]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:55:05.159776 env[1248]: time="2025-05-13T00:55:05.159745483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qdx8d,Uid:4fe3a399-b84a-477d-9c35-b22879554d2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e1b2bf1032f432765fba0fdecb6ea0df141ccf2b3071b8d16d5a403dd543a5f\"" May 13 00:55:05.161655 env[1248]: time="2025-05-13T00:55:05.161612479Z" level=info msg="CreateContainer within sandbox \"7e1b2bf1032f432765fba0fdecb6ea0df141ccf2b3071b8d16d5a403dd543a5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:55:05.177535 env[1248]: time="2025-05-13T00:55:05.176765291Z" level=info msg="CreateContainer within sandbox \"7e1b2bf1032f432765fba0fdecb6ea0df141ccf2b3071b8d16d5a403dd543a5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f45e8a3b340352b7ae67e8610ef3616b51443e32fd4b77136a256d8df6c5ae3f\"" May 13 00:55:05.177535 env[1248]: time="2025-05-13T00:55:05.177238678Z" level=info msg="StartContainer for \"f45e8a3b340352b7ae67e8610ef3616b51443e32fd4b77136a256d8df6c5ae3f\"" May 13 00:55:05.187443 env[1248]: time="2025-05-13T00:55:05.187417454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-htr9q,Uid:b8ba2e91-2118-4e9f-a677-104ab66d3a53,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a47fcd186e1d08ed17f452e1c48167d273dd652281c3d46411cb201782a8786\"" May 13 
00:55:05.190843 env[1248]: time="2025-05-13T00:55:05.190814181Z" level=info msg="CreateContainer within sandbox \"9a47fcd186e1d08ed17f452e1c48167d273dd652281c3d46411cb201782a8786\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:55:05.196103 env[1248]: time="2025-05-13T00:55:05.196070946Z" level=info msg="CreateContainer within sandbox \"9a47fcd186e1d08ed17f452e1c48167d273dd652281c3d46411cb201782a8786\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b667d19f90a534512365ba99c0e57532c54a40773dc77ef60d753f5e002833a8\"" May 13 00:55:05.196448 env[1248]: time="2025-05-13T00:55:05.196435082Z" level=info msg="StartContainer for \"b667d19f90a534512365ba99c0e57532c54a40773dc77ef60d753f5e002833a8\"" May 13 00:55:05.201292 systemd[1]: Started cri-containerd-f45e8a3b340352b7ae67e8610ef3616b51443e32fd4b77136a256d8df6c5ae3f.scope. May 13 00:55:05.221549 systemd[1]: Started cri-containerd-b667d19f90a534512365ba99c0e57532c54a40773dc77ef60d753f5e002833a8.scope. 
May 13 00:55:05.242426 env[1248]: time="2025-05-13T00:55:05.242402186Z" level=info msg="StartContainer for \"f45e8a3b340352b7ae67e8610ef3616b51443e32fd4b77136a256d8df6c5ae3f\" returns successfully" May 13 00:55:05.252860 env[1248]: time="2025-05-13T00:55:05.252831479Z" level=info msg="StartContainer for \"b667d19f90a534512365ba99c0e57532c54a40773dc77ef60d753f5e002833a8\" returns successfully" May 13 00:55:05.469829 kubelet[2088]: I0513 00:55:05.469756 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qdx8d" podStartSLOduration=23.469743286 podStartE2EDuration="23.469743286s" podCreationTimestamp="2025-05-13 00:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:55:05.469621073 +0000 UTC m=+28.278453088" watchObservedRunningTime="2025-05-13 00:55:05.469743286 +0000 UTC m=+28.278575298" May 13 00:55:05.477855 kubelet[2088]: I0513 00:55:05.477824 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-htr9q" podStartSLOduration=23.477801065 podStartE2EDuration="23.477801065s" podCreationTimestamp="2025-05-13 00:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:55:05.477253236 +0000 UTC m=+28.286085256" watchObservedRunningTime="2025-05-13 00:55:05.477801065 +0000 UTC m=+28.286633077" May 13 00:55:06.087039 systemd[1]: run-containerd-runc-k8s.io-7e1b2bf1032f432765fba0fdecb6ea0df141ccf2b3071b8d16d5a403dd543a5f-runc.dmLmwz.mount: Deactivated successfully. May 13 00:55:07.581045 kubelet[2088]: I0513 00:55:07.581013 2088 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:55:41.883409 systemd[1]: Started sshd@7-139.178.70.106:22-111.53.52.116:40626.service. 
May 13 00:55:45.729452 sshd[3427]: Invalid user postgres from 111.53.52.116 port 40626 May 13 00:55:45.754194 sshd[3427]: pam_faillock(sshd:auth): User unknown May 13 00:55:45.760573 sshd[3427]: pam_unix(sshd:auth): check pass; user unknown May 13 00:55:45.760617 sshd[3427]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.53.52.116 May 13 00:55:45.760854 sshd[3427]: pam_faillock(sshd:auth): User unknown May 13 00:55:47.854992 sshd[3427]: Failed password for invalid user postgres from 111.53.52.116 port 40626 ssh2 May 13 00:55:48.550536 sshd[3432]: pam_faillock(sshd:auth): User unknown May 13 00:55:48.558707 sshd[3427]: Postponed keyboard-interactive for invalid user postgres from 111.53.52.116 port 40626 ssh2 [preauth] May 13 00:55:49.128732 sshd[3432]: pam_unix(sshd:auth): check pass; user unknown May 13 00:55:49.129097 sshd[3432]: pam_faillock(sshd:auth): User unknown May 13 00:55:51.303432 sshd[3427]: PAM: Permission denied for illegal user postgres from 111.53.52.116 May 13 00:55:51.303759 sshd[3427]: Failed keyboard-interactive/pam for invalid user postgres from 111.53.52.116 port 40626 ssh2 May 13 00:55:51.569080 systemd[1]: Started sshd@8-139.178.70.106:22-147.75.109.163:38408.service. May 13 00:55:51.601312 sshd[3434]: Accepted publickey for core from 147.75.109.163 port 38408 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:55:51.602801 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:55:51.606273 systemd[1]: Started session-8.scope. May 13 00:55:51.606619 systemd-logind[1240]: New session 8 of user core. May 13 00:55:51.911752 sshd[3427]: Connection closed by invalid user postgres 111.53.52.116 port 40626 [preauth] May 13 00:55:51.912565 systemd[1]: sshd@7-139.178.70.106:22-111.53.52.116:40626.service: Deactivated successfully. 
May 13 00:55:51.997513 sshd[3434]: pam_unix(sshd:session): session closed for user core May 13 00:55:51.999248 systemd[1]: sshd@8-139.178.70.106:22-147.75.109.163:38408.service: Deactivated successfully. May 13 00:55:51.999459 systemd-logind[1240]: Session 8 logged out. Waiting for processes to exit. May 13 00:55:51.999686 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:55:52.000161 systemd-logind[1240]: Removed session 8. May 13 00:55:57.001733 systemd[1]: Started sshd@9-139.178.70.106:22-147.75.109.163:38410.service. May 13 00:55:57.045623 sshd[3447]: Accepted publickey for core from 147.75.109.163 port 38410 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:55:57.046646 sshd[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:55:57.050110 systemd-logind[1240]: New session 9 of user core. May 13 00:55:57.050702 systemd[1]: Started session-9.scope. May 13 00:55:57.187249 sshd[3447]: pam_unix(sshd:session): session closed for user core May 13 00:55:57.188935 systemd[1]: sshd@9-139.178.70.106:22-147.75.109.163:38410.service: Deactivated successfully. May 13 00:55:57.189398 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:55:57.189875 systemd-logind[1240]: Session 9 logged out. Waiting for processes to exit. May 13 00:55:57.190404 systemd-logind[1240]: Removed session 9. May 13 00:56:02.190224 systemd[1]: Started sshd@10-139.178.70.106:22-147.75.109.163:40234.service. May 13 00:56:02.222697 sshd[3461]: Accepted publickey for core from 147.75.109.163 port 40234 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:02.224034 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:02.227370 systemd[1]: Started session-10.scope. May 13 00:56:02.227674 systemd-logind[1240]: New session 10 of user core. 
May 13 00:56:02.368392 sshd[3461]: pam_unix(sshd:session): session closed for user core May 13 00:56:02.370501 systemd-logind[1240]: Session 10 logged out. Waiting for processes to exit. May 13 00:56:02.371523 systemd[1]: sshd@10-139.178.70.106:22-147.75.109.163:40234.service: Deactivated successfully. May 13 00:56:02.371934 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:56:02.372947 systemd-logind[1240]: Removed session 10. May 13 00:56:07.371318 systemd[1]: Started sshd@11-139.178.70.106:22-147.75.109.163:40238.service. May 13 00:56:07.404292 sshd[3477]: Accepted publickey for core from 147.75.109.163 port 40238 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:07.405167 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:07.408171 systemd[1]: Started session-11.scope. May 13 00:56:07.408586 systemd-logind[1240]: New session 11 of user core. May 13 00:56:07.505146 sshd[3477]: pam_unix(sshd:session): session closed for user core May 13 00:56:07.506628 systemd-logind[1240]: Session 11 logged out. Waiting for processes to exit. May 13 00:56:07.506830 systemd[1]: sshd@11-139.178.70.106:22-147.75.109.163:40238.service: Deactivated successfully. May 13 00:56:07.507263 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:56:07.507976 systemd-logind[1240]: Removed session 11. May 13 00:56:12.509760 systemd[1]: Started sshd@12-139.178.70.106:22-147.75.109.163:49978.service. May 13 00:56:12.549565 sshd[3491]: Accepted publickey for core from 147.75.109.163 port 49978 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:12.551084 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:12.555204 systemd[1]: Started session-12.scope. May 13 00:56:12.555576 systemd-logind[1240]: New session 12 of user core. May 13 00:56:12.648118 systemd[1]: Started sshd@13-139.178.70.106:22-147.75.109.163:49992.service. 
May 13 00:56:12.648714 sshd[3491]: pam_unix(sshd:session): session closed for user core May 13 00:56:12.650244 systemd[1]: sshd@12-139.178.70.106:22-147.75.109.163:49978.service: Deactivated successfully. May 13 00:56:12.650651 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:56:12.651172 systemd-logind[1240]: Session 12 logged out. Waiting for processes to exit. May 13 00:56:12.651696 systemd-logind[1240]: Removed session 12. May 13 00:56:12.682259 sshd[3502]: Accepted publickey for core from 147.75.109.163 port 49992 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:12.683028 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:12.685951 systemd[1]: Started session-13.scope. May 13 00:56:12.686219 systemd-logind[1240]: New session 13 of user core. May 13 00:56:12.838569 systemd[1]: Started sshd@14-139.178.70.106:22-147.75.109.163:50008.service. May 13 00:56:12.842188 sshd[3502]: pam_unix(sshd:session): session closed for user core May 13 00:56:12.844237 systemd[1]: sshd@13-139.178.70.106:22-147.75.109.163:49992.service: Deactivated successfully. May 13 00:56:12.844762 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:56:12.845212 systemd-logind[1240]: Session 13 logged out. Waiting for processes to exit. May 13 00:56:12.845735 systemd-logind[1240]: Removed session 13. May 13 00:56:12.883414 sshd[3512]: Accepted publickey for core from 147.75.109.163 port 50008 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:12.884450 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:12.889861 systemd[1]: Started session-14.scope. May 13 00:56:12.890422 systemd-logind[1240]: New session 14 of user core. May 13 00:56:12.990829 sshd[3512]: pam_unix(sshd:session): session closed for user core May 13 00:56:12.992509 systemd[1]: sshd@14-139.178.70.106:22-147.75.109.163:50008.service: Deactivated successfully. 
May 13 00:56:12.992910 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:56:12.993193 systemd-logind[1240]: Session 14 logged out. Waiting for processes to exit. May 13 00:56:12.993638 systemd-logind[1240]: Removed session 14. May 13 00:56:17.994347 systemd[1]: Started sshd@15-139.178.70.106:22-147.75.109.163:54014.service. May 13 00:56:18.027265 sshd[3527]: Accepted publickey for core from 147.75.109.163 port 54014 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:18.028299 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:18.030873 systemd-logind[1240]: New session 15 of user core. May 13 00:56:18.031400 systemd[1]: Started session-15.scope. May 13 00:56:18.119647 sshd[3527]: pam_unix(sshd:session): session closed for user core May 13 00:56:18.121322 systemd[1]: sshd@15-139.178.70.106:22-147.75.109.163:54014.service: Deactivated successfully. May 13 00:56:18.121777 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:56:18.122139 systemd-logind[1240]: Session 15 logged out. Waiting for processes to exit. May 13 00:56:18.122726 systemd-logind[1240]: Removed session 15. May 13 00:56:23.124009 systemd[1]: Started sshd@16-139.178.70.106:22-147.75.109.163:54022.service. May 13 00:56:23.161650 sshd[3539]: Accepted publickey for core from 147.75.109.163 port 54022 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:23.162527 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:23.165372 systemd-logind[1240]: New session 16 of user core. May 13 00:56:23.165878 systemd[1]: Started session-16.scope. May 13 00:56:23.265575 sshd[3539]: pam_unix(sshd:session): session closed for user core May 13 00:56:23.268129 systemd[1]: Started sshd@17-139.178.70.106:22-147.75.109.163:54036.service. May 13 00:56:23.269890 systemd[1]: sshd@16-139.178.70.106:22-147.75.109.163:54022.service: Deactivated successfully. 
May 13 00:56:23.270307 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:56:23.270717 systemd-logind[1240]: Session 16 logged out. Waiting for processes to exit. May 13 00:56:23.271313 systemd-logind[1240]: Removed session 16. May 13 00:56:23.302901 sshd[3549]: Accepted publickey for core from 147.75.109.163 port 54036 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:23.303923 sshd[3549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:23.306717 systemd-logind[1240]: New session 17 of user core. May 13 00:56:23.307753 systemd[1]: Started session-17.scope. May 13 00:56:23.903044 sshd[3549]: pam_unix(sshd:session): session closed for user core May 13 00:56:23.905662 systemd[1]: Started sshd@18-139.178.70.106:22-147.75.109.163:54042.service. May 13 00:56:23.906704 systemd[1]: sshd@17-139.178.70.106:22-147.75.109.163:54036.service: Deactivated successfully. May 13 00:56:23.907085 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:56:23.907614 systemd-logind[1240]: Session 17 logged out. Waiting for processes to exit. May 13 00:56:23.908112 systemd-logind[1240]: Removed session 17. May 13 00:56:23.956977 sshd[3559]: Accepted publickey for core from 147.75.109.163 port 54042 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:23.957870 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:23.960735 systemd-logind[1240]: New session 18 of user core. May 13 00:56:23.961276 systemd[1]: Started session-18.scope. May 13 00:56:24.881849 sshd[3559]: pam_unix(sshd:session): session closed for user core May 13 00:56:24.883958 systemd[1]: Started sshd@19-139.178.70.106:22-147.75.109.163:54046.service. May 13 00:56:24.885608 systemd[1]: sshd@18-139.178.70.106:22-147.75.109.163:54042.service: Deactivated successfully. May 13 00:56:24.886331 systemd-logind[1240]: Session 18 logged out. Waiting for processes to exit. 
May 13 00:56:24.886387 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:56:24.887101 systemd-logind[1240]: Removed session 18. May 13 00:56:24.928622 sshd[3575]: Accepted publickey for core from 147.75.109.163 port 54046 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:24.929483 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:24.932041 systemd-logind[1240]: New session 19 of user core. May 13 00:56:24.932534 systemd[1]: Started session-19.scope. May 13 00:56:25.129425 sshd[3575]: pam_unix(sshd:session): session closed for user core May 13 00:56:25.131990 systemd[1]: Started sshd@20-139.178.70.106:22-147.75.109.163:54054.service. May 13 00:56:25.138152 systemd[1]: sshd@19-139.178.70.106:22-147.75.109.163:54046.service: Deactivated successfully. May 13 00:56:25.138668 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:56:25.139779 systemd-logind[1240]: Session 19 logged out. Waiting for processes to exit. May 13 00:56:25.142532 systemd-logind[1240]: Removed session 19. May 13 00:56:25.167343 sshd[3587]: Accepted publickey for core from 147.75.109.163 port 54054 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:25.168213 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:25.170522 systemd-logind[1240]: New session 20 of user core. May 13 00:56:25.171456 systemd[1]: Started session-20.scope. May 13 00:56:25.267404 sshd[3587]: pam_unix(sshd:session): session closed for user core May 13 00:56:25.269292 systemd[1]: sshd@20-139.178.70.106:22-147.75.109.163:54054.service: Deactivated successfully. May 13 00:56:25.269779 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:56:25.270357 systemd-logind[1240]: Session 20 logged out. Waiting for processes to exit. May 13 00:56:25.270946 systemd-logind[1240]: Removed session 20. 
May 13 00:56:30.270322 systemd[1]: Started sshd@21-139.178.70.106:22-147.75.109.163:55842.service. May 13 00:56:30.303291 sshd[3602]: Accepted publickey for core from 147.75.109.163 port 55842 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:30.304219 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:30.307549 systemd[1]: Started session-21.scope. May 13 00:56:30.307765 systemd-logind[1240]: New session 21 of user core. May 13 00:56:30.414505 sshd[3602]: pam_unix(sshd:session): session closed for user core May 13 00:56:30.416402 systemd[1]: sshd@21-139.178.70.106:22-147.75.109.163:55842.service: Deactivated successfully. May 13 00:56:30.416877 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:56:30.417178 systemd-logind[1240]: Session 21 logged out. Waiting for processes to exit. May 13 00:56:30.417645 systemd-logind[1240]: Removed session 21. May 13 00:56:35.418159 systemd[1]: Started sshd@22-139.178.70.106:22-147.75.109.163:55844.service. May 13 00:56:35.461853 sshd[3614]: Accepted publickey for core from 147.75.109.163 port 55844 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:35.463460 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:35.466792 systemd[1]: Started session-22.scope. May 13 00:56:35.467379 systemd-logind[1240]: New session 22 of user core. May 13 00:56:35.567205 sshd[3614]: pam_unix(sshd:session): session closed for user core May 13 00:56:35.570569 systemd[1]: sshd@22-139.178.70.106:22-147.75.109.163:55844.service: Deactivated successfully. May 13 00:56:35.571043 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:56:35.571677 systemd-logind[1240]: Session 22 logged out. Waiting for processes to exit. May 13 00:56:35.572198 systemd-logind[1240]: Removed session 22. May 13 00:56:40.571163 systemd[1]: Started sshd@23-139.178.70.106:22-147.75.109.163:57410.service. 
May 13 00:56:40.604380 sshd[3627]: Accepted publickey for core from 147.75.109.163 port 57410 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:40.605566 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:40.608725 systemd[1]: Started session-23.scope. May 13 00:56:40.609372 systemd-logind[1240]: New session 23 of user core. May 13 00:56:40.727597 sshd[3627]: pam_unix(sshd:session): session closed for user core May 13 00:56:40.730444 systemd[1]: sshd@23-139.178.70.106:22-147.75.109.163:57410.service: Deactivated successfully. May 13 00:56:40.730984 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:56:40.731702 systemd-logind[1240]: Session 23 logged out. Waiting for processes to exit. May 13 00:56:40.732310 systemd-logind[1240]: Removed session 23. May 13 00:56:45.731627 systemd[1]: Started sshd@24-139.178.70.106:22-147.75.109.163:57416.service. May 13 00:56:45.764098 sshd[3641]: Accepted publickey for core from 147.75.109.163 port 57416 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:45.764921 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:45.768018 systemd[1]: Started session-24.scope. May 13 00:56:45.768811 systemd-logind[1240]: New session 24 of user core. May 13 00:56:45.908839 sshd[3641]: pam_unix(sshd:session): session closed for user core May 13 00:56:45.911828 systemd[1]: Started sshd@25-139.178.70.106:22-147.75.109.163:57430.service. May 13 00:56:45.913081 systemd[1]: sshd@24-139.178.70.106:22-147.75.109.163:57416.service: Deactivated successfully. May 13 00:56:45.913578 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:56:45.914262 systemd-logind[1240]: Session 24 logged out. Waiting for processes to exit. May 13 00:56:45.914893 systemd-logind[1240]: Removed session 24. 
May 13 00:56:45.947459 sshd[3651]: Accepted publickey for core from 147.75.109.163 port 57430 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:45.948650 sshd[3651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:45.952536 systemd[1]: Started session-25.scope. May 13 00:56:45.954171 systemd-logind[1240]: New session 25 of user core. May 13 00:56:47.875443 systemd[1]: run-containerd-runc-k8s.io-42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7-runc.XbMXcA.mount: Deactivated successfully. May 13 00:56:47.894083 env[1248]: time="2025-05-13T00:56:47.894039148Z" level=info msg="StopContainer for \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\" with timeout 30 (s)" May 13 00:56:47.894834 env[1248]: time="2025-05-13T00:56:47.894820267Z" level=info msg="Stop container \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\" with signal terminated" May 13 00:56:47.896718 env[1248]: time="2025-05-13T00:56:47.896681571Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:56:47.901054 env[1248]: time="2025-05-13T00:56:47.901033367Z" level=info msg="StopContainer for \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\" with timeout 2 (s)" May 13 00:56:47.901314 env[1248]: time="2025-05-13T00:56:47.901295834Z" level=info msg="Stop container \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\" with signal terminated" May 13 00:56:47.906519 systemd-networkd[1061]: lxc_health: Link DOWN May 13 00:56:47.906524 systemd-networkd[1061]: lxc_health: Lost carrier May 13 00:56:47.907835 systemd[1]: cri-containerd-c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c.scope: Deactivated successfully. 
May 13 00:56:47.929092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c-rootfs.mount: Deactivated successfully. May 13 00:56:47.931597 systemd[1]: cri-containerd-42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7.scope: Deactivated successfully. May 13 00:56:47.931778 systemd[1]: cri-containerd-42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7.scope: Consumed 4.493s CPU time. May 13 00:56:47.932937 env[1248]: time="2025-05-13T00:56:47.932904951Z" level=info msg="shim disconnected" id=c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c May 13 00:56:47.932937 env[1248]: time="2025-05-13T00:56:47.932934992Z" level=warning msg="cleaning up after shim disconnected" id=c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c namespace=k8s.io May 13 00:56:47.933030 env[1248]: time="2025-05-13T00:56:47.932940841Z" level=info msg="cleaning up dead shim" May 13 00:56:47.940559 env[1248]: time="2025-05-13T00:56:47.940528754Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3707 runtime=io.containerd.runc.v2\n" May 13 00:56:47.941442 env[1248]: time="2025-05-13T00:56:47.941421839Z" level=info msg="StopContainer for \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\" returns successfully" May 13 00:56:47.946091 env[1248]: time="2025-05-13T00:56:47.946069997Z" level=info msg="StopPodSandbox for \"4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1\"" May 13 00:56:47.946237 env[1248]: time="2025-05-13T00:56:47.946224216Z" level=info msg="Container to stop \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:56:47.947463 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7-rootfs.mount: Deactivated successfully. May 13 00:56:47.947527 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1-shm.mount: Deactivated successfully. May 13 00:56:47.953539 systemd[1]: cri-containerd-4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1.scope: Deactivated successfully. May 13 00:56:47.975608 env[1248]: time="2025-05-13T00:56:47.975572302Z" level=info msg="shim disconnected" id=42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7 May 13 00:56:47.975608 env[1248]: time="2025-05-13T00:56:47.975600103Z" level=warning msg="cleaning up after shim disconnected" id=42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7 namespace=k8s.io May 13 00:56:47.975608 env[1248]: time="2025-05-13T00:56:47.975606575Z" level=info msg="cleaning up dead shim" May 13 00:56:47.975771 env[1248]: time="2025-05-13T00:56:47.975697683Z" level=info msg="shim disconnected" id=4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1 May 13 00:56:47.975771 env[1248]: time="2025-05-13T00:56:47.975711896Z" level=warning msg="cleaning up after shim disconnected" id=4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1 namespace=k8s.io May 13 00:56:47.975771 env[1248]: time="2025-05-13T00:56:47.975718806Z" level=info msg="cleaning up dead shim" May 13 00:56:47.981648 env[1248]: time="2025-05-13T00:56:47.981617087Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3751 runtime=io.containerd.runc.v2\n" May 13 00:56:47.983053 env[1248]: time="2025-05-13T00:56:47.983034680Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3750 runtime=io.containerd.runc.v2\n" May 13 00:56:47.985819 env[1248]: 
time="2025-05-13T00:56:47.985800381Z" level=info msg="TearDown network for sandbox \"4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1\" successfully" May 13 00:56:47.985819 env[1248]: time="2025-05-13T00:56:47.985816874Z" level=info msg="StopPodSandbox for \"4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1\" returns successfully" May 13 00:56:47.991443 env[1248]: time="2025-05-13T00:56:47.990428043Z" level=info msg="StopContainer for \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\" returns successfully" May 13 00:56:47.997796 env[1248]: time="2025-05-13T00:56:47.991726295Z" level=info msg="StopPodSandbox for \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\"" May 13 00:56:47.997796 env[1248]: time="2025-05-13T00:56:47.991768904Z" level=info msg="Container to stop \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:56:47.997796 env[1248]: time="2025-05-13T00:56:47.991779919Z" level=info msg="Container to stop \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:56:47.997796 env[1248]: time="2025-05-13T00:56:47.991786276Z" level=info msg="Container to stop \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:56:47.997796 env[1248]: time="2025-05-13T00:56:47.991792613Z" level=info msg="Container to stop \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:56:47.997796 env[1248]: time="2025-05-13T00:56:47.991798363Z" level=info msg="Container to stop \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 
00:56:48.001792 systemd[1]: cri-containerd-8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4.scope: Deactivated successfully. May 13 00:56:48.019211 env[1248]: time="2025-05-13T00:56:48.019168056Z" level=info msg="shim disconnected" id=8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4 May 13 00:56:48.019211 env[1248]: time="2025-05-13T00:56:48.019207386Z" level=warning msg="cleaning up after shim disconnected" id=8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4 namespace=k8s.io May 13 00:56:48.019335 env[1248]: time="2025-05-13T00:56:48.019216545Z" level=info msg="cleaning up dead shim" May 13 00:56:48.023765 env[1248]: time="2025-05-13T00:56:48.023742645Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3795 runtime=io.containerd.runc.v2\n" May 13 00:56:48.024712 env[1248]: time="2025-05-13T00:56:48.024695297Z" level=info msg="TearDown network for sandbox \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" successfully" May 13 00:56:48.024778 env[1248]: time="2025-05-13T00:56:48.024766421Z" level=info msg="StopPodSandbox for \"8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4\" returns successfully" May 13 00:56:48.116429 kubelet[2088]: I0513 00:56:48.116356 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cni-path\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.116727 kubelet[2088]: I0513 00:56:48.116715 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq2nw\" (UniqueName: \"kubernetes.io/projected/69e2b727-2454-4e5b-8099-d8c5c48f0796-kube-api-access-qq2nw\") pod \"69e2b727-2454-4e5b-8099-d8c5c48f0796\" (UID: \"69e2b727-2454-4e5b-8099-d8c5c48f0796\") " May 13 
00:56:48.116794 kubelet[2088]: I0513 00:56:48.116785 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-bpf-maps\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.116854 kubelet[2088]: I0513 00:56:48.116838 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-etc-cni-netd\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.116916 kubelet[2088]: I0513 00:56:48.116908 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-run\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.116978 kubelet[2088]: I0513 00:56:48.116970 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-kernel\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117077 kubelet[2088]: I0513 00:56:48.117025 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-hubble-tls\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117128 kubelet[2088]: I0513 00:56:48.117120 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-xtables-lock\") pod 
\"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117195 kubelet[2088]: I0513 00:56:48.117187 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39810f2a-680c-4d2b-855f-0328f1e5a87f-clustermesh-secrets\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117443 kubelet[2088]: I0513 00:56:48.117303 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-hostproc\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117443 kubelet[2088]: I0513 00:56:48.117317 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69e2b727-2454-4e5b-8099-d8c5c48f0796-cilium-config-path\") pod \"69e2b727-2454-4e5b-8099-d8c5c48f0796\" (UID: \"69e2b727-2454-4e5b-8099-d8c5c48f0796\") " May 13 00:56:48.117443 kubelet[2088]: I0513 00:56:48.117330 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkbq7\" (UniqueName: \"kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-kube-api-access-fkbq7\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117443 kubelet[2088]: I0513 00:56:48.117357 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-config-path\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117443 kubelet[2088]: I0513 00:56:48.117369 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-lib-modules\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117443 kubelet[2088]: I0513 00:56:48.117378 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-net\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.117722 kubelet[2088]: I0513 00:56:48.117385 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-cgroup\") pod \"39810f2a-680c-4d2b-855f-0328f1e5a87f\" (UID: \"39810f2a-680c-4d2b-855f-0328f1e5a87f\") " May 13 00:56:48.122379 kubelet[2088]: I0513 00:56:48.121426 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cni-path" (OuterVolumeSpecName: "cni-path") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.122426 kubelet[2088]: I0513 00:56:48.122388 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.123143 kubelet[2088]: I0513 00:56:48.120879 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.129002 kubelet[2088]: I0513 00:56:48.128946 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39810f2a-680c-4d2b-855f-0328f1e5a87f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:56:48.129098 kubelet[2088]: I0513 00:56:48.129087 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-hostproc" (OuterVolumeSpecName: "hostproc") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.130855 kubelet[2088]: I0513 00:56:48.130843 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e2b727-2454-4e5b-8099-d8c5c48f0796-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69e2b727-2454-4e5b-8099-d8c5c48f0796" (UID: "69e2b727-2454-4e5b-8099-d8c5c48f0796"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:56:48.130990 kubelet[2088]: I0513 00:56:48.130969 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e2b727-2454-4e5b-8099-d8c5c48f0796-kube-api-access-qq2nw" (OuterVolumeSpecName: "kube-api-access-qq2nw") pod "69e2b727-2454-4e5b-8099-d8c5c48f0796" (UID: "69e2b727-2454-4e5b-8099-d8c5c48f0796"). InnerVolumeSpecName "kube-api-access-qq2nw". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:56:48.131048 kubelet[2088]: I0513 00:56:48.131000 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.131048 kubelet[2088]: I0513 00:56:48.131020 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.131048 kubelet[2088]: I0513 00:56:48.131032 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.131110 kubelet[2088]: I0513 00:56:48.131046 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.133567 kubelet[2088]: I0513 00:56:48.133544 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:56:48.134052 kubelet[2088]: I0513 00:56:48.134034 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.134627 kubelet[2088]: I0513 00:56:48.134613 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-kube-api-access-fkbq7" (OuterVolumeSpecName: "kube-api-access-fkbq7") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "kube-api-access-fkbq7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:56:48.134691 kubelet[2088]: I0513 00:56:48.134681 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:48.135794 kubelet[2088]: I0513 00:56:48.135775 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "39810f2a-680c-4d2b-855f-0328f1e5a87f" (UID: "39810f2a-680c-4d2b-855f-0328f1e5a87f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:56:48.218127 kubelet[2088]: I0513 00:56:48.218097 2088 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218127 kubelet[2088]: I0513 00:56:48.218121 2088 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218127 kubelet[2088]: I0513 00:56:48.218128 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218127 kubelet[2088]: I0513 00:56:48.218133 2088 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cni-path\") on node \"localhost\" DevicePath 
\"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218140 2088 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qq2nw\" (UniqueName: \"kubernetes.io/projected/69e2b727-2454-4e5b-8099-d8c5c48f0796-kube-api-access-qq2nw\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218145 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218149 2088 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218154 2088 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218158 2088 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/39810f2a-680c-4d2b-855f-0328f1e5a87f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218162 2088 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218166 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69e2b727-2454-4e5b-8099-d8c5c48f0796-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218309 kubelet[2088]: I0513 00:56:48.218171 2088 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fkbq7\" (UniqueName: \"kubernetes.io/projected/39810f2a-680c-4d2b-855f-0328f1e5a87f-kube-api-access-fkbq7\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218481 kubelet[2088]: I0513 00:56:48.218175 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218481 kubelet[2088]: I0513 00:56:48.218179 2088 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218481 kubelet[2088]: I0513 00:56:48.218185 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.218481 kubelet[2088]: I0513 00:56:48.218189 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/39810f2a-680c-4d2b-855f-0328f1e5a87f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:56:48.611603 kubelet[2088]: I0513 00:56:48.611574 2088 scope.go:117] "RemoveContainer" containerID="c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c" May 13 00:56:48.612845 env[1248]: time="2025-05-13T00:56:48.612755559Z" level=info msg="RemoveContainer for \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\"" May 13 00:56:48.618743 env[1248]: time="2025-05-13T00:56:48.618721022Z" level=info msg="RemoveContainer for \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\" returns successfully" May 13 00:56:48.638043 systemd[1]: Removed slice kubepods-besteffort-pod69e2b727_2454_4e5b_8099_d8c5c48f0796.slice. 
May 13 00:56:48.648208 kubelet[2088]: I0513 00:56:48.648193 2088 scope.go:117] "RemoveContainer" containerID="c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c" May 13 00:56:48.648786 env[1248]: time="2025-05-13T00:56:48.648693883Z" level=error msg="ContainerStatus for \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\": not found" May 13 00:56:48.648940 kubelet[2088]: E0513 00:56:48.648928 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\": not found" containerID="c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c" May 13 00:56:48.662446 kubelet[2088]: I0513 00:56:48.649008 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c"} err="failed to get container status \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5e622174382957049a04e3b0167c8cc09f9406617cc7e850681334f884a056c\": not found" May 13 00:56:48.662649 kubelet[2088]: I0513 00:56:48.662638 2088 scope.go:117] "RemoveContainer" containerID="42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7" May 13 00:56:48.663929 env[1248]: time="2025-05-13T00:56:48.663872864Z" level=info msg="RemoveContainer for \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\"" May 13 00:56:48.665235 env[1248]: time="2025-05-13T00:56:48.665215767Z" level=info msg="RemoveContainer for \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\" returns successfully" May 13 00:56:48.667589 kubelet[2088]: I0513 
00:56:48.666964 2088 scope.go:117] "RemoveContainer" containerID="8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e" May 13 00:56:48.667285 systemd[1]: Removed slice kubepods-burstable-pod39810f2a_680c_4d2b_855f_0328f1e5a87f.slice. May 13 00:56:48.667353 systemd[1]: kubepods-burstable-pod39810f2a_680c_4d2b_855f_0328f1e5a87f.slice: Consumed 4.569s CPU time. May 13 00:56:48.669876 env[1248]: time="2025-05-13T00:56:48.669166994Z" level=info msg="RemoveContainer for \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\"" May 13 00:56:48.670944 env[1248]: time="2025-05-13T00:56:48.670919057Z" level=info msg="RemoveContainer for \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\" returns successfully" May 13 00:56:48.674990 kubelet[2088]: I0513 00:56:48.674974 2088 scope.go:117] "RemoveContainer" containerID="1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66" May 13 00:56:48.677279 env[1248]: time="2025-05-13T00:56:48.676714032Z" level=info msg="RemoveContainer for \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\"" May 13 00:56:48.678546 env[1248]: time="2025-05-13T00:56:48.678429809Z" level=info msg="RemoveContainer for \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\" returns successfully" May 13 00:56:48.679483 kubelet[2088]: I0513 00:56:48.679095 2088 scope.go:117] "RemoveContainer" containerID="13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82" May 13 00:56:48.679723 env[1248]: time="2025-05-13T00:56:48.679701248Z" level=info msg="RemoveContainer for \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\"" May 13 00:56:48.681178 env[1248]: time="2025-05-13T00:56:48.681117830Z" level=info msg="RemoveContainer for \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\" returns successfully" May 13 00:56:48.681442 kubelet[2088]: I0513 00:56:48.681411 2088 scope.go:117] "RemoveContainer" 
containerID="432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c" May 13 00:56:48.684489 env[1248]: time="2025-05-13T00:56:48.684195576Z" level=info msg="RemoveContainer for \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\"" May 13 00:56:48.685646 env[1248]: time="2025-05-13T00:56:48.685631431Z" level=info msg="RemoveContainer for \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\" returns successfully" May 13 00:56:48.685855 kubelet[2088]: I0513 00:56:48.685845 2088 scope.go:117] "RemoveContainer" containerID="42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7" May 13 00:56:48.686074 env[1248]: time="2025-05-13T00:56:48.686021761Z" level=error msg="ContainerStatus for \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\": not found" May 13 00:56:48.686302 kubelet[2088]: E0513 00:56:48.686287 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\": not found" containerID="42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7" May 13 00:56:48.686402 kubelet[2088]: I0513 00:56:48.686388 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7"} err="failed to get container status \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"42d8d55342cb13127d1f62b005ef6f548537db37abdd2d7c88281a2c676b42e7\": not found" May 13 00:56:48.686470 kubelet[2088]: I0513 00:56:48.686460 2088 scope.go:117] "RemoveContainer" 
containerID="8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e" May 13 00:56:48.686648 env[1248]: time="2025-05-13T00:56:48.686605278Z" level=error msg="ContainerStatus for \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\": not found" May 13 00:56:48.686760 kubelet[2088]: E0513 00:56:48.686751 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\": not found" containerID="8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e" May 13 00:56:48.686827 kubelet[2088]: I0513 00:56:48.686813 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e"} err="failed to get container status \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fd895916449a4e43754da86caa12ecf4af6558f96b59ab4b9ec0b0bc61b3d1e\": not found" May 13 00:56:48.686874 kubelet[2088]: I0513 00:56:48.686866 2088 scope.go:117] "RemoveContainer" containerID="1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66" May 13 00:56:48.687056 env[1248]: time="2025-05-13T00:56:48.687004285Z" level=error msg="ContainerStatus for \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\": not found" May 13 00:56:48.687208 kubelet[2088]: E0513 00:56:48.687182 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\": not found" containerID="1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66" May 13 00:56:48.687271 kubelet[2088]: I0513 00:56:48.687260 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66"} err="failed to get container status \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\": rpc error: code = NotFound desc = an error occurred when try to find container \"1549164d57f1451aa0409d3d0ccf2da05bce6ac55197d13b98c6ae26345c5b66\": not found" May 13 00:56:48.687318 kubelet[2088]: I0513 00:56:48.687310 2088 scope.go:117] "RemoveContainer" containerID="13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82" May 13 00:56:48.687504 env[1248]: time="2025-05-13T00:56:48.687458255Z" level=error msg="ContainerStatus for \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\": not found" May 13 00:56:48.687646 kubelet[2088]: E0513 00:56:48.687627 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\": not found" containerID="13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82" May 13 00:56:48.687719 kubelet[2088]: I0513 00:56:48.687708 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82"} err="failed to get container status \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"13a77a34ec5b1785cc27eda938f3ab822fd83671dfdadefa7ba91fe69fc18d82\": not found" May 13 00:56:48.687770 kubelet[2088]: I0513 00:56:48.687762 2088 scope.go:117] "RemoveContainer" containerID="432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c" May 13 00:56:48.687942 env[1248]: time="2025-05-13T00:56:48.687893578Z" level=error msg="ContainerStatus for \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\": not found" May 13 00:56:48.688055 kubelet[2088]: E0513 00:56:48.688045 2088 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\": not found" containerID="432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c" May 13 00:56:48.688120 kubelet[2088]: I0513 00:56:48.688109 2088 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c"} err="failed to get container status \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\": rpc error: code = NotFound desc = an error occurred when try to find container \"432a8a1b7257b881e00ce745ddd2984e23f218d4b7b7d9e49481a7acbdfd633c\": not found" May 13 00:56:48.868704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c09d003d792a76de630c15a0ce6b82d8c8eed28e480580903bfab9db54a0ee1-rootfs.mount: Deactivated successfully. May 13 00:56:48.868769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4-rootfs.mount: Deactivated successfully. 
May 13 00:56:48.868807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8be25f7728fef628f12cfe51f97831271bc1e4def38677bd06065a0eac1188a4-shm.mount: Deactivated successfully. May 13 00:56:48.868842 systemd[1]: var-lib-kubelet-pods-69e2b727\x2d2454\x2d4e5b\x2d8099\x2dd8c5c48f0796-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqq2nw.mount: Deactivated successfully. May 13 00:56:48.868876 systemd[1]: var-lib-kubelet-pods-39810f2a\x2d680c\x2d4d2b\x2d855f\x2d0328f1e5a87f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:56:48.868950 systemd[1]: var-lib-kubelet-pods-39810f2a\x2d680c\x2d4d2b\x2d855f\x2d0328f1e5a87f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:56:48.869033 systemd[1]: var-lib-kubelet-pods-39810f2a\x2d680c\x2d4d2b\x2d855f\x2d0328f1e5a87f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfkbq7.mount: Deactivated successfully. May 13 00:56:49.337410 kubelet[2088]: I0513 00:56:49.337392 2088 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39810f2a-680c-4d2b-855f-0328f1e5a87f" path="/var/lib/kubelet/pods/39810f2a-680c-4d2b-855f-0328f1e5a87f/volumes" May 13 00:56:49.349241 kubelet[2088]: I0513 00:56:49.349230 2088 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e2b727-2454-4e5b-8099-d8c5c48f0796" path="/var/lib/kubelet/pods/69e2b727-2454-4e5b-8099-d8c5c48f0796/volumes" May 13 00:56:49.812871 systemd[1]: Started sshd@26-139.178.70.106:22-147.75.109.163:33870.service. May 13 00:56:49.835383 sshd[3651]: pam_unix(sshd:session): session closed for user core May 13 00:56:49.851924 systemd[1]: sshd@25-139.178.70.106:22-147.75.109.163:57430.service: Deactivated successfully. May 13 00:56:49.852619 systemd[1]: session-25.scope: Deactivated successfully. May 13 00:56:49.854355 systemd-logind[1240]: Session 25 logged out. Waiting for processes to exit. 
May 13 00:56:49.855206 systemd-logind[1240]: Removed session 25. May 13 00:56:49.895608 sshd[3813]: Accepted publickey for core from 147.75.109.163 port 33870 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:49.896494 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:49.900136 systemd[1]: Started session-26.scope. May 13 00:56:49.900852 systemd-logind[1240]: New session 26 of user core. May 13 00:56:50.307729 sshd[3813]: pam_unix(sshd:session): session closed for user core May 13 00:56:50.310984 systemd[1]: Started sshd@27-139.178.70.106:22-147.75.109.163:33882.service. May 13 00:56:50.313520 systemd[1]: sshd@26-139.178.70.106:22-147.75.109.163:33870.service: Deactivated successfully. May 13 00:56:50.314042 systemd[1]: session-26.scope: Deactivated successfully. May 13 00:56:50.314598 systemd-logind[1240]: Session 26 logged out. Waiting for processes to exit. May 13 00:56:50.315591 systemd-logind[1240]: Removed session 26. May 13 00:56:50.346950 sshd[3824]: Accepted publickey for core from 147.75.109.163 port 33882 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:50.348003 sshd[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:50.351257 systemd[1]: Started session-27.scope. May 13 00:56:50.351462 systemd-logind[1240]: New session 27 of user core. May 13 00:56:50.374192 kubelet[2088]: I0513 00:56:50.374168 2088 memory_manager.go:355] "RemoveStaleState removing state" podUID="69e2b727-2454-4e5b-8099-d8c5c48f0796" containerName="cilium-operator" May 13 00:56:50.374454 kubelet[2088]: I0513 00:56:50.374445 2088 memory_manager.go:355] "RemoveStaleState removing state" podUID="39810f2a-680c-4d2b-855f-0328f1e5a87f" containerName="cilium-agent" May 13 00:56:50.391881 systemd[1]: Created slice kubepods-burstable-pod2722e2b6_66b9_4d34_8531_e6a7f069cccc.slice. 
May 13 00:56:50.545995 kubelet[2088]: I0513 00:56:50.545960 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-kernel\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546166 kubelet[2088]: I0513 00:56:50.546151 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-xtables-lock\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546260 kubelet[2088]: I0513 00:56:50.546246 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-config-path\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546375 kubelet[2088]: I0513 00:56:50.546363 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-run\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546465 kubelet[2088]: I0513 00:56:50.546454 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-cgroup\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546548 kubelet[2088]: I0513 00:56:50.546537 2088 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cni-path\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546635 kubelet[2088]: I0513 00:56:50.546624 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-net\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546760 kubelet[2088]: I0513 00:56:50.546721 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-etc-cni-netd\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546800 kubelet[2088]: I0513 00:56:50.546759 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5mn\" (UniqueName: \"kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-kube-api-access-zq5mn\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546800 kubelet[2088]: I0513 00:56:50.546775 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hostproc\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546800 kubelet[2088]: I0513 00:56:50.546788 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-ipsec-secrets\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546800 kubelet[2088]: I0513 00:56:50.546799 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-bpf-maps\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546905 kubelet[2088]: I0513 00:56:50.546811 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-lib-modules\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546905 kubelet[2088]: I0513 00:56:50.546823 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-clustermesh-secrets\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.546905 kubelet[2088]: I0513 00:56:50.546847 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hubble-tls\") pod \"cilium-frtnq\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " pod="kube-system/cilium-frtnq" May 13 00:56:50.593897 sshd[3824]: pam_unix(sshd:session): session closed for user core May 13 00:56:50.596701 systemd[1]: Started sshd@28-139.178.70.106:22-147.75.109.163:33896.service. May 13 00:56:50.603052 systemd-logind[1240]: Session 27 logged out. Waiting for processes to exit. 
May 13 00:56:50.604622 systemd[1]: sshd@27-139.178.70.106:22-147.75.109.163:33882.service: Deactivated successfully. May 13 00:56:50.605122 systemd[1]: session-27.scope: Deactivated successfully. May 13 00:56:50.607138 kubelet[2088]: E0513 00:56:50.607072 2088 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-zq5mn lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-frtnq" podUID="2722e2b6-66b9-4d34-8531-e6a7f069cccc" May 13 00:56:50.607503 systemd-logind[1240]: Removed session 27. May 13 00:56:50.634289 sshd[3838]: Accepted publickey for core from 147.75.109.163 port 33896 ssh2: RSA SHA256:vqcd0a/HrGkVrwMSh8NRsk9omLQxrQEaC7i/Qmd/lrA May 13 00:56:50.635581 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:56:50.639062 systemd[1]: Started session-28.scope. May 13 00:56:50.639399 systemd-logind[1240]: New session 28 of user core. 
May 13 00:56:50.849302 kubelet[2088]: I0513 00:56:50.849183 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-etc-cni-netd\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849302 kubelet[2088]: I0513 00:56:50.849219 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hostproc\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849302 kubelet[2088]: I0513 00:56:50.849235 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-kernel\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849302 kubelet[2088]: I0513 00:56:50.849269 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-config-path\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849302 kubelet[2088]: I0513 00:56:50.849303 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-ipsec-secrets\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849692 kubelet[2088]: I0513 00:56:50.849319 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-run\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849692 kubelet[2088]: I0513 00:56:50.849332 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5mn\" (UniqueName: \"kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-kube-api-access-zq5mn\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849692 kubelet[2088]: I0513 00:56:50.849361 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-clustermesh-secrets\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849692 kubelet[2088]: I0513 00:56:50.849378 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hubble-tls\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849692 kubelet[2088]: I0513 00:56:50.849388 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-bpf-maps\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849692 kubelet[2088]: I0513 00:56:50.849401 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-lib-modules\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849850 kubelet[2088]: I0513 00:56:50.849411 2088 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-xtables-lock\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849850 kubelet[2088]: I0513 00:56:50.849423 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-cgroup\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849850 kubelet[2088]: I0513 00:56:50.849434 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cni-path\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849850 kubelet[2088]: I0513 00:56:50.849448 2088 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-net\") pod \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\" (UID: \"2722e2b6-66b9-4d34-8531-e6a7f069cccc\") " May 13 00:56:50.849850 kubelet[2088]: I0513 00:56:50.849498 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.849977 kubelet[2088]: I0513 00:56:50.849520 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.849977 kubelet[2088]: I0513 00:56:50.849531 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hostproc" (OuterVolumeSpecName: "hostproc") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.849977 kubelet[2088]: I0513 00:56:50.849541 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.850721 kubelet[2088]: I0513 00:56:50.850702 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:56:50.850791 kubelet[2088]: I0513 00:56:50.850725 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.850791 kubelet[2088]: I0513 00:56:50.850739 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.850791 kubelet[2088]: I0513 00:56:50.850753 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.850791 kubelet[2088]: I0513 00:56:50.850764 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.850791 kubelet[2088]: I0513 00:56:50.850775 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cni-path" (OuterVolumeSpecName: "cni-path") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.851133 kubelet[2088]: I0513 00:56:50.851117 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:56:50.855722 systemd[1]: var-lib-kubelet-pods-2722e2b6\x2d66b9\x2d4d34\x2d8531\x2de6a7f069cccc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzq5mn.mount: Deactivated successfully. May 13 00:56:50.856729 kubelet[2088]: I0513 00:56:50.856710 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-kube-api-access-zq5mn" (OuterVolumeSpecName: "kube-api-access-zq5mn") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "kube-api-access-zq5mn". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:56:50.858186 kubelet[2088]: I0513 00:56:50.858160 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:56:50.858673 kubelet[2088]: I0513 00:56:50.858653 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:56:50.859313 kubelet[2088]: I0513 00:56:50.859298 2088 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2722e2b6-66b9-4d34-8531-e6a7f069cccc" (UID: "2722e2b6-66b9-4d34-8531-e6a7f069cccc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:56:50.949640 kubelet[2088]: I0513 00:56:50.949612 2088 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.949809 kubelet[2088]: I0513 00:56:50.949798 2088 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.949890 kubelet[2088]: I0513 00:56:50.949879 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.949962 kubelet[2088]: I0513 00:56:50.949953 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.950031 kubelet[2088]: I0513 00:56:50.950022 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.950108 kubelet[2088]: I0513 00:56:50.950098 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.950186 kubelet[2088]: I0513 00:56:50.950176 2088 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zq5mn\" (UniqueName: \"kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-kube-api-access-zq5mn\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.950253 kubelet[2088]: I0513 00:56:50.950244 2088 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2722e2b6-66b9-4d34-8531-e6a7f069cccc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.954419 kubelet[2088]: I0513 00:56:50.950415 2088 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.954419 kubelet[2088]: I0513 00:56:50.950424 2088 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2722e2b6-66b9-4d34-8531-e6a7f069cccc-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.954419 kubelet[2088]: I0513 00:56:50.950430 2088 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-bpf-maps\") on node \"localhost\" 
DevicePath \"\"" May 13 00:56:50.954419 kubelet[2088]: I0513 00:56:50.950436 2088 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.954419 kubelet[2088]: I0513 00:56:50.950444 2088 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.954419 kubelet[2088]: I0513 00:56:50.950450 2088 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:56:50.954419 kubelet[2088]: I0513 00:56:50.950456 2088 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2722e2b6-66b9-4d34-8531-e6a7f069cccc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:56:51.339936 systemd[1]: Removed slice kubepods-burstable-pod2722e2b6_66b9_4d34_8531_e6a7f069cccc.slice. May 13 00:56:51.650850 systemd[1]: var-lib-kubelet-pods-2722e2b6\x2d66b9\x2d4d34\x2d8531\x2de6a7f069cccc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:56:51.650907 systemd[1]: var-lib-kubelet-pods-2722e2b6\x2d66b9\x2d4d34\x2d8531\x2de6a7f069cccc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 13 00:56:51.650943 systemd[1]: var-lib-kubelet-pods-2722e2b6\x2d66b9\x2d4d34\x2d8531\x2de6a7f069cccc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:56:51.693321 systemd[1]: Created slice kubepods-burstable-pod56c05a71_b7af_4114_8c53_d59e02dc4fa3.slice. 
May 13 00:56:51.855091 kubelet[2088]: I0513 00:56:51.855066 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-hostproc\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855091 kubelet[2088]: I0513 00:56:51.855091 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-cilium-cgroup\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855371 kubelet[2088]: I0513 00:56:51.855105 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-cni-path\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855371 kubelet[2088]: I0513 00:56:51.855113 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56c05a71-b7af-4114-8c53-d59e02dc4fa3-hubble-tls\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855371 kubelet[2088]: I0513 00:56:51.855124 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-cilium-run\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855371 kubelet[2088]: I0513 00:56:51.855134 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-xtables-lock\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855371 kubelet[2088]: I0513 00:56:51.855143 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56c05a71-b7af-4114-8c53-d59e02dc4fa3-cilium-config-path\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855371 kubelet[2088]: I0513 00:56:51.855153 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn9w2\" (UniqueName: \"kubernetes.io/projected/56c05a71-b7af-4114-8c53-d59e02dc4fa3-kube-api-access-rn9w2\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855505 kubelet[2088]: I0513 00:56:51.855164 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/56c05a71-b7af-4114-8c53-d59e02dc4fa3-cilium-ipsec-secrets\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855505 kubelet[2088]: I0513 00:56:51.855172 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-etc-cni-netd\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855505 kubelet[2088]: I0513 00:56:51.855216 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-bpf-maps\") 
pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855505 kubelet[2088]: I0513 00:56:51.855235 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56c05a71-b7af-4114-8c53-d59e02dc4fa3-clustermesh-secrets\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855505 kubelet[2088]: I0513 00:56:51.855245 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-host-proc-sys-net\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855505 kubelet[2088]: I0513 00:56:51.855255 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-lib-modules\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.855624 kubelet[2088]: I0513 00:56:51.855267 2088 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56c05a71-b7af-4114-8c53-d59e02dc4fa3-host-proc-sys-kernel\") pod \"cilium-wxdfq\" (UID: \"56c05a71-b7af-4114-8c53-d59e02dc4fa3\") " pod="kube-system/cilium-wxdfq" May 13 00:56:51.996115 env[1248]: time="2025-05-13T00:56:51.995649896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxdfq,Uid:56c05a71-b7af-4114-8c53-d59e02dc4fa3,Namespace:kube-system,Attempt:0,}" May 13 00:56:52.028265 env[1248]: time="2025-05-13T00:56:52.028217875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:56:52.028265 env[1248]: time="2025-05-13T00:56:52.028244181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:56:52.028417 env[1248]: time="2025-05-13T00:56:52.028256711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:56:52.028998 env[1248]: time="2025-05-13T00:56:52.028506077Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f pid=3868 runtime=io.containerd.runc.v2 May 13 00:56:52.036717 systemd[1]: Started cri-containerd-d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f.scope. May 13 00:56:52.055505 env[1248]: time="2025-05-13T00:56:52.055480464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wxdfq,Uid:56c05a71-b7af-4114-8c53-d59e02dc4fa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\"" May 13 00:56:52.059697 env[1248]: time="2025-05-13T00:56:52.059672999Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:56:52.095502 env[1248]: time="2025-05-13T00:56:52.095473955Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2c6da29118555bcfe1252e984782d5dd267629cce69cf504e2a3945fdc5ecac\"" May 13 00:56:52.096025 env[1248]: time="2025-05-13T00:56:52.096010788Z" level=info msg="StartContainer for \"c2c6da29118555bcfe1252e984782d5dd267629cce69cf504e2a3945fdc5ecac\"" May 13 00:56:52.105883 systemd[1]: Started 
cri-containerd-c2c6da29118555bcfe1252e984782d5dd267629cce69cf504e2a3945fdc5ecac.scope. May 13 00:56:52.125247 env[1248]: time="2025-05-13T00:56:52.125210133Z" level=info msg="StartContainer for \"c2c6da29118555bcfe1252e984782d5dd267629cce69cf504e2a3945fdc5ecac\" returns successfully" May 13 00:56:52.141533 systemd[1]: cri-containerd-c2c6da29118555bcfe1252e984782d5dd267629cce69cf504e2a3945fdc5ecac.scope: Deactivated successfully. May 13 00:56:52.201786 env[1248]: time="2025-05-13T00:56:52.201750847Z" level=info msg="shim disconnected" id=c2c6da29118555bcfe1252e984782d5dd267629cce69cf504e2a3945fdc5ecac May 13 00:56:52.201786 env[1248]: time="2025-05-13T00:56:52.201781565Z" level=warning msg="cleaning up after shim disconnected" id=c2c6da29118555bcfe1252e984782d5dd267629cce69cf504e2a3945fdc5ecac namespace=k8s.io May 13 00:56:52.201786 env[1248]: time="2025-05-13T00:56:52.201789024Z" level=info msg="cleaning up dead shim" May 13 00:56:52.207375 env[1248]: time="2025-05-13T00:56:52.207327973Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3953 runtime=io.containerd.runc.v2\n" May 13 00:56:52.489568 kubelet[2088]: E0513 00:56:52.489517 2088 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:56:52.671403 env[1248]: time="2025-05-13T00:56:52.671378031Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:56:52.742664 env[1248]: time="2025-05-13T00:56:52.742593466Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea\"" May 13 00:56:52.743503 env[1248]: time="2025-05-13T00:56:52.743491158Z" level=info msg="StartContainer for \"5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea\"" May 13 00:56:52.757174 systemd[1]: Started cri-containerd-5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea.scope. May 13 00:56:52.801164 env[1248]: time="2025-05-13T00:56:52.801118867Z" level=info msg="StartContainer for \"5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea\" returns successfully" May 13 00:56:52.851728 systemd[1]: cri-containerd-5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea.scope: Deactivated successfully. May 13 00:56:52.931707 env[1248]: time="2025-05-13T00:56:52.931677674Z" level=info msg="shim disconnected" id=5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea May 13 00:56:52.931881 env[1248]: time="2025-05-13T00:56:52.931869668Z" level=warning msg="cleaning up after shim disconnected" id=5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea namespace=k8s.io May 13 00:56:52.931931 env[1248]: time="2025-05-13T00:56:52.931921732Z" level=info msg="cleaning up dead shim" May 13 00:56:52.937090 env[1248]: time="2025-05-13T00:56:52.937062419Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4015 runtime=io.containerd.runc.v2\n" May 13 00:56:53.335841 kubelet[2088]: I0513 00:56:53.335715 2088 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2722e2b6-66b9-4d34-8531-e6a7f069cccc" path="/var/lib/kubelet/pods/2722e2b6-66b9-4d34-8531-e6a7f069cccc/volumes" May 13 00:56:53.652888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5230f2f54bebbc12bd9fd5df8ed912aaa2169ae7ef7badcff06b7bf96830f2ea-rootfs.mount: Deactivated successfully. 
May 13 00:56:53.673098 env[1248]: time="2025-05-13T00:56:53.673075194Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:56:53.679976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80979817.mount: Deactivated successfully. May 13 00:56:53.684066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882832567.mount: Deactivated successfully. May 13 00:56:53.686923 env[1248]: time="2025-05-13T00:56:53.686899401Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"303171ae6f4958ba0cc73d7a2c5f4e99f036779471bde166b8099d0a2621f0ff\"" May 13 00:56:53.687429 env[1248]: time="2025-05-13T00:56:53.687411702Z" level=info msg="StartContainer for \"303171ae6f4958ba0cc73d7a2c5f4e99f036779471bde166b8099d0a2621f0ff\"" May 13 00:56:53.703294 systemd[1]: Started cri-containerd-303171ae6f4958ba0cc73d7a2c5f4e99f036779471bde166b8099d0a2621f0ff.scope. May 13 00:56:53.725582 env[1248]: time="2025-05-13T00:56:53.725552607Z" level=info msg="StartContainer for \"303171ae6f4958ba0cc73d7a2c5f4e99f036779471bde166b8099d0a2621f0ff\" returns successfully" May 13 00:56:53.736741 systemd[1]: cri-containerd-303171ae6f4958ba0cc73d7a2c5f4e99f036779471bde166b8099d0a2621f0ff.scope: Deactivated successfully. 
May 13 00:56:53.750259 env[1248]: time="2025-05-13T00:56:53.750220773Z" level=info msg="shim disconnected" id=303171ae6f4958ba0cc73d7a2c5f4e99f036779471bde166b8099d0a2621f0ff May 13 00:56:53.750374 env[1248]: time="2025-05-13T00:56:53.750257894Z" level=warning msg="cleaning up after shim disconnected" id=303171ae6f4958ba0cc73d7a2c5f4e99f036779471bde166b8099d0a2621f0ff namespace=k8s.io May 13 00:56:53.750374 env[1248]: time="2025-05-13T00:56:53.750276770Z" level=info msg="cleaning up dead shim" May 13 00:56:53.754837 env[1248]: time="2025-05-13T00:56:53.754815757Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4075 runtime=io.containerd.runc.v2\n" May 13 00:56:54.676363 env[1248]: time="2025-05-13T00:56:54.675820327Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:56:54.683508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821347710.mount: Deactivated successfully. May 13 00:56:54.687773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3340065876.mount: Deactivated successfully. May 13 00:56:54.691125 env[1248]: time="2025-05-13T00:56:54.691097204Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1939167bb9a0f399a0e298304b153baf3c411af7e031fb7b4068128a8b09841b\"" May 13 00:56:54.691615 env[1248]: time="2025-05-13T00:56:54.691596636Z" level=info msg="StartContainer for \"1939167bb9a0f399a0e298304b153baf3c411af7e031fb7b4068128a8b09841b\"" May 13 00:56:54.712640 systemd[1]: Started cri-containerd-1939167bb9a0f399a0e298304b153baf3c411af7e031fb7b4068128a8b09841b.scope. 
May 13 00:56:54.748102 env[1248]: time="2025-05-13T00:56:54.748072444Z" level=info msg="StartContainer for \"1939167bb9a0f399a0e298304b153baf3c411af7e031fb7b4068128a8b09841b\" returns successfully" May 13 00:56:54.748960 systemd[1]: cri-containerd-1939167bb9a0f399a0e298304b153baf3c411af7e031fb7b4068128a8b09841b.scope: Deactivated successfully. May 13 00:56:54.760975 env[1248]: time="2025-05-13T00:56:54.760945374Z" level=info msg="shim disconnected" id=1939167bb9a0f399a0e298304b153baf3c411af7e031fb7b4068128a8b09841b May 13 00:56:54.760975 env[1248]: time="2025-05-13T00:56:54.760974789Z" level=warning msg="cleaning up after shim disconnected" id=1939167bb9a0f399a0e298304b153baf3c411af7e031fb7b4068128a8b09841b namespace=k8s.io May 13 00:56:54.761115 env[1248]: time="2025-05-13T00:56:54.760988427Z" level=info msg="cleaning up dead shim" May 13 00:56:54.765632 env[1248]: time="2025-05-13T00:56:54.765613521Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:56:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4133 runtime=io.containerd.runc.v2\n" May 13 00:56:55.679613 env[1248]: time="2025-05-13T00:56:55.679579420Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:56:55.688638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693842458.mount: Deactivated successfully. 
May 13 00:56:55.696322 env[1248]: time="2025-05-13T00:56:55.696290009Z" level=info msg="CreateContainer within sandbox \"d49fd1297bcd054baafc6facff5ef6f70e41e7e55c911c6f3a1c1cd93a856f2f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d1a874e625205736445c1d52eb84d5192c885844e72f4bba64865f681d972eb\"" May 13 00:56:55.696966 env[1248]: time="2025-05-13T00:56:55.696945052Z" level=info msg="StartContainer for \"7d1a874e625205736445c1d52eb84d5192c885844e72f4bba64865f681d972eb\"" May 13 00:56:55.715811 systemd[1]: Started cri-containerd-7d1a874e625205736445c1d52eb84d5192c885844e72f4bba64865f681d972eb.scope. May 13 00:56:55.745375 env[1248]: time="2025-05-13T00:56:55.745318205Z" level=info msg="StartContainer for \"7d1a874e625205736445c1d52eb84d5192c885844e72f4bba64865f681d972eb\" returns successfully" May 13 00:56:56.482358 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 13 00:56:56.692270 kubelet[2088]: I0513 00:56:56.692222 2088 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wxdfq" podStartSLOduration=5.692209804 podStartE2EDuration="5.692209804s" podCreationTimestamp="2025-05-13 00:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:56:56.691795519 +0000 UTC m=+139.500627538" watchObservedRunningTime="2025-05-13 00:56:56.692209804 +0000 UTC m=+139.501041819" May 13 00:56:56.946582 systemd[1]: run-containerd-runc-k8s.io-7d1a874e625205736445c1d52eb84d5192c885844e72f4bba64865f681d972eb-runc.fkdLin.mount: Deactivated successfully. 
May 13 00:56:58.853253 systemd-networkd[1061]: lxc_health: Link UP May 13 00:56:58.859362 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 00:56:58.861657 systemd-networkd[1061]: lxc_health: Gained carrier May 13 00:57:00.286493 systemd-networkd[1061]: lxc_health: Gained IPv6LL May 13 00:57:01.212578 systemd[1]: run-containerd-runc-k8s.io-7d1a874e625205736445c1d52eb84d5192c885844e72f4bba64865f681d972eb-runc.vqGKTq.mount: Deactivated successfully. May 13 00:57:03.409250 systemd[1]: run-containerd-runc-k8s.io-7d1a874e625205736445c1d52eb84d5192c885844e72f4bba64865f681d972eb-runc.YF3RBM.mount: Deactivated successfully. May 13 00:57:05.544095 sshd[3838]: pam_unix(sshd:session): session closed for user core May 13 00:57:05.547253 systemd[1]: sshd@28-139.178.70.106:22-147.75.109.163:33896.service: Deactivated successfully. May 13 00:57:05.547885 systemd[1]: session-28.scope: Deactivated successfully. May 13 00:57:05.548177 systemd-logind[1240]: Session 28 logged out. Waiting for processes to exit. May 13 00:57:05.549003 systemd-logind[1240]: Removed session 28.