Jul 14 23:26:55.655265 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 14 20:42:36 -00 2025
Jul 14 23:26:55.655279 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba
Jul 14 23:26:55.655286 kernel: Disabled fast string operations
Jul 14 23:26:55.655290 kernel: BIOS-provided physical RAM map:
Jul 14 23:26:55.655294 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 14 23:26:55.655298 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 14 23:26:55.655304 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 14 23:26:55.655308 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 14 23:26:55.655312 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 14 23:26:55.655316 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 14 23:26:55.655320 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 14 23:26:55.655324 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 14 23:26:55.655328 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 14 23:26:55.655332 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 14 23:26:55.655338 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 14 23:26:55.655343 kernel: NX (Execute Disable) protection: active
Jul 14 23:26:55.655348 kernel: SMBIOS 2.7 present.
Jul 14 23:26:55.655352 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jul 14 23:26:55.655357 kernel: vmware: hypercall mode: 0x00
Jul 14 23:26:55.655361 kernel: Hypervisor detected: VMware
Jul 14 23:26:55.655366 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jul 14 23:26:55.655371 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jul 14 23:26:55.655375 kernel: vmware: using clock offset of 3580613916 ns
Jul 14 23:26:55.655380 kernel: tsc: Detected 3408.000 MHz processor
Jul 14 23:26:55.655385 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 23:26:55.655389 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 23:26:55.655394 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jul 14 23:26:55.655399 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 23:26:55.655403 kernel: total RAM covered: 3072M
Jul 14 23:26:55.655409 kernel: Found optimal setting for mtrr clean up
Jul 14 23:26:55.655461 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jul 14 23:26:55.655467 kernel: Using GB pages for direct mapping
Jul 14 23:26:55.655472 kernel: ACPI: Early table checksum verification disabled
Jul 14 23:26:55.655476 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jul 14 23:26:55.655481 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jul 14 23:26:55.655486 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jul 14 23:26:55.655490 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jul 14 23:26:55.655495 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 14 23:26:55.655499 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jul 14 23:26:55.655506 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jul 14 23:26:55.655513 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jul 14 23:26:55.655518 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jul 14 23:26:55.655523 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jul 14 23:26:55.655528 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jul 14 23:26:55.655534 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jul 14 23:26:55.655539 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jul 14 23:26:55.655544 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jul 14 23:26:55.655549 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 14 23:26:55.655553 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jul 14 23:26:55.655558 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jul 14 23:26:55.655563 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jul 14 23:26:55.655568 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jul 14 23:26:55.655573 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jul 14 23:26:55.655579 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jul 14 23:26:55.655584 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jul 14 23:26:55.655589 kernel: system APIC only can use physical flat
Jul 14 23:26:55.655594 kernel: Setting APIC routing to physical flat.
Jul 14 23:26:55.655599 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 14 23:26:55.655604 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jul 14 23:26:55.655609 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jul 14 23:26:55.655614 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jul 14 23:26:55.655618 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jul 14 23:26:55.655624 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jul 14 23:26:55.655629 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jul 14 23:26:55.655634 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jul 14 23:26:55.655639 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jul 14 23:26:55.655644 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jul 14 23:26:55.655648 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jul 14 23:26:55.655653 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jul 14 23:26:55.655658 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jul 14 23:26:55.655663 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jul 14 23:26:55.655668 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jul 14 23:26:55.655673 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jul 14 23:26:55.655678 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jul 14 23:26:55.655683 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jul 14 23:26:55.655688 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jul 14 23:26:55.655693 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jul 14 23:26:55.655698 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jul 14 23:26:55.655702 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jul 14 23:26:55.655707 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jul 14 23:26:55.655712 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jul 14 23:26:55.655717 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jul 14 23:26:55.655723 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jul 14 23:26:55.655761 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jul 14 23:26:55.655767 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jul 14 23:26:55.655772 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jul 14 23:26:55.655777 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jul 14 23:26:55.655781 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jul 14 23:26:55.655786 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jul 14 23:26:55.655791 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jul 14 23:26:55.655796 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jul 14 23:26:55.655801 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jul 14 23:26:55.655808 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jul 14 23:26:55.655813 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jul 14 23:26:55.655818 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jul 14 23:26:55.655822 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jul 14 23:26:55.655827 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jul 14 23:26:55.655832 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jul 14 23:26:55.655837 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jul 14 23:26:55.655842 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jul 14 23:26:55.655847 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jul 14 23:26:55.655852 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jul 14 23:26:55.655857 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jul 14 23:26:55.655862 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jul 14 23:26:55.655867 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jul 14 23:26:55.655872 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jul 14 23:26:55.655877 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jul 14 23:26:55.655882 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jul 14 23:26:55.655886 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jul 14 23:26:55.655891 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jul 14 23:26:55.655896 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jul 14 23:26:55.655902 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jul 14 23:26:55.655907 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jul 14 23:26:55.655912 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jul 14 23:26:55.655916 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jul 14 23:26:55.655921 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jul 14 23:26:55.655926 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jul 14 23:26:55.655931 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jul 14 23:26:55.655940 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jul 14 23:26:55.655946 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jul 14 23:26:55.655956 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jul 14 23:26:55.655962 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jul 14 23:26:55.655967 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jul 14 23:26:55.655973 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jul 14 23:26:55.655978 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jul 14 23:26:55.655983 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jul 14 23:26:55.655989 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jul 14 23:26:55.655994 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jul 14 23:26:55.655999 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jul 14 23:26:55.656005 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jul 14 23:26:55.656011 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jul 14 23:26:55.656016 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jul 14 23:26:55.656021 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jul 14 23:26:55.656026 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jul 14 23:26:55.656031 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jul 14 23:26:55.656054 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jul 14 23:26:55.656060 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jul 14 23:26:55.656066 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jul 14 23:26:55.656071 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jul 14 23:26:55.656078 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jul 14 23:26:55.656083 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jul 14 23:26:55.656088 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jul 14 23:26:55.656093 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jul 14 23:26:55.656099 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jul 14 23:26:55.656104 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jul 14 23:26:55.656109 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jul 14 23:26:55.656114 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jul 14 23:26:55.656119 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jul 14 23:26:55.656125 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jul 14 23:26:55.656131 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jul 14 23:26:55.656136 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jul 14 23:26:55.656141 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jul 14 23:26:55.656146 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jul 14 23:26:55.656151 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jul 14 23:26:55.656157 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jul 14 23:26:55.656162 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jul 14 23:26:55.656167 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jul 14 23:26:55.656172 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jul 14 23:26:55.656177 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jul 14 23:26:55.656183 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jul 14 23:26:55.656189 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jul 14 23:26:55.656194 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jul 14 23:26:55.656199 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jul 14 23:26:55.656204 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jul 14 23:26:55.656209 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jul 14 23:26:55.656214 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jul 14 23:26:55.656220 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jul 14 23:26:55.656225 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jul 14 23:26:55.656230 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jul 14 23:26:55.656236 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jul 14 23:26:55.656241 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jul 14 23:26:55.656247 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jul 14 23:26:55.656252 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jul 14 23:26:55.656257 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jul 14 23:26:55.656262 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jul 14 23:26:55.656267 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jul 14 23:26:55.656272 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jul 14 23:26:55.656277 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jul 14 23:26:55.656284 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jul 14 23:26:55.656289 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jul 14 23:26:55.656294 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jul 14 23:26:55.656299 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jul 14 23:26:55.656304 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jul 14 23:26:55.656310 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jul 14 23:26:55.656315 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jul 14 23:26:55.656320 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 14 23:26:55.656325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 14 23:26:55.656331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jul 14 23:26:55.656337 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jul 14 23:26:55.656343 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jul 14 23:26:55.656348 kernel: Zone ranges:
Jul 14 23:26:55.656354 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 23:26:55.656359 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jul 14 23:26:55.656364 kernel: Normal empty
Jul 14 23:26:55.656369 kernel: Movable zone start for each node
Jul 14 23:26:55.656375 kernel: Early memory node ranges
Jul 14 23:26:55.656380 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jul 14 23:26:55.656386 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jul 14 23:26:55.656392 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jul 14 23:26:55.656397 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jul 14 23:26:55.656402 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 23:26:55.656408 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jul 14 23:26:55.656413 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jul 14 23:26:55.656418 kernel: ACPI: PM-Timer IO Port: 0x1008
Jul 14 23:26:55.656424 kernel: system APIC only can use physical flat
Jul 14 23:26:55.656429 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jul 14 23:26:55.656434 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jul 14 23:26:55.656440 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jul 14 23:26:55.656446 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jul 14 23:26:55.656451 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jul 14 23:26:55.656456 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jul 14 23:26:55.656461 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jul 14 23:26:55.656467 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jul 14 23:26:55.656472 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jul 14 23:26:55.656478 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jul 14 23:26:55.656483 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jul 14 23:26:55.656537 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jul 14 23:26:55.656545 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jul 14 23:26:55.656550 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jul 14 23:26:55.656555 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jul 14 23:26:55.656561 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jul 14 23:26:55.656566 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jul 14 23:26:55.656571 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jul 14 23:26:55.656577 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jul 14 23:26:55.656582 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jul 14 23:26:55.656587 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jul 14 23:26:55.656594 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jul 14 23:26:55.656600 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jul 14 23:26:55.656605 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jul 14 23:26:55.656610 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jul 14 23:26:55.656615 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jul 14 23:26:55.656620 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jul 14 23:26:55.656626 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jul 14 23:26:55.656631 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jul 14 23:26:55.656636 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jul 14 23:26:55.656642 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jul 14 23:26:55.656648 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jul 14 23:26:55.656653 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jul 14 23:26:55.656658 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jul 14 23:26:55.656663 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jul 14 23:26:55.656669 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jul 14 23:26:55.656674 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jul 14 23:26:55.656679 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jul 14 23:26:55.656685 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jul 14 23:26:55.656690 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jul 14 23:26:55.656696 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jul 14 23:26:55.656702 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jul 14 23:26:55.656707 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jul 14 23:26:55.656712 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jul 14 23:26:55.656717 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jul 14 23:26:55.656722 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jul 14 23:26:55.656728 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jul 14 23:26:55.656733 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jul 14 23:26:55.656738 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jul 14 23:26:55.656744 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jul 14 23:26:55.656749 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jul 14 23:26:55.656755 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jul 14 23:26:55.656760 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jul 14 23:26:55.656765 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jul 14 23:26:55.656770 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jul 14 23:26:55.656776 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jul 14 23:26:55.656781 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jul 14 23:26:55.656786 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jul 14 23:26:55.656792 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jul 14 23:26:55.656798 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jul 14 23:26:55.656803 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jul 14 23:26:55.656808 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jul 14 23:26:55.656813 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jul 14 23:26:55.656818 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jul 14 23:26:55.656824 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jul 14 23:26:55.656829 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jul 14 23:26:55.656834 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jul 14 23:26:55.656840 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jul 14 23:26:55.656846 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jul 14 23:26:55.656851 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jul 14 23:26:55.656857 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jul 14 23:26:55.656862 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jul 14 23:26:55.656867 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jul 14 23:26:55.656872 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jul 14 23:26:55.656877 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jul 14 23:26:55.656883 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jul 14 23:26:55.656888 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jul 14 23:26:55.656894 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jul 14 23:26:55.656899 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jul 14 23:26:55.656904 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jul 14 23:26:55.656910 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jul 14 23:26:55.656915 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jul 14 23:26:55.656920 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jul 14 23:26:55.656925 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jul 14 23:26:55.656930 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jul 14 23:26:55.656936 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jul 14 23:26:55.656941 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jul 14 23:26:55.656952 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jul 14 23:26:55.656958 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jul 14 23:26:55.656963 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jul 14 23:26:55.656968 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jul 14 23:26:55.656973 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jul 14 23:26:55.656978 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jul 14 23:26:55.656984 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jul 14 23:26:55.656989 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jul 14 23:26:55.657009 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jul 14 23:26:55.657017 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jul 14 23:26:55.657039 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jul 14 23:26:55.657044 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jul 14 23:26:55.657050 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jul 14 23:26:55.657055 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jul 14 23:26:55.657060 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jul 14 23:26:55.657065 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jul 14 23:26:55.657070 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jul 14 23:26:55.657075 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jul 14 23:26:55.657082 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jul 14 23:26:55.657088 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jul 14 23:26:55.657093 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jul 14 23:26:55.657098 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jul 14 23:26:55.657103 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jul 14 23:26:55.657108 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jul 14 23:26:55.657114 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jul 14 23:26:55.657119 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jul 14 23:26:55.657125 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jul 14 23:26:55.657130 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jul 14 23:26:55.657136 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jul 14 23:26:55.657141 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jul 14 23:26:55.657146 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jul 14 23:26:55.657152 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jul 14 23:26:55.657157 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jul 14 23:26:55.657162 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jul 14 23:26:55.657167 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jul 14 23:26:55.657172 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jul 14 23:26:55.657178 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jul 14 23:26:55.657184 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jul 14 23:26:55.657190 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jul 14 23:26:55.657195 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jul 14 23:26:55.657200 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jul 14 23:26:55.657205 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jul 14 23:26:55.657211 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jul 14 23:26:55.657226 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 23:26:55.657232 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jul 14 23:26:55.657237 kernel: TSC deadline timer available
Jul 14 23:26:55.657243 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jul 14 23:26:55.657249 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jul 14 23:26:55.657255 kernel: Booting paravirtualized kernel on VMware hypervisor
Jul 14 23:26:55.657260 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 23:26:55.657266 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Jul 14 23:26:55.657271 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Jul 14 23:26:55.657277 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Jul 14 23:26:55.657282 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jul 14 23:26:55.657287 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jul 14 23:26:55.657293 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jul 14 23:26:55.657298 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jul 14 23:26:55.657304 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jul 14 23:26:55.657309 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jul 14 23:26:55.657314 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jul 14 23:26:55.657326 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jul 14 23:26:55.657332 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jul 14 23:26:55.657338 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jul 14 23:26:55.657344 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jul 14 23:26:55.657350 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jul 14 23:26:55.657356 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jul 14 23:26:55.657361 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jul 14 23:26:55.657367 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jul 14 23:26:55.657372 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jul 14 23:26:55.657378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jul 14 23:26:55.657383 kernel: Policy zone: DMA32
Jul 14 23:26:55.657390 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba
Jul 14 23:26:55.657397 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 23:26:55.657402 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jul 14 23:26:55.657408 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jul 14 23:26:55.657414 kernel: printk: log_buf_len min size: 262144 bytes
Jul 14 23:26:55.657419 kernel: printk: log_buf_len: 1048576 bytes
Jul 14 23:26:55.657425 kernel: printk: early log buf free: 239728(91%)
Jul 14 23:26:55.657430 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 23:26:55.657436 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 14 23:26:55.657442 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 23:26:55.657449 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47476K init, 4104K bss, 155976K reserved, 0K cma-reserved)
Jul 14 23:26:55.657455 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jul 14 23:26:55.657460 kernel: ftrace: allocating 34607 entries in 136 pages
Jul 14 23:26:55.657466 kernel: ftrace: allocated 136 pages with 2 groups
Jul 14 23:26:55.657473 kernel: rcu: Hierarchical RCU implementation.
Jul 14 23:26:55.657481 kernel: rcu: RCU event tracing is enabled.
Jul 14 23:26:55.657487 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jul 14 23:26:55.657492 kernel: Rude variant of Tasks RCU enabled.
Jul 14 23:26:55.657498 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 23:26:55.657503 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 23:26:55.657509 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jul 14 23:26:55.657515 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jul 14 23:26:55.657521 kernel: random: crng init done
Jul 14 23:26:55.657526 kernel: Console: colour VGA+ 80x25
Jul 14 23:26:55.657532 kernel: printk: console [tty0] enabled
Jul 14 23:26:55.657538 kernel: printk: console [ttyS0] enabled
Jul 14 23:26:55.657544 kernel: ACPI: Core revision 20210730
Jul 14 23:26:55.657550 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jul 14 23:26:55.657556 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 23:26:55.657561 kernel: x2apic enabled
Jul 14 23:26:55.657567 kernel: Switched APIC routing to physical x2apic.
Jul 14 23:26:55.657573 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 23:26:55.657579 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jul 14 23:26:55.657584 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jul 14 23:26:55.657591 kernel: Disabled fast string operations
Jul 14 23:26:55.657597 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 14 23:26:55.657602 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jul 14 23:26:55.657608 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 23:26:55.657614 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Jul 14 23:26:55.657620 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 14 23:26:55.657626 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 14 23:26:55.657631 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 14 23:26:55.657638 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 14 23:26:55.657644 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 14 23:26:55.657650 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 14 23:26:55.657655 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 14 23:26:55.657661 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 14 23:26:55.657667 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 14 23:26:55.657672 kernel: GDS: Unknown: Dependent on hypervisor status Jul 14 23:26:55.657678 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 14 23:26:55.657684 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 14 23:26:55.657690 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 14 23:26:55.657696 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 14 23:26:55.657702 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 14 23:26:55.657707 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 14 23:26:55.657713 kernel: Freeing SMP alternatives memory: 32K Jul 14 23:26:55.657719 kernel: pid_max: default: 131072 minimum: 1024 Jul 14 23:26:55.657724 kernel: LSM: Security Framework initializing Jul 14 23:26:55.657730 kernel: SELinux: Initializing. 
Jul 14 23:26:55.657735 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 14 23:26:55.657742 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 14 23:26:55.657752 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 14 23:26:55.657758 kernel: Performance Events: Skylake events, core PMU driver. Jul 14 23:26:55.657764 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 14 23:26:55.657769 kernel: core: CPUID marked event: 'instructions' unavailable Jul 14 23:26:55.657775 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 14 23:26:55.657781 kernel: core: CPUID marked event: 'cache references' unavailable Jul 14 23:26:55.657786 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 14 23:26:55.657791 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 14 23:26:55.657798 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 14 23:26:55.657804 kernel: ... version: 1 Jul 14 23:26:55.657809 kernel: ... bit width: 48 Jul 14 23:26:55.657815 kernel: ... generic registers: 4 Jul 14 23:26:55.657820 kernel: ... value mask: 0000ffffffffffff Jul 14 23:26:55.657827 kernel: ... max period: 000000007fffffff Jul 14 23:26:55.657833 kernel: ... fixed-purpose events: 0 Jul 14 23:26:55.657838 kernel: ... event mask: 000000000000000f Jul 14 23:26:55.657844 kernel: signal: max sigframe size: 1776 Jul 14 23:26:55.657851 kernel: rcu: Hierarchical SRCU implementation. Jul 14 23:26:55.657857 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 14 23:26:55.657862 kernel: smp: Bringing up secondary CPUs ... Jul 14 23:26:55.657868 kernel: x86: Booting SMP configuration: Jul 14 23:26:55.657873 kernel: .... 
node #0, CPUs: #1 Jul 14 23:26:55.657879 kernel: Disabled fast string operations Jul 14 23:26:55.657885 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 14 23:26:55.657890 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 14 23:26:55.657896 kernel: smp: Brought up 1 node, 2 CPUs Jul 14 23:26:55.657901 kernel: smpboot: Max logical packages: 128 Jul 14 23:26:55.657908 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 14 23:26:55.657914 kernel: devtmpfs: initialized Jul 14 23:26:55.657919 kernel: x86/mm: Memory block size: 128MB Jul 14 23:26:55.657925 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 14 23:26:55.657931 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 23:26:55.657936 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 14 23:26:55.657942 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 23:26:55.657973 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 23:26:55.657981 kernel: audit: initializing netlink subsys (disabled) Jul 14 23:26:55.657996 kernel: audit: type=2000 audit(1752535614.086:1): state=initialized audit_enabled=0 res=1 Jul 14 23:26:55.658003 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 23:26:55.658009 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 14 23:26:55.658014 kernel: cpuidle: using governor menu Jul 14 23:26:55.658020 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 14 23:26:55.658026 kernel: ACPI: bus type PCI registered Jul 14 23:26:55.658031 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 23:26:55.658037 kernel: dca service started, version 1.12.1 Jul 14 23:26:55.658043 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 14 23:26:55.658050 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in 
E820 Jul 14 23:26:55.658056 kernel: PCI: Using configuration type 1 for base access Jul 14 23:26:55.658061 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 14 23:26:55.658073 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 23:26:55.658079 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 23:26:55.658085 kernel: ACPI: Added _OSI(Module Device) Jul 14 23:26:55.658090 kernel: ACPI: Added _OSI(Processor Device) Jul 14 23:26:55.658096 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 23:26:55.658101 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 14 23:26:55.658109 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 14 23:26:55.658114 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 14 23:26:55.658120 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 23:26:55.658126 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 14 23:26:55.658132 kernel: ACPI: Interpreter enabled Jul 14 23:26:55.658137 kernel: ACPI: PM: (supports S0 S1 S5) Jul 14 23:26:55.658143 kernel: ACPI: Using IOAPIC for interrupt routing Jul 14 23:26:55.658149 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 14 23:26:55.658154 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 14 23:26:55.658161 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 14 23:26:55.658238 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 23:26:55.658290 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 14 23:26:55.658351 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 14 23:26:55.658361 kernel: PCI host bridge to bus 0000:00 Jul 14 23:26:55.658411 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 14 23:26:55.658459 kernel: pci_bus 0000:00: root bus resource [mem 
0x000cc000-0x000dbfff window] Jul 14 23:26:55.658502 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 14 23:26:55.658544 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 14 23:26:55.658586 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 14 23:26:55.658628 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 14 23:26:55.658683 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 14 23:26:55.658742 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 14 23:26:55.658798 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 14 23:26:55.658851 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 14 23:26:55.658900 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 14 23:26:55.658984 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 14 23:26:55.659039 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 14 23:26:55.659088 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 14 23:26:55.659139 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 14 23:26:55.659194 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 14 23:26:55.659244 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 14 23:26:55.659292 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 14 23:26:55.659347 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 14 23:26:55.659396 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 14 23:26:55.659446 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 14 23:26:55.659498 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jul 14 23:26:55.659547 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 14 23:26:55.659595 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 14 23:26:55.659642 kernel: pci 
0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 14 23:26:55.659691 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 14 23:26:55.659737 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 14 23:26:55.659789 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 14 23:26:55.659844 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.659894 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.659952 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660014 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660074 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660123 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660178 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660227 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660280 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660328 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660380 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660429 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660483 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660532 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660584 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660633 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660685 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660733 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.660787 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660836 kernel: pci 0000:00:16.1: PME# 
supported from D0 D3hot D3cold Jul 14 23:26:55.660889 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.660938 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661007 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661060 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661112 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661160 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661212 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661261 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661313 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661363 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661416 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661466 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661518 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661566 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661619 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661667 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661723 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661771 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661825 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.661874 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.661926 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662001 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662057 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662106 
kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662157 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662204 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662256 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662303 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662356 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662404 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662457 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662505 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662556 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662604 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662657 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662708 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662759 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662806 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.662858 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.662905 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.663016 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.663075 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.663126 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 14 23:26:55.663175 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.663226 kernel: pci_bus 0000:01: extended config space not accessible Jul 14 23:26:55.663274 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 23:26:55.663325 kernel: pci_bus 0000:02: extended config space not accessible Jul 
14 23:26:55.663335 kernel: acpiphp: Slot [32] registered Jul 14 23:26:55.663341 kernel: acpiphp: Slot [33] registered Jul 14 23:26:55.663347 kernel: acpiphp: Slot [34] registered Jul 14 23:26:55.663353 kernel: acpiphp: Slot [35] registered Jul 14 23:26:55.663359 kernel: acpiphp: Slot [36] registered Jul 14 23:26:55.663364 kernel: acpiphp: Slot [37] registered Jul 14 23:26:55.663370 kernel: acpiphp: Slot [38] registered Jul 14 23:26:55.663376 kernel: acpiphp: Slot [39] registered Jul 14 23:26:55.663381 kernel: acpiphp: Slot [40] registered Jul 14 23:26:55.663388 kernel: acpiphp: Slot [41] registered Jul 14 23:26:55.663394 kernel: acpiphp: Slot [42] registered Jul 14 23:26:55.663399 kernel: acpiphp: Slot [43] registered Jul 14 23:26:55.663405 kernel: acpiphp: Slot [44] registered Jul 14 23:26:55.663411 kernel: acpiphp: Slot [45] registered Jul 14 23:26:55.663417 kernel: acpiphp: Slot [46] registered Jul 14 23:26:55.663422 kernel: acpiphp: Slot [47] registered Jul 14 23:26:55.663428 kernel: acpiphp: Slot [48] registered Jul 14 23:26:55.663433 kernel: acpiphp: Slot [49] registered Jul 14 23:26:55.663439 kernel: acpiphp: Slot [50] registered Jul 14 23:26:55.663446 kernel: acpiphp: Slot [51] registered Jul 14 23:26:55.663451 kernel: acpiphp: Slot [52] registered Jul 14 23:26:55.663457 kernel: acpiphp: Slot [53] registered Jul 14 23:26:55.663463 kernel: acpiphp: Slot [54] registered Jul 14 23:26:55.663469 kernel: acpiphp: Slot [55] registered Jul 14 23:26:55.663474 kernel: acpiphp: Slot [56] registered Jul 14 23:26:55.663480 kernel: acpiphp: Slot [57] registered Jul 14 23:26:55.663485 kernel: acpiphp: Slot [58] registered Jul 14 23:26:55.663491 kernel: acpiphp: Slot [59] registered Jul 14 23:26:55.663497 kernel: acpiphp: Slot [60] registered Jul 14 23:26:55.663503 kernel: acpiphp: Slot [61] registered Jul 14 23:26:55.663509 kernel: acpiphp: Slot [62] registered Jul 14 23:26:55.663515 kernel: acpiphp: Slot [63] registered Jul 14 23:26:55.663562 kernel: pci 0000:00:11.0: 
PCI bridge to [bus 02] (subtractive decode) Jul 14 23:26:55.663611 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 23:26:55.663658 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 23:26:55.663705 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:26:55.663752 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 14 23:26:55.663801 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 14 23:26:55.663850 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 14 23:26:55.663897 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 14 23:26:55.663943 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 14 23:26:55.664007 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 14 23:26:55.664059 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 14 23:26:55.664110 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 14 23:26:55.664163 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 23:26:55.664213 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 14 23:26:55.664263 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 14 23:26:55.664311 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 23:26:55.664359 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 23:26:55.664407 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 23:26:55.664456 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 23:26:55.664506 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 14 23:26:55.664554 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 23:26:55.664602 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 23:26:55.664651 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 23:26:55.664699 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 23:26:55.664747 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 23:26:55.664795 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:26:55.664844 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 23:26:55.664894 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 23:26:55.664943 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:26:55.664998 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 23:26:55.665046 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 23:26:55.665094 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:26:55.665145 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 23:26:55.665193 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 23:26:55.665241 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:26:55.665289 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 23:26:55.665338 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 23:26:55.665385 kernel: pci 0000:00:15.6: bridge 
window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:26:55.665434 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 23:26:55.665483 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 23:26:55.665531 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:26:55.665587 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 14 23:26:55.665639 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 14 23:26:55.665689 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 14 23:26:55.665738 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 14 23:26:55.665787 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 14 23:26:55.665836 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 14 23:26:55.665888 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 14 23:26:55.665938 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 14 23:26:55.665999 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 14 23:26:55.666055 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 23:26:55.666103 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 23:26:55.666151 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 23:26:55.666199 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 23:26:55.666250 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 23:26:55.666298 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 23:26:55.666346 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:26:55.666395 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 23:26:55.666442 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 23:26:55.666499 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 23:26:55.666548 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:26:55.666598 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 23:26:55.666649 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 23:26:55.666707 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:26:55.666758 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 23:26:55.666806 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 23:26:55.667123 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:26:55.667177 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 23:26:55.667227 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 23:26:55.667275 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:26:55.667325 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 23:26:55.667373 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 23:26:55.667420 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:26:55.667468 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 23:26:55.667515 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 23:26:55.667562 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:26:55.667609 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 23:26:55.667657 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 23:26:55.667706 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 23:26:55.667754 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:26:55.667803 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 23:26:55.667851 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 23:26:55.667898 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 23:26:55.667945 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:26:55.668003 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 23:26:55.668053 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 23:26:55.668101 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 23:26:55.668149 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:26:55.668198 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 23:26:55.668246 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 23:26:55.668293 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:26:55.668340 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 23:26:55.668388 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 23:26:55.668437 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:26:55.668486 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 14 23:26:55.668534 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 23:26:55.668582 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 23:26:55.668630 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 23:26:55.668677 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 23:26:55.668724 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:26:55.668772 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 23:26:55.668821 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 23:26:55.668870 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:26:55.668918 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 23:26:55.668984 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 23:26:55.669034 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 23:26:55.669086 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:26:55.669135 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 23:26:55.669182 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 23:26:55.669232 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 23:26:55.669279 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:26:55.669328 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 23:26:55.669375 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 23:26:55.669423 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:26:55.669472 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 23:26:55.669519 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 23:26:55.669565 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:26:55.669616 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 14 
23:26:55.669664 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 23:26:55.669712 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 23:26:55.669761 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 23:26:55.669814 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 23:26:55.669875 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:26:55.669937 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 23:26:55.670109 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 23:26:55.670162 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:26:55.670210 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 23:26:55.670258 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 23:26:55.670305 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:26:55.670314 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 14 23:26:55.670320 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Jul 14 23:26:55.670326 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 14 23:26:55.670332 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 14 23:26:55.670339 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 14 23:26:55.670345 kernel: iommu: Default domain type: Translated Jul 14 23:26:55.670351 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 14 23:26:55.670398 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 14 23:26:55.670445 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 14 23:26:55.670492 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 14 23:26:55.670500 kernel: vgaarb: loaded Jul 14 23:26:55.670506 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 14 23:26:55.670512 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 14 23:26:55.670520 kernel: PTP clock support registered Jul 14 23:26:55.670525 kernel: PCI: Using ACPI for IRQ routing Jul 14 23:26:55.670531 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 14 23:26:55.670537 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 14 23:26:55.670543 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 14 23:26:55.670548 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 14 23:26:55.670554 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 14 23:26:55.670560 kernel: clocksource: Switched to clocksource tsc-early Jul 14 23:26:55.670566 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 23:26:55.670573 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 23:26:55.670579 kernel: pnp: PnP ACPI init Jul 14 23:26:55.670628 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 14 23:26:55.670673 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 14 23:26:55.670747 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 14 23:26:55.670808 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 14 23:26:55.670856 kernel: pnp 00:06: [dma 2] Jul 14 23:26:55.670907 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 14 23:26:55.670958 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 14 23:26:55.671004 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 14 23:26:55.671012 kernel: pnp: PnP ACPI: found 8 devices Jul 14 23:26:55.671018 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 14 23:26:55.671024 kernel: NET: Registered PF_INET protocol family Jul 14 23:26:55.671030 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 23:26:55.671044 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) 
Jul 14 23:26:55.671051 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 23:26:55.671057 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 14 23:26:55.671063 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 14 23:26:55.671068 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 14 23:26:55.671074 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 23:26:55.671080 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 14 23:26:55.671086 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 23:26:55.671092 kernel: NET: Registered PF_XDP protocol family Jul 14 23:26:55.671145 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 14 23:26:55.671195 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 14 23:26:55.671246 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 14 23:26:55.671294 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 14 23:26:55.671343 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 14 23:26:55.671391 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 14 23:26:55.671461 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 14 23:26:55.671533 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 14 23:26:55.671584 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 14 23:26:55.671637 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 14 23:26:55.671686 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 
1000 Jul 14 23:26:55.671734 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 14 23:26:55.671784 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 14 23:26:55.671832 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 14 23:26:55.671880 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 14 23:26:55.671928 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 14 23:26:55.672024 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 14 23:26:55.672079 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 14 23:26:55.672129 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 14 23:26:55.672177 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 14 23:26:55.672225 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 14 23:26:55.672273 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 14 23:26:55.672321 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 14 23:26:55.672369 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:26:55.672419 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:26:55.672467 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.672514 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.672562 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.672609 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.672656 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.672703 kernel: pci 
0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.672753 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.672801 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.672849 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.672896 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.672944 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.672999 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673047 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673095 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673153 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673219 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673268 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673329 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673377 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673425 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673472 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673519 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673567 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673616 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673664 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673712 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.673759 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673807 kernel: pci 0000:00:17.6: BAR 13: failed to assign 
[io size 0x1000] Jul 14 23:26:55.673855 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.673902 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.679968 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680079 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680154 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680208 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680258 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680306 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680356 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680403 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680452 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680503 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680551 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680599 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680647 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680694 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680742 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680789 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680836 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.680883 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.680933 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.688254 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.688325 
kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.688382 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.688434 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.688483 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.688533 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.688582 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.688632 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.688684 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.688734 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.688781 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.688830 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.688878 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.688928 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.690018 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.690079 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.690131 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.690183 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.690234 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.690284 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.691018 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.691078 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.691130 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.692232 kernel: pci 0000:00:16.3: BAR 13: no 
space for [io size 0x1000] Jul 14 23:26:55.692294 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.692348 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.692405 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.692461 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.692511 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.692561 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.692610 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.692660 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.692708 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.692758 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 14 23:26:55.692806 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 14 23:26:55.692856 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 14 23:26:55.692906 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 14 23:26:55.693622 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 14 23:26:55.693684 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 14 23:26:55.693735 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:26:55.693791 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 14 23:26:55.693843 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 14 23:26:55.693892 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 14 23:26:55.693941 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 14 23:26:55.694009 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:26:55.694084 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 14 23:26:55.694139 kernel: pci 0000:00:15.1: bridge 
window [io 0x8000-0x8fff] Jul 14 23:26:55.694197 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 14 23:26:55.694248 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 23:26:55.694298 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 14 23:26:55.694346 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 14 23:26:55.694392 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 14 23:26:55.694443 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:26:55.694491 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 14 23:26:55.694543 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 14 23:26:55.694591 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:26:55.694639 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 14 23:26:55.694688 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 14 23:26:55.694735 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:26:55.694785 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 14 23:26:55.694835 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 14 23:26:55.694883 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:26:55.694931 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 14 23:26:55.695015 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 14 23:26:55.695070 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:26:55.695118 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 14 23:26:55.695166 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 14 23:26:55.695214 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:26:55.695266 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 14 
23:26:55.695318 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 14 23:26:55.695366 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 14 23:26:55.695415 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 14 23:26:55.695463 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:26:55.695514 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 14 23:26:55.695562 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 14 23:26:55.695610 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 14 23:26:55.695657 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:26:55.695707 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 14 23:26:55.695757 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 14 23:26:55.695806 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 14 23:26:55.695854 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:26:55.695903 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 14 23:26:55.695971 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 14 23:26:55.696022 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:26:55.696078 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 14 23:26:55.696131 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 14 23:26:55.696189 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:26:55.696241 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 14 23:26:55.696292 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 14 23:26:55.696341 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:26:55.696390 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 14 23:26:55.696443 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 14 
23:26:55.696491 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:26:55.696540 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 14 23:26:55.696588 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 14 23:26:55.696636 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:26:55.696686 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 14 23:26:55.696736 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 14 23:26:55.696784 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 14 23:26:55.696832 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:26:55.696881 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 14 23:26:55.696929 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 14 23:26:55.696989 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 14 23:26:55.697039 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:26:55.697089 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 14 23:26:55.697138 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 14 23:26:55.697188 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 14 23:26:55.697238 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:26:55.697302 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 14 23:26:55.697352 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 14 23:26:55.697401 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:26:55.697693 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 14 23:26:55.697745 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 14 23:26:55.697796 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:26:55.697852 kernel: pci 0000:00:17.5: PCI bridge to [bus 
18] Jul 14 23:26:55.697902 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 14 23:26:55.698200 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 23:26:55.698272 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 14 23:26:55.698325 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 14 23:26:55.698382 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:26:55.698438 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 14 23:26:55.698487 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 14 23:26:55.698535 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:26:55.698585 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 14 23:26:55.698633 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 14 23:26:55.698682 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 14 23:26:55.698732 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:26:55.698782 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 14 23:26:55.698829 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 14 23:26:55.698877 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 14 23:26:55.698925 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:26:55.699207 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 14 23:26:55.699264 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 14 23:26:55.699316 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:26:55.699370 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 14 23:26:55.699432 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 14 23:26:55.699487 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:26:55.699549 kernel: pci 0000:00:18.4: 
PCI bridge to [bus 1f] Jul 14 23:26:55.699600 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 14 23:26:55.699649 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 23:26:55.699699 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 14 23:26:55.699748 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 14 23:26:55.699796 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:26:55.699845 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 14 23:26:55.699894 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 14 23:26:55.699945 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:26:55.700284 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 14 23:26:55.700338 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 14 23:26:55.700387 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:26:55.700697 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 23:26:55.700754 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 23:26:55.701031 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 14 23:26:55.701088 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 14 23:26:55.701135 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 14 23:26:55.701183 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 14 23:26:55.701232 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 14 23:26:55.701277 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 14 23:26:55.701322 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 14 23:26:55.701365 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 14 23:26:55.701409 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff 
window] Jul 14 23:26:55.701455 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 14 23:26:55.701499 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 14 23:26:55.701549 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 14 23:26:55.701595 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 14 23:26:55.701639 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 14 23:26:55.701689 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 14 23:26:55.701735 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 14 23:26:55.701782 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 14 23:26:55.701832 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 14 23:26:55.701878 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 14 23:26:55.701921 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 14 23:26:55.703550 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 14 23:26:55.703604 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 14 23:26:55.703656 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 14 23:26:55.703706 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 14 23:26:55.703755 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 14 23:26:55.703800 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 14 23:26:55.703853 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 14 23:26:55.703898 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 14 23:26:55.703953 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 14 23:26:55.704018 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 14 23:26:55.704070 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 14 23:26:55.704119 
kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 14 23:26:55.704175 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 14 23:26:55.704226 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 14 23:26:55.704271 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 14 23:26:55.704319 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 14 23:26:55.704376 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 14 23:26:55.704424 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 14 23:26:55.704469 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 14 23:26:55.704519 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 14 23:26:55.704564 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 14 23:26:55.704614 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 14 23:26:55.704662 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 14 23:26:55.704713 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 14 23:26:55.704759 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 14 23:26:55.704808 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 14 23:26:55.704854 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 14 23:26:55.704903 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 14 23:26:55.704961 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 14 23:26:55.705016 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 14 23:26:55.705067 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 14 23:26:55.705112 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 14 23:26:55.705162 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 14 23:26:55.705217 kernel: pci_bus 
0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 14 23:26:55.705264 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 14 23:26:55.705316 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 14 23:26:55.705597 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 14 23:26:55.705652 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 14 23:26:55.705707 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 14 23:26:55.705754 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 14 23:26:55.705809 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 14 23:26:55.705857 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 14 23:26:55.705908 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 14 23:26:55.706262 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 14 23:26:55.706324 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 14 23:26:55.706656 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 14 23:26:55.706723 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 14 23:26:55.706774 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 14 23:26:55.706825 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 14 23:26:55.706871 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 14 23:26:55.706916 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 14 23:26:55.707039 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 14 23:26:55.707086 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 14 23:26:55.707131 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 14 23:26:55.707182 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 14 23:26:55.707228 kernel: pci_bus 0000:1d: 
resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 14 23:26:55.707279 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 14 23:26:55.707329 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 14 23:26:55.707379 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 14 23:26:55.707430 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 14 23:26:55.707483 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 14 23:26:55.707594 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 14 23:26:55.707652 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 14 23:26:55.707698 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 14 23:26:55.707752 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 14 23:26:55.707797 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 14 23:26:55.707856 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 14 23:26:55.707866 kernel: PCI: CLS 32 bytes, default 64 Jul 14 23:26:55.707873 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 14 23:26:55.707879 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 14 23:26:55.707885 kernel: clocksource: Switched to clocksource tsc Jul 14 23:26:55.707892 kernel: Initialise system trusted keyrings Jul 14 23:26:55.707898 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 14 23:26:55.707905 kernel: Key type asymmetric registered Jul 14 23:26:55.707913 kernel: Asymmetric key parser 'x509' registered Jul 14 23:26:55.707919 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 14 23:26:55.707925 kernel: io scheduler mq-deadline registered Jul 14 23:26:55.707932 kernel: io scheduler kyber registered Jul 14 23:26:55.707938 kernel: io scheduler bfq 
registered Jul 14 23:26:55.708263 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 14 23:26:55.708320 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.708605 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 14 23:26:55.708663 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.708723 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 14 23:26:55.708777 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.708829 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 14 23:26:55.708889 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709189 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 14 23:26:55.709256 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709313 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 14 23:26:55.709364 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709415 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 14 23:26:55.709466 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709517 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 14 23:26:55.709569 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709620 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 14 23:26:55.709669 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709720 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 14 23:26:55.709772 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709823 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 14 23:26:55.709872 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.709926 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 14 23:26:55.709983 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710034 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 14 23:26:55.710083 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710133 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 14 23:26:55.710185 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710235 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 14 23:26:55.710284 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710334 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 14 23:26:55.710384 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- 
PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710436 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 14 23:26:55.710485 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710536 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 14 23:26:55.710585 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710636 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 14 23:26:55.710694 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.710746 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 14 23:26:55.711072 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.711133 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 14 23:26:55.711185 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.711260 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 14 23:26:55.711530 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.711799 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 14 23:26:55.711861 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.711913 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 14 23:26:55.712001 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.712281 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 14 23:26:55.712337 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.712390 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 14 23:26:55.712726 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.712783 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 14 23:26:55.712836 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.713210 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 14 23:26:55.713265 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.713321 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 14 23:26:55.713372 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.713422 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 14 23:26:55.713472 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.713522 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 14 23:26:55.713571 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.713624 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 14 23:26:55.713674 kernel: pcieport 
0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 14 23:26:55.713683 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 14 23:26:55.713689 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 23:26:55.713696 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 14 23:26:55.713702 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 14 23:26:55.713708 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 14 23:26:55.713716 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 14 23:26:55.713766 kernel: rtc_cmos 00:01: registered as rtc0 Jul 14 23:26:55.713812 kernel: rtc_cmos 00:01: setting system clock to 2025-07-14T23:26:55 UTC (1752535615) Jul 14 23:26:55.713856 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 14 23:26:55.713865 kernel: intel_pstate: CPU model not supported Jul 14 23:26:55.713871 kernel: NET: Registered PF_INET6 protocol family Jul 14 23:26:55.713878 kernel: Segment Routing with IPv6 Jul 14 23:26:55.713884 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 23:26:55.713892 kernel: NET: Registered PF_PACKET protocol family Jul 14 23:26:55.713898 kernel: Key type dns_resolver registered Jul 14 23:26:55.713904 kernel: IPI shorthand broadcast: enabled Jul 14 23:26:55.714133 kernel: sched_clock: Marking stable (885200413, 222191792)->(1173656142, -66263937) Jul 14 23:26:55.714144 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 14 23:26:55.714150 kernel: registered taskstats version 1 Jul 14 23:26:55.714156 kernel: Loading compiled-in X.509 certificates Jul 14 23:26:55.714163 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 14a6940dcbc00bab0c83ae71c4abeb315720716d' Jul 14 23:26:55.714169 kernel: Key type .fscrypt registered Jul 14 23:26:55.714178 kernel: Key type fscrypt-provisioning 
registered Jul 14 23:26:55.714184 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 14 23:26:55.714190 kernel: ima: Allocated hash algorithm: sha1 Jul 14 23:26:55.714196 kernel: ima: No architecture policies found Jul 14 23:26:55.714202 kernel: clk: Disabling unused clocks Jul 14 23:26:55.714209 kernel: Freeing unused kernel image (initmem) memory: 47476K Jul 14 23:26:55.714215 kernel: Write protecting the kernel read-only data: 28672k Jul 14 23:26:55.714221 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 14 23:26:55.714228 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Jul 14 23:26:55.714235 kernel: Run /init as init process Jul 14 23:26:55.714241 kernel: with arguments: Jul 14 23:26:55.714247 kernel: /init Jul 14 23:26:55.714253 kernel: with environment: Jul 14 23:26:55.714259 kernel: HOME=/ Jul 14 23:26:55.714265 kernel: TERM=linux Jul 14 23:26:55.714271 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 23:26:55.714279 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 23:26:55.714289 systemd[1]: Detected virtualization vmware. Jul 14 23:26:55.714296 systemd[1]: Detected architecture x86-64. Jul 14 23:26:55.714302 systemd[1]: Running in initrd. Jul 14 23:26:55.714308 systemd[1]: No hostname configured, using default hostname. Jul 14 23:26:55.714315 systemd[1]: Hostname set to . Jul 14 23:26:55.714322 systemd[1]: Initializing machine ID from random generator. Jul 14 23:26:55.714335 systemd[1]: Queued start job for default target initrd.target. Jul 14 23:26:55.714342 systemd[1]: Started systemd-ask-password-console.path. Jul 14 23:26:55.714350 systemd[1]: Reached target cryptsetup.target. 
Jul 14 23:26:55.714356 systemd[1]: Reached target paths.target. Jul 14 23:26:55.714363 systemd[1]: Reached target slices.target. Jul 14 23:26:55.714369 systemd[1]: Reached target swap.target. Jul 14 23:26:55.714375 systemd[1]: Reached target timers.target. Jul 14 23:26:55.714382 systemd[1]: Listening on iscsid.socket. Jul 14 23:26:55.714388 systemd[1]: Listening on iscsiuio.socket. Jul 14 23:26:55.714396 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 23:26:55.714403 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 23:26:55.714409 systemd[1]: Listening on systemd-journald.socket. Jul 14 23:26:55.714416 systemd[1]: Listening on systemd-networkd.socket. Jul 14 23:26:55.714423 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 23:26:55.714429 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 23:26:55.714436 systemd[1]: Reached target sockets.target. Jul 14 23:26:55.714442 systemd[1]: Starting kmod-static-nodes.service... Jul 14 23:26:55.714449 systemd[1]: Finished network-cleanup.service. Jul 14 23:26:55.714456 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 23:26:55.714463 systemd[1]: Starting systemd-journald.service... Jul 14 23:26:55.714469 systemd[1]: Starting systemd-modules-load.service... Jul 14 23:26:55.714475 systemd[1]: Starting systemd-resolved.service... Jul 14 23:26:55.714482 systemd[1]: Starting systemd-vconsole-setup.service... Jul 14 23:26:55.714489 systemd[1]: Finished kmod-static-nodes.service. Jul 14 23:26:55.714495 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 23:26:55.714502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 23:26:55.714508 systemd[1]: Finished systemd-vconsole-setup.service. Jul 14 23:26:55.714516 kernel: audit: type=1130 audit(1752535615.653:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:26:55.714523 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 23:26:55.714530 kernel: audit: type=1130 audit(1752535615.662:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.714536 systemd[1]: Starting dracut-cmdline-ask.service... Jul 14 23:26:55.714542 systemd[1]: Finished dracut-cmdline-ask.service. Jul 14 23:26:55.714549 kernel: audit: type=1130 audit(1752535615.678:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.714556 systemd[1]: Starting dracut-cmdline.service... Jul 14 23:26:55.714562 systemd[1]: Started systemd-resolved.service. Jul 14 23:26:55.714570 systemd[1]: Reached target nss-lookup.target. Jul 14 23:26:55.714577 kernel: audit: type=1130 audit(1752535615.693:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.714583 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 23:26:55.714593 systemd-journald[216]: Journal started Jul 14 23:26:55.714628 systemd-journald[216]: Runtime Journal (/run/log/journal/717697f9103646a4abb16290c7bee000) is 4.8M, max 38.8M, 34.0M free. Jul 14 23:26:55.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:26:55.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.652209 systemd-modules-load[217]: Inserted module 'overlay' Jul 14 23:26:55.717357 systemd[1]: Started systemd-journald.service. Jul 14 23:26:55.687374 systemd-resolved[218]: Positive Trust Anchors: Jul 14 23:26:55.720161 kernel: audit: type=1130 audit(1752535615.715:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.687379 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 23:26:55.721082 kernel: Bridge firewalling registered Jul 14 23:26:55.687405 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 23:26:55.691498 systemd-resolved[218]: Defaulting to hostname 'linux'. 
Jul 14 23:26:55.721220 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 14 23:26:55.722491 dracut-cmdline[232]: dracut-dracut-053 Jul 14 23:26:55.722491 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 14 23:26:55.722491 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=d9618a329f89744ce954b0fa1b02ce8164745af7389f9de9c3421ad2087e0dba Jul 14 23:26:55.736962 kernel: SCSI subsystem initialized Jul 14 23:26:55.745302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 23:26:55.745330 kernel: device-mapper: uevent: version 1.0.3 Jul 14 23:26:55.745339 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 14 23:26:55.751010 systemd-modules-load[217]: Inserted module 'dm_multipath' Jul 14 23:26:55.751437 systemd[1]: Finished systemd-modules-load.service. Jul 14 23:26:55.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.751972 systemd[1]: Starting systemd-sysctl.service... Jul 14 23:26:55.755250 kernel: audit: type=1130 audit(1752535615.749:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.759970 kernel: Loading iSCSI transport class v2.0-870. Jul 14 23:26:55.760749 systemd[1]: Finished systemd-sysctl.service. 
Jul 14 23:26:55.763277 kernel: audit: type=1130 audit(1752535615.758:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.773964 kernel: iscsi: registered transport (tcp) Jul 14 23:26:55.791375 kernel: iscsi: registered transport (qla4xxx) Jul 14 23:26:55.791416 kernel: QLogic iSCSI HBA Driver Jul 14 23:26:55.810968 kernel: audit: type=1130 audit(1752535615.805:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:55.807592 systemd[1]: Finished dracut-cmdline.service. Jul 14 23:26:55.808202 systemd[1]: Starting dracut-pre-udev.service... 
Jul 14 23:26:55.845973 kernel: raid6: avx2x4 gen() 46699 MB/s Jul 14 23:26:55.862968 kernel: raid6: avx2x4 xor() 19327 MB/s Jul 14 23:26:55.879959 kernel: raid6: avx2x2 gen() 53302 MB/s Jul 14 23:26:55.896971 kernel: raid6: avx2x2 xor() 31834 MB/s Jul 14 23:26:55.913980 kernel: raid6: avx2x1 gen() 43681 MB/s Jul 14 23:26:55.931024 kernel: raid6: avx2x1 xor() 24101 MB/s Jul 14 23:26:55.947965 kernel: raid6: sse2x4 gen() 19801 MB/s Jul 14 23:26:55.964963 kernel: raid6: sse2x4 xor() 11852 MB/s Jul 14 23:26:55.981966 kernel: raid6: sse2x2 gen() 21319 MB/s Jul 14 23:26:55.998963 kernel: raid6: sse2x2 xor() 13298 MB/s Jul 14 23:26:56.015982 kernel: raid6: sse2x1 gen() 16511 MB/s Jul 14 23:26:56.033432 kernel: raid6: sse2x1 xor() 7733 MB/s Jul 14 23:26:56.033478 kernel: raid6: using algorithm avx2x2 gen() 53302 MB/s Jul 14 23:26:56.033495 kernel: raid6: .... xor() 31834 MB/s, rmw enabled Jul 14 23:26:56.034148 kernel: raid6: using avx2x2 recovery algorithm Jul 14 23:26:56.044970 kernel: xor: automatically using best checksumming function avx Jul 14 23:26:56.106963 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 14 23:26:56.112370 systemd[1]: Finished dracut-pre-udev.service. Jul 14 23:26:56.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:56.113000 audit: BPF prog-id=7 op=LOAD Jul 14 23:26:56.113000 audit: BPF prog-id=8 op=LOAD Jul 14 23:26:56.115969 kernel: audit: type=1130 audit(1752535616.110:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:56.115650 systemd[1]: Starting systemd-udevd.service... Jul 14 23:26:56.125968 systemd-udevd[414]: Using default interface naming scheme 'v252'. Jul 14 23:26:56.129580 systemd[1]: Started systemd-udevd.service. 
Jul 14 23:26:56.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:56.132421 systemd[1]: Starting dracut-pre-trigger.service... Jul 14 23:26:56.140668 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Jul 14 23:26:56.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:56.156894 systemd[1]: Finished dracut-pre-trigger.service. Jul 14 23:26:56.157433 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 23:26:56.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:56.222294 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 14 23:26:56.285761 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Jul 14 23:26:56.285807 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 14 23:26:56.300220 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 14 23:26:56.302478 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 14 23:26:56.302509 kernel: vmw_pvscsi: using 64bit dma Jul 14 23:26:56.302536 kernel: vmw_pvscsi: max_id: 16 Jul 14 23:26:56.302563 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 14 23:26:56.302579 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 14 23:26:56.302590 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 14 23:26:56.302606 kernel: vmw_pvscsi: using MSI-X Jul 14 23:26:56.320265 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 14 23:26:56.320480 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 14 23:26:56.320634 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 14 23:26:56.323242 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 14 23:26:56.323956 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 23:26:56.331363 kernel: libata version 3.00 loaded. Jul 14 23:26:56.331386 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 14 23:26:56.332062 kernel: AES CTR mode by8 optimization enabled Jul 14 23:26:56.337960 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 14 23:26:56.348019 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 14 23:26:56.391269 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 14 23:26:56.391407 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 14 23:26:56.391527 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 14 23:26:56.391622 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 14 23:26:56.391725 kernel: scsi host1: ata_piix Jul 14 23:26:56.391837 kernel: scsi host2: ata_piix Jul 14 23:26:56.391922 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 14 23:26:56.391934 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 14 23:26:56.391944 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:26:56.391971 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 14 23:26:56.515967 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 14 23:26:56.523274 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 14 23:26:56.549664 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 14 23:26:56.565684 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 14 23:26:56.565697 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (466) Jul 14 23:26:56.565705 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 14 23:26:56.560209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 14 23:26:56.562545 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 14 23:26:56.572014 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 23:26:56.575249 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Jul 14 23:26:56.575493 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 14 23:26:56.576198 systemd[1]: Starting disk-uuid.service... Jul 14 23:26:56.666972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:26:56.672967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:26:57.683310 disk-uuid[549]: The operation has completed successfully. Jul 14 23:26:57.686371 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 14 23:26:57.718344 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 23:26:57.718665 systemd[1]: Finished disk-uuid.service. Jul 14 23:26:57.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:57.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:57.719460 systemd[1]: Starting verity-setup.service... Jul 14 23:26:57.743967 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 14 23:26:57.823260 systemd[1]: Found device dev-mapper-usr.device. Jul 14 23:26:57.824785 systemd[1]: Mounting sysusr-usr.mount... Jul 14 23:26:57.826106 systemd[1]: Finished verity-setup.service. Jul 14 23:26:57.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:57.963648 systemd[1]: Mounted sysusr-usr.mount. Jul 14 23:26:57.963957 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 14 23:26:57.964264 systemd[1]: Starting afterburn-network-kargs.service... Jul 14 23:26:57.964725 systemd[1]: Starting ignition-setup.service... 
Jul 14 23:26:58.002424 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 23:26:58.002456 kernel: BTRFS info (device sda6): using free space tree Jul 14 23:26:58.002466 kernel: BTRFS info (device sda6): has skinny extents Jul 14 23:26:58.006970 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 14 23:26:58.013902 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 23:26:58.023284 systemd[1]: Finished ignition-setup.service. Jul 14 23:26:58.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.023993 systemd[1]: Starting ignition-fetch-offline.service... Jul 14 23:26:58.172139 systemd[1]: Finished afterburn-network-kargs.service. Jul 14 23:26:58.172748 systemd[1]: Starting parse-ip-for-networkd.service... Jul 14 23:26:58.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.224878 systemd[1]: Finished parse-ip-for-networkd.service. Jul 14 23:26:58.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.223000 audit: BPF prog-id=9 op=LOAD Jul 14 23:26:58.225766 systemd[1]: Starting systemd-networkd.service... 
Jul 14 23:26:58.239944 systemd-networkd[733]: lo: Link UP Jul 14 23:26:58.239968 systemd-networkd[733]: lo: Gained carrier Jul 14 23:26:58.240269 systemd-networkd[733]: Enumeration completed Jul 14 23:26:58.244320 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 23:26:58.244450 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 23:26:58.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.240481 systemd-networkd[733]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 14 23:26:58.240513 systemd[1]: Started systemd-networkd.service. Jul 14 23:26:58.240659 systemd[1]: Reached target network.target. Jul 14 23:26:58.241158 systemd[1]: Starting iscsiuio.service... Jul 14 23:26:58.244700 systemd-networkd[733]: ens192: Link UP Jul 14 23:26:58.244703 systemd-networkd[733]: ens192: Gained carrier Jul 14 23:26:58.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.245494 systemd[1]: Started iscsiuio.service. Jul 14 23:26:58.246077 systemd[1]: Starting iscsid.service... Jul 14 23:26:58.247989 iscsid[738]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 14 23:26:58.247989 iscsid[738]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Jul 14 23:26:58.247989 iscsid[738]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 14 23:26:58.247989 iscsid[738]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 14 23:26:58.247989 iscsid[738]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 14 23:26:58.248937 iscsid[738]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 14 23:26:58.248955 systemd[1]: Started iscsid.service. Jul 14 23:26:58.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.249606 systemd[1]: Starting dracut-initqueue.service... Jul 14 23:26:58.256222 systemd[1]: Finished dracut-initqueue.service. Jul 14 23:26:58.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.256381 systemd[1]: Reached target remote-fs-pre.target. Jul 14 23:26:58.256499 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 23:26:58.256667 systemd[1]: Reached target remote-fs.target. Jul 14 23:26:58.257227 systemd[1]: Starting dracut-pre-mount.service... Jul 14 23:26:58.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.262261 systemd[1]: Finished dracut-pre-mount.service. 
Jul 14 23:26:58.382403 ignition[604]: Ignition 2.14.0 Jul 14 23:26:58.382410 ignition[604]: Stage: fetch-offline Jul 14 23:26:58.382443 ignition[604]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 14 23:26:58.382458 ignition[604]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 14 23:26:58.385855 ignition[604]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 14 23:26:58.386117 ignition[604]: parsed url from cmdline: "" Jul 14 23:26:58.386159 ignition[604]: no config URL provided Jul 14 23:26:58.386287 ignition[604]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 23:26:58.386427 ignition[604]: no config at "/usr/lib/ignition/user.ign" Jul 14 23:26:58.386936 ignition[604]: config successfully fetched Jul 14 23:26:58.387020 ignition[604]: parsing config with SHA512: 59f71493524b342e675923a4a02eca171b014ee922e65f03b07430e7207fb7192fca58c656e5b5d8ee6cc8b67a05caad6ada63e1f4f4f663a3c42482ef6f267c Jul 14 23:26:58.398225 unknown[604]: fetched base config from "system" Jul 14 23:26:58.398434 unknown[604]: fetched user config from "vmware" Jul 14 23:26:58.398982 ignition[604]: fetch-offline: fetch-offline passed Jul 14 23:26:58.399160 ignition[604]: Ignition finished successfully Jul 14 23:26:58.399862 systemd[1]: Finished ignition-fetch-offline.service. Jul 14 23:26:58.400022 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 23:26:58.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:26:58.400496 systemd[1]: Starting ignition-kargs.service... 
Jul 14 23:26:58.406287 ignition[752]: Ignition 2.14.0
Jul 14 23:26:58.406299 ignition[752]: Stage: kargs
Jul 14 23:26:58.406376 ignition[752]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 14 23:26:58.406387 ignition[752]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 14 23:26:58.407671 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 23:26:58.409214 ignition[752]: kargs: kargs passed
Jul 14 23:26:58.409242 ignition[752]: Ignition finished successfully
Jul 14 23:26:58.410343 systemd[1]: Finished ignition-kargs.service.
Jul 14 23:26:58.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:26:58.411021 systemd[1]: Starting ignition-disks.service...
Jul 14 23:26:58.415872 ignition[757]: Ignition 2.14.0
Jul 14 23:26:58.416149 ignition[757]: Stage: disks
Jul 14 23:26:58.416324 ignition[757]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 14 23:26:58.416481 ignition[757]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 14 23:26:58.417959 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 23:26:58.419679 ignition[757]: disks: disks passed
Jul 14 23:26:58.419720 ignition[757]: Ignition finished successfully
Jul 14 23:26:58.420459 systemd[1]: Finished ignition-disks.service.
Jul 14 23:26:58.420644 systemd[1]: Reached target initrd-root-device.target.
Jul 14 23:26:58.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:26:58.420753 systemd[1]: Reached target local-fs-pre.target.
Jul 14 23:26:58.420911 systemd[1]: Reached target local-fs.target.
Jul 14 23:26:58.421073 systemd[1]: Reached target sysinit.target.
Jul 14 23:26:58.421226 systemd[1]: Reached target basic.target.
Jul 14 23:26:58.421913 systemd[1]: Starting systemd-fsck-root.service...
Jul 14 23:26:58.482477 systemd-fsck[765]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks
Jul 14 23:26:58.484043 systemd[1]: Finished systemd-fsck-root.service.
Jul 14 23:26:58.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:26:58.484688 systemd[1]: Mounting sysroot.mount...
Jul 14 23:26:58.492812 systemd[1]: Mounted sysroot.mount.
Jul 14 23:26:58.493032 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 14 23:26:58.492971 systemd[1]: Reached target initrd-root-fs.target.
Jul 14 23:26:58.493854 systemd[1]: Mounting sysroot-usr.mount...
Jul 14 23:26:58.494326 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 14 23:26:58.494358 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 23:26:58.494374 systemd[1]: Reached target ignition-diskful.target.
Jul 14 23:26:58.496305 systemd[1]: Mounted sysroot-usr.mount.
Jul 14 23:26:58.497005 systemd[1]: Starting initrd-setup-root.service...
Jul 14 23:26:58.500176 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 23:26:58.504269 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
Jul 14 23:26:58.506860 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 23:26:58.515427 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 23:26:58.629058 systemd[1]: Finished initrd-setup-root.service.
Jul 14 23:26:58.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:26:58.629644 systemd[1]: Starting ignition-mount.service...
Jul 14 23:26:58.630108 systemd[1]: Starting sysroot-boot.service...
Jul 14 23:26:58.633505 bash[816]: umount: /sysroot/usr/share/oem: not mounted.
Jul 14 23:26:58.639133 ignition[817]: INFO : Ignition 2.14.0
Jul 14 23:26:58.639397 ignition[817]: INFO : Stage: mount
Jul 14 23:26:58.639578 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 14 23:26:58.639729 ignition[817]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 14 23:26:58.641348 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 23:26:58.642922 ignition[817]: INFO : mount: mount passed
Jul 14 23:26:58.643077 ignition[817]: INFO : Ignition finished successfully
Jul 14 23:26:58.643698 systemd[1]: Finished ignition-mount.service.
Jul 14 23:26:58.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:26:58.681822 systemd[1]: Finished sysroot-boot.service.
Jul 14 23:26:58.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:26:58.910935 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 14 23:26:58.928974 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (826)
Jul 14 23:26:58.931747 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 23:26:58.931775 kernel: BTRFS info (device sda6): using free space tree
Jul 14 23:26:58.931792 kernel: BTRFS info (device sda6): has skinny extents
Jul 14 23:26:58.935971 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 14 23:26:58.938101 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 14 23:26:58.938753 systemd[1]: Starting ignition-files.service...
Jul 14 23:26:58.949737 ignition[846]: INFO : Ignition 2.14.0
Jul 14 23:26:58.949737 ignition[846]: INFO : Stage: files
Jul 14 23:26:58.950166 ignition[846]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 14 23:26:58.950166 ignition[846]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 14 23:26:58.951397 ignition[846]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 23:26:58.955180 ignition[846]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 23:26:58.959400 ignition[846]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 23:26:58.959400 ignition[846]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 23:26:58.981867 ignition[846]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 23:26:58.982086 ignition[846]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 23:26:58.990684 unknown[846]: wrote ssh authorized keys file for user: core
Jul 14 23:26:58.990913 ignition[846]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 23:26:58.996489 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 14 23:26:58.996691 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 14 23:26:59.060085 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 23:26:59.195758 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 14 23:26:59.196242 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 23:26:59.196467 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 14 23:26:59.767212 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 23:26:59.933321 systemd-networkd[733]: ens192: Gained IPv6LL
Jul 14 23:26:59.980187 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 14 23:26:59.980489 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 23:26:59.980856 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 23:26:59.981115 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 23:26:59.981428 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 23:26:59.981657 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 23:26:59.981940 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 23:26:59.982183 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 23:26:59.982458 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 23:26:59.989028 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 23:26:59.989297 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 23:26:59.989527 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 14 23:26:59.989833 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 14 23:26:59.994116 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Jul 14 23:26:59.994495 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Jul 14 23:26:59.998246 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem883345193"
Jul 14 23:26:59.998519 ignition[846]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem883345193": device or resource busy
Jul 14 23:26:59.998767 ignition[846]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem883345193", trying btrfs: device or resource busy
Jul 14 23:26:59.999090 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem883345193"
Jul 14 23:27:00.001031 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem883345193"
Jul 14 23:27:00.010251 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem883345193"
Jul 14 23:27:00.011307 systemd[1]: mnt-oem883345193.mount: Deactivated successfully.
Jul 14 23:27:00.012400 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem883345193"
Jul 14 23:27:00.012796 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Jul 14 23:27:00.013026 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 14 23:27:00.013026 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 14 23:27:00.575353 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Jul 14 23:27:01.494488 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 14 23:27:01.497712 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: op(11): [started] processing unit "vmtoolsd.service"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Jul 14 23:27:01.497889 ignition[846]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(14): op(15): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(16): [started] setting preset to enabled for "vmtoolsd.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(17): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(18): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 23:27:01.499160 ignition[846]: INFO : files: op(18): op(19): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 23:27:01.624747 ignition[846]: INFO : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 23:27:01.624979 ignition[846]: INFO : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 23:27:01.624979 ignition[846]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 23:27:01.624979 ignition[846]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 23:27:01.624979 ignition[846]: INFO : files: files passed
Jul 14 23:27:01.625545 ignition[846]: INFO : Ignition finished successfully
Jul 14 23:27:01.629310 kernel: kauditd_printk_skb: 24 callbacks suppressed
Jul 14 23:27:01.629338 kernel: audit: type=1130 audit(1752535621.625:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.627123 systemd[1]: Finished ignition-files.service.
Jul 14 23:27:01.627776 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 14 23:27:01.630862 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 14 23:27:01.631376 systemd[1]: Starting ignition-quench.service...
Jul 14 23:27:01.634313 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 23:27:01.634383 systemd[1]: Finished ignition-quench.service.
Jul 14 23:27:01.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.635723 initrd-setup-root-after-ignition[872]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 23:27:01.641042 kernel: audit: type=1130 audit(1752535621.633:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.641061 kernel: audit: type=1131 audit(1752535621.633:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.641070 kernel: audit: type=1130 audit(1752535621.639:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.640800 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 14 23:27:01.641005 systemd[1]: Reached target ignition-complete.target.
Jul 14 23:27:01.644157 systemd[1]: Starting initrd-parse-etc.service...
Jul 14 23:27:01.654428 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 23:27:01.654651 systemd[1]: Finished initrd-parse-etc.service.
Jul 14 23:27:01.654933 systemd[1]: Reached target initrd-fs.target.
Jul 14 23:27:01.659766 kernel: audit: type=1130 audit(1752535621.652:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.659781 kernel: audit: type=1131 audit(1752535621.652:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.659829 systemd[1]: Reached target initrd.target.
Jul 14 23:27:01.659981 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 14 23:27:01.660417 systemd[1]: Starting dracut-pre-pivot.service...
Jul 14 23:27:01.666930 systemd[1]: Finished dracut-pre-pivot.service.
Jul 14 23:27:01.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.669969 kernel: audit: type=1130 audit(1752535621.665:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.669736 systemd[1]: Starting initrd-cleanup.service...
Jul 14 23:27:01.674927 systemd[1]: Stopped target nss-lookup.target.
Jul 14 23:27:01.675207 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 14 23:27:01.675478 systemd[1]: Stopped target timers.target.
Jul 14 23:27:01.675722 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 23:27:01.675914 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 14 23:27:01.676239 systemd[1]: Stopped target initrd.target.
Jul 14 23:27:01.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.678839 systemd[1]: Stopped target basic.target.
Jul 14 23:27:01.679030 kernel: audit: type=1131 audit(1752535621.674:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.679002 systemd[1]: Stopped target ignition-complete.target.
Jul 14 23:27:01.679170 systemd[1]: Stopped target ignition-diskful.target.
Jul 14 23:27:01.679341 systemd[1]: Stopped target initrd-root-device.target.
Jul 14 23:27:01.679514 systemd[1]: Stopped target remote-fs.target.
Jul 14 23:27:01.679687 systemd[1]: Stopped target remote-fs-pre.target.
Jul 14 23:27:01.679862 systemd[1]: Stopped target sysinit.target.
Jul 14 23:27:01.680030 systemd[1]: Stopped target local-fs.target.
Jul 14 23:27:01.680185 systemd[1]: Stopped target local-fs-pre.target.
Jul 14 23:27:01.680347 systemd[1]: Stopped target swap.target.
Jul 14 23:27:01.680516 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 23:27:01.680576 systemd[1]: Stopped dracut-pre-mount.service.
Jul 14 23:27:01.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.680830 systemd[1]: Stopped target cryptsetup.target.
Jul 14 23:27:01.683210 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 23:27:01.683264 systemd[1]: Stopped dracut-initqueue.service.
Jul 14 23:27:01.685821 kernel: audit: type=1131 audit(1752535621.678:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.685836 kernel: audit: type=1131 audit(1752535621.682:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.683534 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 23:27:01.683607 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 14 23:27:01.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.686072 systemd[1]: Stopped target paths.target.
Jul 14 23:27:01.686277 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 23:27:01.687975 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 14 23:27:01.688159 systemd[1]: Stopped target slices.target.
Jul 14 23:27:01.688321 systemd[1]: Stopped target sockets.target.
Jul 14 23:27:01.688469 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 23:27:01.688526 systemd[1]: Closed iscsid.socket.
Jul 14 23:27:01.688743 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 23:27:01.688803 systemd[1]: Closed iscsiuio.socket.
Jul 14 23:27:01.689016 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 23:27:01.689104 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 14 23:27:01.689325 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 23:27:01.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.689398 systemd[1]: Stopped ignition-files.service.
Jul 14 23:27:01.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.690046 systemd[1]: Stopping ignition-mount.service...
Jul 14 23:27:01.690551 systemd[1]: Stopping sysroot-boot.service...
Jul 14 23:27:01.695914 ignition[885]: INFO : Ignition 2.14.0
Jul 14 23:27:01.695914 ignition[885]: INFO : Stage: umount
Jul 14 23:27:01.695914 ignition[885]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 14 23:27:01.695914 ignition[885]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Jul 14 23:27:01.695914 ignition[885]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Jul 14 23:27:01.690796 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 23:27:01.690884 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 14 23:27:01.691078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 23:27:01.691156 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 14 23:27:01.692845 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 23:27:01.692899 systemd[1]: Finished initrd-cleanup.service.
Jul 14 23:27:01.702192 ignition[885]: INFO : umount: umount passed
Jul 14 23:27:01.702335 ignition[885]: INFO : Ignition finished successfully
Jul 14 23:27:01.702888 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 23:27:01.702955 systemd[1]: Stopped ignition-mount.service.
Jul 14 23:27:01.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.703178 systemd[1]: Stopped target network.target.
Jul 14 23:27:01.703291 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 23:27:01.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.703314 systemd[1]: Stopped ignition-disks.service.
Jul 14 23:27:01.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.703480 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 23:27:01.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.703501 systemd[1]: Stopped ignition-kargs.service.
Jul 14 23:27:01.704194 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 23:27:01.704216 systemd[1]: Stopped ignition-setup.service.
Jul 14 23:27:01.704391 systemd[1]: Stopping systemd-networkd.service...
Jul 14 23:27:01.704556 systemd[1]: Stopping systemd-resolved.service...
Jul 14 23:27:01.708941 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 23:27:01.709007 systemd[1]: Stopped systemd-networkd.service.
Jul 14 23:27:01.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.709462 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 23:27:01.709495 systemd[1]: Closed systemd-networkd.socket.
Jul 14 23:27:01.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.710087 systemd[1]: Stopping network-cleanup.service...
Jul 14 23:27:01.710197 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 23:27:01.710227 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 14 23:27:01.710359 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Jul 14 23:27:01.710382 systemd[1]: Stopped afterburn-network-kargs.service.
Jul 14 23:27:01.710497 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 23:27:01.710518 systemd[1]: Stopped systemd-sysctl.service.
Jul 14 23:27:01.710672 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 23:27:01.710692 systemd[1]: Stopped systemd-modules-load.service.
Jul 14 23:27:01.710834 systemd[1]: Stopping systemd-udevd.service...
Jul 14 23:27:01.712000 audit: BPF prog-id=9 op=UNLOAD
Jul 14 23:27:01.714694 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 14 23:27:01.717331 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 23:27:01.717538 systemd[1]: Stopped systemd-resolved.service.
Jul 14 23:27:01.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.718390 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 23:27:01.719985 systemd[1]: Stopped systemd-udevd.service.
Jul 14 23:27:01.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.721418 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 23:27:01.721803 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 23:27:01.721997 systemd[1]: Stopped network-cleanup.service.
Jul 14 23:27:01.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.720000 audit: BPF prog-id=6 op=UNLOAD
Jul 14 23:27:01.722346 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 23:27:01.722504 systemd[1]: Closed systemd-udevd-control.socket.
Jul 14 23:27:01.722726 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 23:27:01.722877 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 14 23:27:01.723115 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 23:27:01.723263 systemd[1]: Stopped dracut-pre-udev.service.
Jul 14 23:27:01.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.723517 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 23:27:01.723665 systemd[1]: Stopped dracut-cmdline.service.
Jul 14 23:27:01.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 14 23:27:01.723916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 23:27:01.724075 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 14 23:27:01.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:01.724698 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 14 23:27:01.724986 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 23:27:01.725152 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 14 23:27:01.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:01.725461 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 23:27:01.725613 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 23:27:01.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:01.725869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 23:27:01.726152 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 14 23:27:01.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:01.726985 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 14 23:27:01.728008 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 23:27:01.728063 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 23:27:01.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:27:01.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:01.888600 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 23:27:01.888679 systemd[1]: Stopped sysroot-boot.service. Jul 14 23:27:01.889011 systemd[1]: Reached target initrd-switch-root.target. Jul 14 23:27:01.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:01.889157 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 23:27:01.889190 systemd[1]: Stopped initrd-setup-root.service. Jul 14 23:27:01.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:01.889907 systemd[1]: Starting initrd-switch-root.service... Jul 14 23:27:01.898899 systemd[1]: Switching root. Jul 14 23:27:01.917936 systemd-journald[216]: Journal stopped Jul 14 23:27:05.424847 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jul 14 23:27:05.424869 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 23:27:05.424877 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 14 23:27:05.424887 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 23:27:05.424897 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 23:27:05.424909 kernel: SELinux: policy capability open_perms=1 Jul 14 23:27:05.424920 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 23:27:05.424929 kernel: SELinux: policy capability always_check_network=0 Jul 14 23:27:05.424935 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 23:27:05.424944 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 23:27:05.424965 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 23:27:05.424976 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 23:27:05.424986 systemd[1]: Successfully loaded SELinux policy in 107.337ms. Jul 14 23:27:05.424993 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.567ms. Jul 14 23:27:05.425002 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 23:27:05.425009 systemd[1]: Detected virtualization vmware. Jul 14 23:27:05.425017 systemd[1]: Detected architecture x86-64. Jul 14 23:27:05.425024 systemd[1]: Detected first boot. Jul 14 23:27:05.425034 systemd[1]: Initializing machine ID from random generator. Jul 14 23:27:05.425042 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 14 23:27:05.425048 systemd[1]: Populated /etc with preset unit settings. Jul 14 23:27:05.425057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 14 23:27:05.425069 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 23:27:05.425082 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:27:05.425095 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 14 23:27:05.425104 systemd[1]: Stopped iscsiuio.service. Jul 14 23:27:05.425111 systemd[1]: iscsid.service: Deactivated successfully. Jul 14 23:27:05.425118 systemd[1]: Stopped iscsid.service. Jul 14 23:27:05.425125 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 23:27:05.425131 systemd[1]: Stopped initrd-switch-root.service. Jul 14 23:27:05.425138 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 23:27:05.425146 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 23:27:05.425154 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 23:27:05.425161 systemd[1]: Created slice system-getty.slice. Jul 14 23:27:05.425168 systemd[1]: Created slice system-modprobe.slice. Jul 14 23:27:05.425174 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 14 23:27:05.425181 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 23:27:05.425188 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 23:27:05.425194 systemd[1]: Created slice user.slice. Jul 14 23:27:05.425207 systemd[1]: Started systemd-ask-password-console.path. Jul 14 23:27:05.425220 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 23:27:05.425234 systemd[1]: Set up automount boot.automount. Jul 14 23:27:05.425244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 23:27:05.425251 systemd[1]: Stopped target initrd-switch-root.target. 
Jul 14 23:27:05.425258 systemd[1]: Stopped target initrd-fs.target. Jul 14 23:27:05.425265 systemd[1]: Stopped target initrd-root-fs.target. Jul 14 23:27:05.425272 systemd[1]: Reached target integritysetup.target. Jul 14 23:27:05.425282 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 23:27:05.425296 systemd[1]: Reached target remote-fs.target. Jul 14 23:27:05.425304 systemd[1]: Reached target slices.target. Jul 14 23:27:05.425312 systemd[1]: Reached target swap.target. Jul 14 23:27:05.425324 systemd[1]: Reached target torcx.target. Jul 14 23:27:05.425335 systemd[1]: Reached target veritysetup.target. Jul 14 23:27:05.425344 systemd[1]: Listening on systemd-coredump.socket. Jul 14 23:27:05.425352 systemd[1]: Listening on systemd-initctl.socket. Jul 14 23:27:05.425359 systemd[1]: Listening on systemd-networkd.socket. Jul 14 23:27:05.425371 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 23:27:05.425383 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 23:27:05.425391 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 23:27:05.425398 systemd[1]: Mounting dev-hugepages.mount... Jul 14 23:27:05.425405 systemd[1]: Mounting dev-mqueue.mount... Jul 14 23:27:05.425413 systemd[1]: Mounting media.mount... Jul 14 23:27:05.425421 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:05.425428 systemd[1]: Mounting sys-kernel-debug.mount... Jul 14 23:27:05.425435 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 23:27:05.425442 systemd[1]: Mounting tmp.mount... Jul 14 23:27:05.425449 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 23:27:05.425456 systemd[1]: Starting ignition-delete-config.service... Jul 14 23:27:05.425463 systemd[1]: Starting kmod-static-nodes.service... Jul 14 23:27:05.425469 systemd[1]: Starting modprobe@configfs.service... Jul 14 23:27:05.425477 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 14 23:27:05.425484 systemd[1]: Starting modprobe@drm.service... Jul 14 23:27:05.425491 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 23:27:05.425498 systemd[1]: Starting modprobe@fuse.service... Jul 14 23:27:05.425509 systemd[1]: Starting modprobe@loop.service... Jul 14 23:27:05.425517 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 23:27:05.425524 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 23:27:05.425531 systemd[1]: Stopped systemd-fsck-root.service. Jul 14 23:27:05.425537 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 23:27:05.425546 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 23:27:05.425553 systemd[1]: Stopped systemd-journald.service. Jul 14 23:27:05.425560 systemd[1]: Starting systemd-journald.service... Jul 14 23:27:05.425567 systemd[1]: Starting systemd-modules-load.service... Jul 14 23:27:05.425574 systemd[1]: Starting systemd-network-generator.service... Jul 14 23:27:05.425584 systemd[1]: Starting systemd-remount-fs.service... Jul 14 23:27:05.425592 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 23:27:05.425599 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 23:27:05.425606 systemd[1]: Stopped verity-setup.service. Jul 14 23:27:05.425619 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:05.425631 systemd[1]: Mounted dev-hugepages.mount. Jul 14 23:27:05.425643 systemd[1]: Mounted dev-mqueue.mount. Jul 14 23:27:05.425655 systemd[1]: Mounted media.mount. Jul 14 23:27:05.425663 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 23:27:05.425673 systemd[1]: Mounted sys-kernel-tracing.mount. 
Jul 14 23:27:05.425688 systemd-journald[998]: Journal started Jul 14 23:27:05.425731 systemd-journald[998]: Runtime Journal (/run/log/journal/865d0ee2969f4c4ba11e770386270c5d) is 4.8M, max 38.8M, 34.0M free. Jul 14 23:27:02.148000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 23:27:02.198000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 23:27:02.198000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 23:27:02.198000 audit: BPF prog-id=10 op=LOAD Jul 14 23:27:02.198000 audit: BPF prog-id=10 op=UNLOAD Jul 14 23:27:02.198000 audit: BPF prog-id=11 op=LOAD Jul 14 23:27:02.198000 audit: BPF prog-id=11 op=UNLOAD Jul 14 23:27:02.550000 audit[918]: AVC avc: denied { associate } for pid=918 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 14 23:27:02.550000 audit[918]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d7040 a3=32 items=0 ppid=901 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 23:27:02.550000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 23:27:02.551000 audit[918]: AVC avc: denied { associate } for 
pid=918 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 14 23:27:02.551000 audit[918]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=901 pid=918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 23:27:02.551000 audit: CWD cwd="/" Jul 14 23:27:02.551000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:02.551000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:02.551000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 23:27:05.308000 audit: BPF prog-id=12 op=LOAD Jul 14 23:27:05.308000 audit: BPF prog-id=3 op=UNLOAD Jul 14 23:27:05.308000 audit: BPF prog-id=13 op=LOAD Jul 14 23:27:05.308000 audit: BPF prog-id=14 op=LOAD Jul 14 23:27:05.308000 audit: BPF prog-id=4 op=UNLOAD Jul 14 23:27:05.308000 audit: BPF prog-id=5 op=UNLOAD Jul 14 23:27:05.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:27:05.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.317000 audit: BPF prog-id=12 op=UNLOAD Jul 14 23:27:05.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:27:05.390000 audit: BPF prog-id=15 op=LOAD Jul 14 23:27:05.390000 audit: BPF prog-id=16 op=LOAD Jul 14 23:27:05.390000 audit: BPF prog-id=17 op=LOAD Jul 14 23:27:05.390000 audit: BPF prog-id=13 op=UNLOAD Jul 14 23:27:05.390000 audit: BPF prog-id=14 op=UNLOAD Jul 14 23:27:05.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.420000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 23:27:05.420000 audit[998]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc71e6a2c0 a2=4000 a3=7ffc71e6a35c items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 23:27:05.420000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 23:27:05.307633 systemd[1]: Queued start job for default target multi-user.target. Jul 14 23:27:02.518166 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 23:27:05.307643 systemd[1]: Unnecessary job was removed for dev-sda6.device. Jul 14 23:27:05.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:27:02.534376 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 23:27:05.311296 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 23:27:05.426984 systemd[1]: Started systemd-journald.service. Jul 14 23:27:02.534390 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 23:27:05.426894 systemd[1]: Mounted tmp.mount. Jul 14 23:27:02.534413 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 14 23:27:02.534419 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 14 23:27:02.534448 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 14 23:27:02.534456 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 14 23:27:05.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:27:05.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:02.534589 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 14 23:27:05.427684 systemd[1]: Finished kmod-static-nodes.service. 
Jul 14 23:27:05.430593 jq[985]: true Jul 14 23:27:02.534617 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 23:27:05.427936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:27:02.534626 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 23:27:05.428030 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 23:27:02.545033 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 14 23:27:05.429142 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 23:27:02.545054 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 14 23:27:05.429221 systemd[1]: Finished modprobe@drm.service. Jul 14 23:27:02.545065 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.101: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.101 Jul 14 23:27:05.429437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:27:02.545073 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 14 23:27:05.429507 systemd[1]: Finished modprobe@efi_pstore.service. 
Jul 14 23:27:02.545083 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.101: no such file or directory" path=/var/lib/torcx/store/3510.3.101 Jul 14 23:27:05.429768 systemd[1]: Finished systemd-network-generator.service. Jul 14 23:27:02.545096 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 14 23:27:05.430096 systemd[1]: Finished systemd-remount-fs.service. Jul 14 23:27:04.918347 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 23:27:05.430663 systemd[1]: Reached target network-pre.target. Jul 14 23:27:04.918520 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 23:27:05.430770 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jul 14 23:27:04.918605 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 23:27:04.918749 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 23:27:05.435287 jq[1007]: true Jul 14 23:27:04.918792 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 14 23:27:04.918847 /usr/lib/systemd/system-generators/torcx-generator[918]: time="2025-07-14T23:27:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 14 23:27:05.441873 systemd[1]: Starting systemd-hwdb-update.service... Jul 14 23:27:05.442757 systemd[1]: Starting systemd-journal-flush.service... Jul 14 23:27:05.442883 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 23:27:05.443537 systemd[1]: Starting systemd-random-seed.service... Jul 14 23:27:05.444208 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 23:27:05.444301 systemd[1]: Finished modprobe@configfs.service. 
Jul 14 23:27:05.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.445799 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 23:27:05.447692 systemd[1]: Mounted sys-kernel-config.mount. Jul 14 23:27:05.452417 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:27:05.452505 systemd[1]: Finished modprobe@loop.service. Jul 14 23:27:05.452685 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 23:27:05.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.452959 kernel: loop: module loaded Jul 14 23:27:05.458648 systemd-journald[998]: Time spent on flushing to /var/log/journal/865d0ee2969f4c4ba11e770386270c5d is 40.681ms for 1985 entries. Jul 14 23:27:05.458648 systemd-journald[998]: System Journal (/var/log/journal/865d0ee2969f4c4ba11e770386270c5d) is 8.0M, max 584.8M, 576.8M free. Jul 14 23:27:05.566192 systemd-journald[998]: Received client request to flush runtime journal. 
Jul 14 23:27:05.566241 kernel: fuse: init (API version 7.34) Jul 14 23:27:05.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:27:05.461296 systemd[1]: Finished systemd-modules-load.service. Jul 14 23:27:05.462225 systemd[1]: Starting systemd-sysctl.service... Jul 14 23:27:05.466868 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 23:27:05.466961 systemd[1]: Finished modprobe@fuse.service. Jul 14 23:27:05.567778 udevadm[1044]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 23:27:05.469028 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 23:27:05.471202 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 23:27:05.478330 systemd[1]: Finished systemd-random-seed.service. Jul 14 23:27:05.478472 systemd[1]: Reached target first-boot-complete.target. Jul 14 23:27:05.488034 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 23:27:05.489013 systemd[1]: Starting systemd-sysusers.service... Jul 14 23:27:05.517407 systemd[1]: Finished systemd-sysctl.service. Jul 14 23:27:05.549772 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 23:27:05.550756 systemd[1]: Starting systemd-udev-settle.service... Jul 14 23:27:05.567032 systemd[1]: Finished systemd-journal-flush.service. Jul 14 23:27:05.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.610362 systemd[1]: Finished systemd-sysusers.service. Jul 14 23:27:05.611314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 23:27:05.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:05.663785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 14 23:27:06.032136 ignition[1023]: Ignition 2.14.0 Jul 14 23:27:06.032426 ignition[1023]: deleting config from guestinfo properties Jul 14 23:27:06.035823 ignition[1023]: Successfully deleted config Jul 14 23:27:06.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.036641 systemd[1]: Finished ignition-delete-config.service. Jul 14 23:27:06.256556 systemd[1]: Finished systemd-hwdb-update.service. Jul 14 23:27:06.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.255000 audit: BPF prog-id=18 op=LOAD Jul 14 23:27:06.255000 audit: BPF prog-id=19 op=LOAD Jul 14 23:27:06.255000 audit: BPF prog-id=7 op=UNLOAD Jul 14 23:27:06.255000 audit: BPF prog-id=8 op=UNLOAD Jul 14 23:27:06.257685 systemd[1]: Starting systemd-udevd.service... Jul 14 23:27:06.269321 systemd-udevd[1051]: Using default interface naming scheme 'v252'. Jul 14 23:27:06.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.289000 audit: BPF prog-id=20 op=LOAD Jul 14 23:27:06.289822 systemd[1]: Started systemd-udevd.service. Jul 14 23:27:06.291555 systemd[1]: Starting systemd-networkd.service... Jul 14 23:27:06.295000 audit: BPF prog-id=21 op=LOAD Jul 14 23:27:06.296000 audit: BPF prog-id=22 op=LOAD Jul 14 23:27:06.296000 audit: BPF prog-id=23 op=LOAD Jul 14 23:27:06.298536 systemd[1]: Starting systemd-userdbd.service... Jul 14 23:27:06.317107 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Jul 14 23:27:06.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.333694 systemd[1]: Started systemd-userdbd.service. Jul 14 23:27:06.366986 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 14 23:27:06.371983 kernel: ACPI: button: Power Button [PWRF] Jul 14 23:27:06.406457 systemd-networkd[1063]: lo: Link UP Jul 14 23:27:06.406654 systemd-networkd[1063]: lo: Gained carrier Jul 14 23:27:06.406986 systemd-networkd[1063]: Enumeration completed Jul 14 23:27:06.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.407041 systemd[1]: Started systemd-networkd.service. Jul 14 23:27:06.407051 systemd-networkd[1063]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
Jul 14 23:27:06.410125 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 14 23:27:06.410257 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 14 23:27:06.410341 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jul 14 23:27:06.411447 systemd-networkd[1063]: ens192: Link UP Jul 14 23:27:06.411537 systemd-networkd[1063]: ens192: Gained carrier Jul 14 23:27:06.440000 audit[1053]: AVC avc: denied { confidentiality } for pid=1053 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 14 23:27:06.440000 audit[1053]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5610e5dd0600 a1=338ac a2=7fb178dbdbc5 a3=5 items=110 ppid=1051 pid=1053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 23:27:06.440000 audit: CWD cwd="/" Jul 14 23:27:06.440000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=1 name=(null) inode=24744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=2 name=(null) inode=24744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=3 name=(null) inode=24745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=4 name=(null) inode=24744 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=5 name=(null) inode=24746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=6 name=(null) inode=24744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=7 name=(null) inode=24747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=8 name=(null) inode=24747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=9 name=(null) inode=24748 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=10 name=(null) inode=24747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=11 name=(null) inode=24749 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=12 name=(null) inode=24747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=13 name=(null) inode=24750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=14 name=(null) inode=24747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=15 name=(null) inode=24751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=16 name=(null) inode=24747 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=17 name=(null) inode=24752 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=18 name=(null) inode=24744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=19 name=(null) inode=24753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=20 name=(null) inode=24753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=21 name=(null) inode=24754 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=22 name=(null) inode=24753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=23 name=(null) inode=24755 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=24 name=(null) inode=24753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=25 name=(null) inode=24756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=26 name=(null) inode=24753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=27 name=(null) inode=24757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=28 name=(null) inode=24753 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=29 name=(null) inode=24758 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=30 name=(null) inode=24744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=31 name=(null) inode=24759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=32 name=(null) inode=24759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=33 name=(null) inode=24760 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=34 name=(null) inode=24759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=35 name=(null) inode=24761 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=36 name=(null) inode=24759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=37 name=(null) inode=24762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=38 name=(null) inode=24759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=39 name=(null) inode=24763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=40 name=(null) inode=24759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 
23:27:06.440000 audit: PATH item=41 name=(null) inode=24764 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=42 name=(null) inode=24744 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=43 name=(null) inode=24765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=44 name=(null) inode=24765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=45 name=(null) inode=24766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=46 name=(null) inode=24765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=47 name=(null) inode=24767 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=48 name=(null) inode=24765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=49 name=(null) inode=24768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=50 
name=(null) inode=24765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=51 name=(null) inode=24769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=52 name=(null) inode=24765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=53 name=(null) inode=24770 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=55 name=(null) inode=24771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=56 name=(null) inode=24771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=57 name=(null) inode=24772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=58 name=(null) inode=24771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=59 name=(null) inode=24773 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=60 name=(null) inode=24771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=61 name=(null) inode=24774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=62 name=(null) inode=24774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=63 name=(null) inode=24775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=64 name=(null) inode=24774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=65 name=(null) inode=24776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=66 name=(null) inode=24774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=67 name=(null) inode=24777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=68 name=(null) inode=24774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=69 name=(null) inode=24778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=70 name=(null) inode=24774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=71 name=(null) inode=24779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=72 name=(null) inode=24771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=73 name=(null) inode=24780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=74 name=(null) inode=24780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=75 name=(null) inode=24781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=76 name=(null) inode=24780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=77 name=(null) inode=24782 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=78 name=(null) inode=24780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=79 name=(null) inode=24783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=80 name=(null) inode=24780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=81 name=(null) inode=24784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=82 name=(null) inode=24780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=83 name=(null) inode=24785 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=84 name=(null) inode=24771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=85 name=(null) inode=24786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=86 name=(null) inode=24786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=87 name=(null) inode=24787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=88 name=(null) inode=24786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=89 name=(null) inode=24788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=90 name=(null) inode=24786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=91 name=(null) inode=24789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=92 name=(null) inode=24786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=93 name=(null) inode=24790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=94 name=(null) inode=24786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=95 name=(null) inode=24791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 
23:27:06.440000 audit: PATH item=96 name=(null) inode=24771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=97 name=(null) inode=24792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=98 name=(null) inode=24792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=99 name=(null) inode=24793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=100 name=(null) inode=24792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=101 name=(null) inode=24794 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=102 name=(null) inode=24792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=103 name=(null) inode=24795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=104 name=(null) inode=24792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=105 
name=(null) inode=24796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=106 name=(null) inode=24792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=107 name=(null) inode=24797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PATH item=109 name=(null) inode=24798 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 23:27:06.440000 audit: PROCTITLE proctitle="(udev-worker)" Jul 14 23:27:06.458230 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 14 23:27:06.458113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 23:27:06.463094 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Jul 14 23:27:06.463477 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 14 23:27:06.463558 kernel: Guest personality initialized and is active Jul 14 23:27:06.465327 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 14 23:27:06.465357 kernel: Initialized host personality Jul 14 23:27:06.472960 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 14 23:27:06.492515 (udev-worker)[1064]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. 
Jul 14 23:27:06.492956 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 23:27:06.514313 systemd[1]: Finished systemd-udev-settle.service. Jul 14 23:27:06.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.515243 systemd[1]: Starting lvm2-activation-early.service... Jul 14 23:27:06.640708 lvm[1085]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 23:27:06.665624 systemd[1]: Finished lvm2-activation-early.service. Jul 14 23:27:06.666965 kernel: kauditd_printk_skb: 224 callbacks suppressed Jul 14 23:27:06.667014 kernel: audit: type=1130 audit(1752535626.663:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.665834 systemd[1]: Reached target cryptsetup.target. Jul 14 23:27:06.670438 systemd[1]: Starting lvm2-activation.service... Jul 14 23:27:06.673411 lvm[1086]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 23:27:06.693586 systemd[1]: Finished lvm2-activation.service. Jul 14 23:27:06.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.693758 systemd[1]: Reached target local-fs-pre.target. 
Jul 14 23:27:06.696959 kernel: audit: type=1130 audit(1752535626.691:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.696354 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 23:27:06.696369 systemd[1]: Reached target local-fs.target. Jul 14 23:27:06.696460 systemd[1]: Reached target machines.target. Jul 14 23:27:06.697426 systemd[1]: Starting ldconfig.service... Jul 14 23:27:06.712074 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 23:27:06.712102 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 23:27:06.712831 systemd[1]: Starting systemd-boot-update.service... Jul 14 23:27:06.713475 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 14 23:27:06.714268 systemd[1]: Starting systemd-machine-id-commit.service... Jul 14 23:27:06.715056 systemd[1]: Starting systemd-sysext.service... Jul 14 23:27:06.737777 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1088 (bootctl) Jul 14 23:27:06.738472 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 14 23:27:06.749803 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 14 23:27:06.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 23:27:06.753147 kernel: audit: type=1130 audit(1752535626.748:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:06.752897 systemd[1]: Unmounting usr-share-oem.mount... Jul 14 23:27:06.769183 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 14 23:27:06.769296 systemd[1]: Unmounted usr-share-oem.mount. Jul 14 23:27:06.805979 kernel: loop0: detected capacity change from 0 to 224512 Jul 14 23:27:07.619564 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 23:27:07.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.619924 systemd[1]: Finished systemd-machine-id-commit.service. Jul 14 23:27:07.623970 kernel: audit: type=1130 audit(1752535627.618:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.637969 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 23:27:07.639675 systemd-fsck[1098]: fsck.fat 4.2 (2021-01-31) Jul 14 23:27:07.639675 systemd-fsck[1098]: /dev/sda1: 790 files, 120725/258078 clusters Jul 14 23:27:07.643484 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 14 23:27:07.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.644607 systemd[1]: Mounting boot.mount... 
Jul 14 23:27:07.646961 kernel: audit: type=1130 audit(1752535627.641:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.655933 systemd[1]: Mounted boot.mount. Jul 14 23:27:07.658308 kernel: loop1: detected capacity change from 0 to 224512 Jul 14 23:27:07.665572 systemd[1]: Finished systemd-boot-update.service. Jul 14 23:27:07.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.668965 kernel: audit: type=1130 audit(1752535627.663:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.756498 (sd-sysext)[1102]: Using extensions 'kubernetes'. Jul 14 23:27:07.757010 (sd-sysext)[1102]: Merged extensions into '/usr'. Jul 14 23:27:07.768509 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.770048 systemd[1]: Mounting usr-share-oem.mount... Jul 14 23:27:07.770920 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 23:27:07.772856 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 23:27:07.773907 systemd[1]: Starting modprobe@loop.service... Jul 14 23:27:07.774091 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 23:27:07.774174 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 14 23:27:07.774263 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.777005 systemd[1]: Mounted usr-share-oem.mount. Jul 14 23:27:07.777310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:27:07.777422 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 23:27:07.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.777805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:27:07.777899 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 23:27:07.780968 kernel: audit: type=1130 audit(1752535627.775:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.780999 kernel: audit: type=1131 audit(1752535627.775:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.783552 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:27:07.783640 systemd[1]: Finished modprobe@loop.service. 
Jul 14 23:27:07.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.789011 kernel: audit: type=1130 audit(1752535627.781:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.789044 kernel: audit: type=1131 audit(1752535627.781:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.790193 systemd[1]: Finished systemd-sysext.service. Jul 14 23:27:07.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.791935 systemd[1]: Starting ensure-sysext.service... Jul 14 23:27:07.792200 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 23:27:07.792247 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 23:27:07.793171 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Jul 14 23:27:07.797483 systemd[1]: Reloading. Jul 14 23:27:07.806795 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 14 23:27:07.812595 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 23:27:07.818153 systemd-tmpfiles[1109]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 23:27:07.846591 /usr/lib/systemd/system-generators/torcx-generator[1128]: time="2025-07-14T23:27:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 23:27:07.846609 /usr/lib/systemd/system-generators/torcx-generator[1128]: time="2025-07-14T23:27:07Z" level=info msg="torcx already run" Jul 14 23:27:07.876596 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 23:27:07.876609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 23:27:07.889012 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 23:27:07.921000 audit: BPF prog-id=24 op=LOAD Jul 14 23:27:07.921000 audit: BPF prog-id=15 op=UNLOAD Jul 14 23:27:07.921000 audit: BPF prog-id=25 op=LOAD Jul 14 23:27:07.921000 audit: BPF prog-id=26 op=LOAD Jul 14 23:27:07.921000 audit: BPF prog-id=16 op=UNLOAD Jul 14 23:27:07.921000 audit: BPF prog-id=17 op=UNLOAD Jul 14 23:27:07.922000 audit: BPF prog-id=27 op=LOAD Jul 14 23:27:07.922000 audit: BPF prog-id=28 op=LOAD Jul 14 23:27:07.922000 audit: BPF prog-id=18 op=UNLOAD Jul 14 23:27:07.922000 audit: BPF prog-id=19 op=UNLOAD Jul 14 23:27:07.922000 audit: BPF prog-id=29 op=LOAD Jul 14 23:27:07.922000 audit: BPF prog-id=21 op=UNLOAD Jul 14 23:27:07.922000 audit: BPF prog-id=30 op=LOAD Jul 14 23:27:07.922000 audit: BPF prog-id=31 op=LOAD Jul 14 23:27:07.922000 audit: BPF prog-id=22 op=UNLOAD Jul 14 23:27:07.922000 audit: BPF prog-id=23 op=UNLOAD Jul 14 23:27:07.923000 audit: BPF prog-id=32 op=LOAD Jul 14 23:27:07.923000 audit: BPF prog-id=20 op=UNLOAD Jul 14 23:27:07.931041 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.931844 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 23:27:07.933077 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 23:27:07.933860 systemd[1]: Starting modprobe@loop.service... Jul 14 23:27:07.934105 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 23:27:07.934177 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 23:27:07.934240 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.934719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:27:07.934799 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 14 23:27:07.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.935469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:27:07.935589 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 23:27:07.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.936066 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:27:07.936192 systemd[1]: Finished modprobe@loop.service. Jul 14 23:27:07.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.937318 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.938066 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 14 23:27:07.939310 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 23:27:07.940102 systemd[1]: Starting modprobe@loop.service... Jul 14 23:27:07.940316 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 23:27:07.940383 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 23:27:07.940446 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.940891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:27:07.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.940980 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 23:27:07.941277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:27:07.941348 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 23:27:07.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.941661 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 14 23:27:07.941736 systemd[1]: Finished modprobe@loop.service. Jul 14 23:27:07.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.942015 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 23:27:07.942075 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 23:27:07.943556 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.944386 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 23:27:07.945897 systemd[1]: Starting modprobe@drm.service... Jul 14 23:27:07.946604 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 23:27:07.947414 systemd[1]: Starting modprobe@loop.service... Jul 14 23:27:07.948440 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 23:27:07.948519 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 23:27:07.949372 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 14 23:27:07.949531 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 23:27:07.950260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 23:27:07.950368 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 14 23:27:07.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.950705 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 23:27:07.950779 systemd[1]: Finished modprobe@drm.service. Jul 14 23:27:07.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.951153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 23:27:07.951243 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 23:27:07.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.951589 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 23:27:07.951670 systemd[1]: Finished modprobe@loop.service. 
Jul 14 23:27:07.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:07.952070 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 23:27:07.952139 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 23:27:07.952809 systemd[1]: Finished ensure-sysext.service. Jul 14 23:27:07.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:08.088780 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 14 23:27:08.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:08.089966 systemd[1]: Starting audit-rules.service... Jul 14 23:27:08.090867 systemd[1]: Starting clean-ca-certificates.service... Jul 14 23:27:08.091803 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 14 23:27:08.090000 audit: BPF prog-id=33 op=LOAD Jul 14 23:27:08.094000 audit: BPF prog-id=34 op=LOAD Jul 14 23:27:08.095488 systemd[1]: Starting systemd-resolved.service... Jul 14 23:27:08.101000 audit[1205]: SYSTEM_BOOT pid=1205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? 
terminal=? res=success' Jul 14 23:27:08.096776 systemd[1]: Starting systemd-timesyncd.service... Jul 14 23:27:08.097673 systemd[1]: Starting systemd-update-utmp.service... Jul 14 23:27:08.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:08.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:08.104964 systemd[1]: Finished systemd-update-utmp.service. Jul 14 23:27:08.105331 systemd[1]: Finished clean-ca-certificates.service. Jul 14 23:27:08.105461 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 23:27:08.158858 systemd[1]: Started systemd-timesyncd.service. Jul 14 23:27:08.159033 systemd[1]: Reached target time-set.target. Jul 14 23:27:08.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:08.173292 systemd-resolved[1203]: Positive Trust Anchors: Jul 14 23:27:08.173302 systemd-resolved[1203]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 23:27:08.173322 systemd-resolved[1203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 23:27:08.189589 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 14 23:27:08.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 23:27:08.194000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 14 23:27:08.194000 audit[1221]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe09618a70 a2=420 a3=0 items=0 ppid=1200 pid=1221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 23:27:08.194000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 14 23:27:08.196914 augenrules[1221]: No rules Jul 14 23:27:08.197479 systemd[1]: Finished audit-rules.service. Jul 14 23:27:08.206150 systemd-resolved[1203]: Defaulting to hostname 'linux'. Jul 14 23:28:34.863976 systemd-timesyncd[1204]: Contacted time server 23.186.168.125:123 (0.flatcar.pool.ntp.org). Jul 14 23:28:34.864047 systemd-timesyncd[1204]: Initial clock synchronization to Mon 2025-07-14 23:28:34.863864 UTC. 
Jul 14 23:28:34.864499 systemd-resolved[1203]: Clock change detected. Flushing caches. Jul 14 23:28:34.864519 systemd[1]: Started systemd-resolved.service. Jul 14 23:28:34.864662 systemd[1]: Reached target network.target. Jul 14 23:28:34.864752 systemd[1]: Reached target nss-lookup.target. Jul 14 23:28:35.037938 systemd-networkd[1063]: ens192: Gained IPv6LL Jul 14 23:28:35.038713 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 14 23:28:35.038888 systemd[1]: Reached target network-online.target. Jul 14 23:28:35.092851 ldconfig[1087]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 23:28:35.095772 systemd[1]: Finished ldconfig.service. Jul 14 23:28:35.097462 systemd[1]: Starting systemd-update-done.service... Jul 14 23:28:35.101875 systemd[1]: Finished systemd-update-done.service. Jul 14 23:28:35.102046 systemd[1]: Reached target sysinit.target. Jul 14 23:28:35.102237 systemd[1]: Started motdgen.path. Jul 14 23:28:35.102342 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 14 23:28:35.102528 systemd[1]: Started logrotate.timer. Jul 14 23:28:35.102678 systemd[1]: Started mdadm.timer. Jul 14 23:28:35.102766 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 14 23:28:35.102979 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 23:28:35.103001 systemd[1]: Reached target paths.target. Jul 14 23:28:35.103087 systemd[1]: Reached target timers.target. Jul 14 23:28:35.103341 systemd[1]: Listening on dbus.socket. Jul 14 23:28:35.104311 systemd[1]: Starting docker.socket... Jul 14 23:28:35.112788 systemd[1]: Listening on sshd.socket. Jul 14 23:28:35.112967 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 14 23:28:35.113227 systemd[1]: Listening on docker.socket. Jul 14 23:28:35.113359 systemd[1]: Reached target sockets.target. Jul 14 23:28:35.113458 systemd[1]: Reached target basic.target. Jul 14 23:28:35.113572 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 23:28:35.113591 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 23:28:35.114355 systemd[1]: Starting containerd.service... Jul 14 23:28:35.115159 systemd[1]: Starting dbus.service... Jul 14 23:28:35.116270 systemd[1]: Starting enable-oem-cloudinit.service... Jul 14 23:28:35.117211 systemd[1]: Starting extend-filesystems.service... Jul 14 23:28:35.117727 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 14 23:28:35.118342 jq[1231]: false Jul 14 23:28:35.123036 systemd[1]: Starting kubelet.service... Jul 14 23:28:35.123871 systemd[1]: Starting motdgen.service... Jul 14 23:28:35.124613 systemd[1]: Starting prepare-helm.service... Jul 14 23:28:35.125397 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 14 23:28:35.126143 systemd[1]: Starting sshd-keygen.service... Jul 14 23:28:35.127666 systemd[1]: Starting systemd-logind.service... Jul 14 23:28:35.127776 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 23:28:35.127814 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 23:28:35.128200 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 23:28:35.128539 systemd[1]: Starting update-engine.service... 
Jul 14 23:28:35.129435 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 14 23:28:35.130438 systemd[1]: Starting vmtoolsd.service...
Jul 14 23:28:35.131866 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 23:28:35.131980 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 14 23:28:35.138290 jq[1242]: true
Jul 14 23:28:35.139836 jq[1247]: true
Jul 14 23:28:35.141750 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 23:28:35.141862 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 14 23:28:35.156603 systemd[1]: Started vmtoolsd.service.
Jul 14 23:28:35.163682 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 23:28:35.163839 systemd[1]: Finished motdgen.service.
Jul 14 23:28:35.167736 extend-filesystems[1232]: Found loop1
Jul 14 23:28:35.168163 extend-filesystems[1232]: Found sda
Jul 14 23:28:35.168315 extend-filesystems[1232]: Found sda1
Jul 14 23:28:35.168465 extend-filesystems[1232]: Found sda2
Jul 14 23:28:35.168611 extend-filesystems[1232]: Found sda3
Jul 14 23:28:35.170032 extend-filesystems[1232]: Found usr
Jul 14 23:28:35.170032 extend-filesystems[1232]: Found sda4
Jul 14 23:28:35.170032 extend-filesystems[1232]: Found sda6
Jul 14 23:28:35.170032 extend-filesystems[1232]: Found sda7
Jul 14 23:28:35.170032 extend-filesystems[1232]: Found sda9
Jul 14 23:28:35.170032 extend-filesystems[1232]: Checking size of /dev/sda9
Jul 14 23:28:35.182964 tar[1246]: linux-amd64/LICENSE
Jul 14 23:28:35.182964 tar[1246]: linux-amd64/helm
Jul 14 23:28:35.189745 extend-filesystems[1232]: Old size kept for /dev/sda9
Jul 14 23:28:35.189745 extend-filesystems[1232]: Found sr0
Jul 14 23:28:35.190789 bash[1268]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 23:28:35.189340 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 23:28:35.189473 systemd[1]: Finished extend-filesystems.service.
Jul 14 23:28:35.193204 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 14 23:28:35.223549 systemd-logind[1240]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 14 23:28:35.223564 systemd-logind[1240]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 14 23:28:35.223692 systemd-logind[1240]: New seat seat0.
Jul 14 23:28:35.257533 env[1270]: time="2025-07-14T23:28:35.257503904Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 14 23:28:35.281486 dbus-daemon[1230]: [system] SELinux support is enabled
Jul 14 23:28:35.281651 systemd[1]: Started dbus.service.
Jul 14 23:28:35.282943 env[1270]: time="2025-07-14T23:28:35.280786515Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 14 23:28:35.282998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 23:28:35.283020 systemd[1]: Reached target system-config.target.
Jul 14 23:28:35.283144 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 23:28:35.283155 systemd[1]: Reached target user-config.target.
Jul 14 23:28:35.285040 kernel: NET: Registered PF_VSOCK protocol family
Jul 14 23:28:35.285328 systemd[1]: Started systemd-logind.service.
Jul 14 23:28:35.287151 env[1270]: time="2025-07-14T23:28:35.287125704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:28:35.289708 env[1270]: time="2025-07-14T23:28:35.289684532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:28:35.292408 env[1270]: time="2025-07-14T23:28:35.292393385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:28:35.292675 env[1270]: time="2025-07-14T23:28:35.292661525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:28:35.292741 env[1270]: time="2025-07-14T23:28:35.292730433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 14 23:28:35.292792 env[1270]: time="2025-07-14T23:28:35.292780903Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 14 23:28:35.292861 env[1270]: time="2025-07-14T23:28:35.292848325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 14 23:28:35.292972 env[1270]: time="2025-07-14T23:28:35.292962184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:28:35.293759 env[1270]: time="2025-07-14T23:28:35.293746951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 14 23:28:35.294708 env[1270]: time="2025-07-14T23:28:35.294691148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 23:28:35.295461 env[1270]: time="2025-07-14T23:28:35.295448445Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 14 23:28:35.295718 update_engine[1241]: I0714 23:28:35.295048 1241 main.cc:92] Flatcar Update Engine starting
Jul 14 23:28:35.296212 env[1270]: time="2025-07-14T23:28:35.296195957Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 14 23:28:35.296286 env[1270]: time="2025-07-14T23:28:35.296269275Z" level=info msg="metadata content store policy set" policy=shared
Jul 14 23:28:35.298181 systemd[1]: Started update-engine.service.
Jul 14 23:28:35.298334 update_engine[1241]: I0714 23:28:35.298205 1241 update_check_scheduler.cc:74] Next update check in 5m17s
Jul 14 23:28:35.299780 systemd[1]: Started locksmithd.service.
Jul 14 23:28:35.303176 env[1270]: time="2025-07-14T23:28:35.303150282Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 14 23:28:35.303449 env[1270]: time="2025-07-14T23:28:35.303434905Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 14 23:28:35.303541 env[1270]: time="2025-07-14T23:28:35.303529841Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 14 23:28:35.303631 env[1270]: time="2025-07-14T23:28:35.303617818Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.303777 env[1270]: time="2025-07-14T23:28:35.303762735Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.303848 env[1270]: time="2025-07-14T23:28:35.303838810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.303905 env[1270]: time="2025-07-14T23:28:35.303890199Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.303956 env[1270]: time="2025-07-14T23:28:35.303947206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.304017 env[1270]: time="2025-07-14T23:28:35.303997226Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.304999 env[1270]: time="2025-07-14T23:28:35.304983618Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.305094 env[1270]: time="2025-07-14T23:28:35.305083492Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.305165 env[1270]: time="2025-07-14T23:28:35.305153362Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 14 23:28:35.305371 env[1270]: time="2025-07-14T23:28:35.305328094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 14 23:28:35.305553 env[1270]: time="2025-07-14T23:28:35.305529697Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 14 23:28:35.305847 env[1270]: time="2025-07-14T23:28:35.305833724Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 14 23:28:35.305955 env[1270]: time="2025-07-14T23:28:35.305943125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.307061 env[1270]: time="2025-07-14T23:28:35.307043845Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 14 23:28:35.307167 env[1270]: time="2025-07-14T23:28:35.307155969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.307385 env[1270]: time="2025-07-14T23:28:35.307258797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.307441 env[1270]: time="2025-07-14T23:28:35.307428996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.307490 env[1270]: time="2025-07-14T23:28:35.307479830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.307799 env[1270]: time="2025-07-14T23:28:35.307787489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.307882 env[1270]: time="2025-07-14T23:28:35.307864194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.307971 env[1270]: time="2025-07-14T23:28:35.307958084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.308038 env[1270]: time="2025-07-14T23:28:35.308025944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.308107 env[1270]: time="2025-07-14T23:28:35.308094044Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 14 23:28:35.308239 env[1270]: time="2025-07-14T23:28:35.308228762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.308295 env[1270]: time="2025-07-14T23:28:35.308282110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.308346 env[1270]: time="2025-07-14T23:28:35.308335053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.308407 env[1270]: time="2025-07-14T23:28:35.308395611Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 14 23:28:35.308466 env[1270]: time="2025-07-14T23:28:35.308451841Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 14 23:28:35.309516 env[1270]: time="2025-07-14T23:28:35.309504731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 14 23:28:35.309587 env[1270]: time="2025-07-14T23:28:35.309570482Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 14 23:28:35.309675 env[1270]: time="2025-07-14T23:28:35.309663985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 14 23:28:35.310456 env[1270]: time="2025-07-14T23:28:35.310419201Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 14 23:28:35.313395 env[1270]: time="2025-07-14T23:28:35.310594888Z" level=info msg="Connect containerd service"
Jul 14 23:28:35.313395 env[1270]: time="2025-07-14T23:28:35.310624309Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 14 23:28:35.314717 env[1270]: time="2025-07-14T23:28:35.314696944Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 23:28:35.315309 env[1270]: time="2025-07-14T23:28:35.315297578Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 14 23:28:35.315461 env[1270]: time="2025-07-14T23:28:35.315450336Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 14 23:28:35.315767 env[1270]: time="2025-07-14T23:28:35.315756394Z" level=info msg="containerd successfully booted in 0.059546s"
Jul 14 23:28:35.315818 systemd[1]: Started containerd.service.
Jul 14 23:28:35.318042 env[1270]: time="2025-07-14T23:28:35.317977694Z" level=info msg="Start subscribing containerd event"
Jul 14 23:28:35.318113 env[1270]: time="2025-07-14T23:28:35.318101987Z" level=info msg="Start recovering state"
Jul 14 23:28:35.318230 env[1270]: time="2025-07-14T23:28:35.318221684Z" level=info msg="Start event monitor"
Jul 14 23:28:35.318283 env[1270]: time="2025-07-14T23:28:35.318270283Z" level=info msg="Start snapshots syncer"
Jul 14 23:28:35.318338 env[1270]: time="2025-07-14T23:28:35.318324358Z" level=info msg="Start cni network conf syncer for default"
Jul 14 23:28:35.318552 env[1270]: time="2025-07-14T23:28:35.318542138Z" level=info msg="Start streaming server"
Jul 14 23:28:35.633601 tar[1246]: linux-amd64/README.md
Jul 14 23:28:35.636996 systemd[1]: Finished prepare-helm.service.
Jul 14 23:28:35.862980 locksmithd[1293]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 23:28:36.534865 sshd_keygen[1256]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 14 23:28:36.554835 systemd[1]: Finished sshd-keygen.service.
Jul 14 23:28:36.556224 systemd[1]: Starting issuegen.service...
Jul 14 23:28:36.560285 systemd[1]: issuegen.service: Deactivated successfully.
Jul 14 23:28:36.560398 systemd[1]: Finished issuegen.service.
Jul 14 23:28:36.561895 systemd[1]: Starting systemd-user-sessions.service...
Jul 14 23:28:36.567128 systemd[1]: Finished systemd-user-sessions.service.
Jul 14 23:28:36.568253 systemd[1]: Started getty@tty1.service.
Jul 14 23:28:36.569201 systemd[1]: Started serial-getty@ttyS0.service.
Jul 14 23:28:36.569428 systemd[1]: Reached target getty.target.
Jul 14 23:28:37.718784 systemd[1]: Started kubelet.service.
Jul 14 23:28:37.719164 systemd[1]: Reached target multi-user.target.
Jul 14 23:28:37.720271 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 14 23:28:37.725069 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 14 23:28:37.725166 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 14 23:28:37.725342 systemd[1]: Startup finished in 922ms (kernel) + 6.485s (initrd) + 9.039s (userspace) = 16.447s.
Jul 14 23:28:37.854710 login[1357]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 14 23:28:37.857271 login[1358]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 14 23:28:37.865395 systemd[1]: Created slice user-500.slice.
Jul 14 23:28:37.866558 systemd[1]: Starting user-runtime-dir@500.service...
Jul 14 23:28:37.871236 systemd-logind[1240]: New session 2 of user core.
Jul 14 23:28:37.874353 systemd-logind[1240]: New session 1 of user core.
Jul 14 23:28:37.877333 systemd[1]: Finished user-runtime-dir@500.service.
Jul 14 23:28:37.878521 systemd[1]: Starting user@500.service...
Jul 14 23:28:37.881139 (systemd)[1364]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 14 23:28:38.014058 systemd[1364]: Queued start job for default target default.target.
Jul 14 23:28:38.014889 systemd[1364]: Reached target paths.target.
Jul 14 23:28:38.014970 systemd[1364]: Reached target sockets.target.
Jul 14 23:28:38.015043 systemd[1364]: Reached target timers.target.
Jul 14 23:28:38.015104 systemd[1364]: Reached target basic.target.
Jul 14 23:28:38.015209 systemd[1364]: Reached target default.target.
Jul 14 23:28:38.015282 systemd[1364]: Startup finished in 130ms.
Jul 14 23:28:38.015803 systemd[1]: Started user@500.service.
Jul 14 23:28:38.016783 systemd[1]: Started session-1.scope.
Jul 14 23:28:38.017484 systemd[1]: Started session-2.scope.
Jul 14 23:28:39.030845 kubelet[1361]: E0714 23:28:39.030805 1361 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 23:28:39.032288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 23:28:39.032396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 23:28:49.282952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 14 23:28:49.283078 systemd[1]: Stopped kubelet.service.
Jul 14 23:28:49.284045 systemd[1]: Starting kubelet.service...
Jul 14 23:28:49.404280 systemd[1]: Started kubelet.service.
Jul 14 23:28:49.425021 kubelet[1393]: E0714 23:28:49.424995 1393 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 23:28:49.426973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 23:28:49.427048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 23:28:59.514857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 14 23:28:59.515011 systemd[1]: Stopped kubelet.service.
Jul 14 23:28:59.516199 systemd[1]: Starting kubelet.service...
Jul 14 23:28:59.794329 systemd[1]: Started kubelet.service.
Jul 14 23:28:59.834553 kubelet[1403]: E0714 23:28:59.834518 1403 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 23:28:59.835650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 23:28:59.835730 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 23:29:05.398403 systemd[1]: Created slice system-sshd.slice.
Jul 14 23:29:05.399436 systemd[1]: Started sshd@0-139.178.70.107:22-139.178.89.65:36770.service.
Jul 14 23:29:05.445762 sshd[1410]: Accepted publickey for core from 139.178.89.65 port 36770 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4
Jul 14 23:29:05.446663 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 23:29:05.450448 systemd[1]: Started session-3.scope.
Jul 14 23:29:05.450822 systemd-logind[1240]: New session 3 of user core.
Jul 14 23:29:05.499908 systemd[1]: Started sshd@1-139.178.70.107:22-139.178.89.65:36782.service.
Jul 14 23:29:05.534708 sshd[1415]: Accepted publickey for core from 139.178.89.65 port 36782 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4
Jul 14 23:29:05.535800 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 23:29:05.538235 systemd-logind[1240]: New session 4 of user core.
Jul 14 23:29:05.539004 systemd[1]: Started session-4.scope.
Jul 14 23:29:05.589656 sshd[1415]: pam_unix(sshd:session): session closed for user core
Jul 14 23:29:05.592645 systemd[1]: Started sshd@2-139.178.70.107:22-139.178.89.65:36794.service.
Jul 14 23:29:05.593801 systemd[1]: sshd@1-139.178.70.107:22-139.178.89.65:36782.service: Deactivated successfully.
Jul 14 23:29:05.594353 systemd[1]: session-4.scope: Deactivated successfully.
Jul 14 23:29:05.595269 systemd-logind[1240]: Session 4 logged out. Waiting for processes to exit.
Jul 14 23:29:05.595820 systemd-logind[1240]: Removed session 4.
Jul 14 23:29:05.629246 sshd[1420]: Accepted publickey for core from 139.178.89.65 port 36794 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4
Jul 14 23:29:05.630336 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 23:29:05.633982 systemd[1]: Started session-5.scope.
Jul 14 23:29:05.634488 systemd-logind[1240]: New session 5 of user core.
Jul 14 23:29:05.683743 sshd[1420]: pam_unix(sshd:session): session closed for user core
Jul 14 23:29:05.686328 systemd[1]: sshd@2-139.178.70.107:22-139.178.89.65:36794.service: Deactivated successfully.
Jul 14 23:29:05.686762 systemd[1]: session-5.scope: Deactivated successfully.
Jul 14 23:29:05.687265 systemd-logind[1240]: Session 5 logged out. Waiting for processes to exit.
Jul 14 23:29:05.688054 systemd[1]: Started sshd@3-139.178.70.107:22-139.178.89.65:36804.service.
Jul 14 23:29:05.688639 systemd-logind[1240]: Removed session 5.
Jul 14 23:29:05.723773 sshd[1427]: Accepted publickey for core from 139.178.89.65 port 36804 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4
Jul 14 23:29:05.724629 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 23:29:05.728385 systemd[1]: Started session-6.scope.
Jul 14 23:29:05.728652 systemd-logind[1240]: New session 6 of user core.
Jul 14 23:29:05.779714 sshd[1427]: pam_unix(sshd:session): session closed for user core
Jul 14 23:29:05.782780 systemd[1]: Started sshd@4-139.178.70.107:22-139.178.89.65:36810.service.
Jul 14 23:29:05.783408 systemd[1]: sshd@3-139.178.70.107:22-139.178.89.65:36804.service: Deactivated successfully.
Jul 14 23:29:05.783949 systemd[1]: session-6.scope: Deactivated successfully.
Jul 14 23:29:05.784427 systemd-logind[1240]: Session 6 logged out. Waiting for processes to exit.
Jul 14 23:29:05.785157 systemd-logind[1240]: Removed session 6.
Jul 14 23:29:05.820172 sshd[1432]: Accepted publickey for core from 139.178.89.65 port 36810 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4
Jul 14 23:29:05.821222 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 14 23:29:05.824215 systemd-logind[1240]: New session 7 of user core.
Jul 14 23:29:05.824741 systemd[1]: Started session-7.scope.
Jul 14 23:29:05.903079 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 14 23:29:05.903258 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 14 23:29:05.921304 systemd[1]: Starting docker.service...
Jul 14 23:29:05.944002 env[1446]: time="2025-07-14T23:29:05.943384921Z" level=info msg="Starting up"
Jul 14 23:29:05.944606 env[1446]: time="2025-07-14T23:29:05.944592792Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 23:29:05.944606 env[1446]: time="2025-07-14T23:29:05.944603259Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 23:29:05.944666 env[1446]: time="2025-07-14T23:29:05.944614539Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 23:29:05.944666 env[1446]: time="2025-07-14T23:29:05.944620813Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 23:29:05.945573 env[1446]: time="2025-07-14T23:29:05.945559100Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 14 23:29:05.945573 env[1446]: time="2025-07-14T23:29:05.945570387Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 14 23:29:05.945622 env[1446]: time="2025-07-14T23:29:05.945578564Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 14 23:29:05.945622 env[1446]: time="2025-07-14T23:29:05.945586130Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 14 23:29:05.948455 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport191640735-merged.mount: Deactivated successfully.
Jul 14 23:29:05.959541 env[1446]: time="2025-07-14T23:29:05.959524788Z" level=info msg="Loading containers: start."
Jul 14 23:29:06.035843 kernel: Initializing XFRM netlink socket
Jul 14 23:29:06.057175 env[1446]: time="2025-07-14T23:29:06.057159144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 14 23:29:06.095799 systemd-networkd[1063]: docker0: Link UP
Jul 14 23:29:06.103589 env[1446]: time="2025-07-14T23:29:06.103570687Z" level=info msg="Loading containers: done."
Jul 14 23:29:06.109706 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3840497148-merged.mount: Deactivated successfully.
Jul 14 23:29:06.112113 env[1446]: time="2025-07-14T23:29:06.112096614Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 14 23:29:06.112295 env[1446]: time="2025-07-14T23:29:06.112285184Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 14 23:29:06.112385 env[1446]: time="2025-07-14T23:29:06.112376422Z" level=info msg="Daemon has completed initialization"
Jul 14 23:29:06.117943 systemd[1]: Started docker.service.
Jul 14 23:29:06.120094 env[1446]: time="2025-07-14T23:29:06.120076669Z" level=info msg="API listen on /run/docker.sock"
Jul 14 23:29:06.986130 env[1270]: time="2025-07-14T23:29:06.986106222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 14 23:29:07.528723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143423083.mount: Deactivated successfully.
Jul 14 23:29:08.903881 env[1270]: time="2025-07-14T23:29:08.903854003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:08.906305 env[1270]: time="2025-07-14T23:29:08.906288901Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:08.908957 env[1270]: time="2025-07-14T23:29:08.908939341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:08.909471 env[1270]: time="2025-07-14T23:29:08.909454817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:08.910437 env[1270]: time="2025-07-14T23:29:08.910421902Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 14 23:29:08.910834 env[1270]: time="2025-07-14T23:29:08.910814348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 14 23:29:10.014689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 14 23:29:10.014807 systemd[1]: Stopped kubelet.service.
Jul 14 23:29:10.015797 systemd[1]: Starting kubelet.service...
Jul 14 23:29:10.239506 systemd[1]: Started kubelet.service.
Jul 14 23:29:10.273258 kubelet[1573]: E0714 23:29:10.273194 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 14 23:29:10.274143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 14 23:29:10.274218 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 14 23:29:10.757628 env[1270]: time="2025-07-14T23:29:10.757545521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:10.772889 env[1270]: time="2025-07-14T23:29:10.772869432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:10.783876 env[1270]: time="2025-07-14T23:29:10.783860384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:10.792477 env[1270]: time="2025-07-14T23:29:10.792460535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:10.793197 env[1270]: time="2025-07-14T23:29:10.793179226Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 14 23:29:10.794267 env[1270]: time="2025-07-14T23:29:10.794251690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 14 23:29:12.304117 env[1270]: time="2025-07-14T23:29:12.304071363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:12.305420 env[1270]: time="2025-07-14T23:29:12.305399682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:12.306475 env[1270]: time="2025-07-14T23:29:12.306459819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:12.307218 env[1270]: time="2025-07-14T23:29:12.307198677Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:12.307648 env[1270]: time="2025-07-14T23:29:12.307628931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 14 23:29:12.308006 env[1270]: time="2025-07-14T23:29:12.307987189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 14 23:29:13.205265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964461302.mount: Deactivated successfully.
Jul 14 23:29:13.678811 env[1270]: time="2025-07-14T23:29:13.678779038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:13.689853 env[1270]: time="2025-07-14T23:29:13.689817690Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:13.698787 env[1270]: time="2025-07-14T23:29:13.698763969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:13.709948 env[1270]: time="2025-07-14T23:29:13.709929485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:13.710424 env[1270]: time="2025-07-14T23:29:13.710404933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\""
Jul 14 23:29:13.711215 env[1270]: time="2025-07-14T23:29:13.711175395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 14 23:29:14.282289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863664203.mount: Deactivated successfully.
Jul 14 23:29:15.171294 env[1270]: time="2025-07-14T23:29:15.171264212Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.172095 env[1270]: time="2025-07-14T23:29:15.172077844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.173188 env[1270]: time="2025-07-14T23:29:15.173172739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.174228 env[1270]: time="2025-07-14T23:29:15.174213706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.174781 env[1270]: time="2025-07-14T23:29:15.174766807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 14 23:29:15.175130 env[1270]: time="2025-07-14T23:29:15.175119757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 14 23:29:15.615076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772130634.mount: Deactivated successfully.
Jul 14 23:29:15.617176 env[1270]: time="2025-07-14T23:29:15.617147441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.617684 env[1270]: time="2025-07-14T23:29:15.617668562Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.618357 env[1270]: time="2025-07-14T23:29:15.618343576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.619131 env[1270]: time="2025-07-14T23:29:15.619117043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 14 23:29:15.619494 env[1270]: time="2025-07-14T23:29:15.619479479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 14 23:29:15.620075 env[1270]: time="2025-07-14T23:29:15.620060472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 14 23:29:16.152272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32286872.mount: Deactivated successfully.
Jul 14 23:29:18.496550 env[1270]: time="2025-07-14T23:29:18.496520911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:18.513269 env[1270]: time="2025-07-14T23:29:18.513242051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:18.533234 env[1270]: time="2025-07-14T23:29:18.533205158Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:18.546793 env[1270]: time="2025-07-14T23:29:18.546775027Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:18.547418 env[1270]: time="2025-07-14T23:29:18.547398994Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 14 23:29:20.292513 update_engine[1241]: I0714 23:29:20.292025 1241 update_attempter.cc:509] Updating boot flags... Jul 14 23:29:20.307572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 23:29:20.307717 systemd[1]: Stopped kubelet.service. Jul 14 23:29:20.308980 systemd[1]: Starting kubelet.service... Jul 14 23:29:21.030250 systemd[1]: Started kubelet.service. 
Jul 14 23:29:21.062124 kubelet[1621]: E0714 23:29:21.062094 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 23:29:21.062949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 23:29:21.063027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 23:29:21.221738 systemd[1]: Stopped kubelet.service. Jul 14 23:29:21.223935 systemd[1]: Starting kubelet.service... Jul 14 23:29:21.241532 systemd[1]: Reloading. Jul 14 23:29:21.303402 /usr/lib/systemd/system-generators/torcx-generator[1655]: time="2025-07-14T23:29:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 23:29:21.303633 /usr/lib/systemd/system-generators/torcx-generator[1655]: time="2025-07-14T23:29:21Z" level=info msg="torcx already run" Jul 14 23:29:21.359992 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 23:29:21.360011 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 23:29:21.371496 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 23:29:21.496657 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 23:29:21.496725 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 23:29:21.496914 systemd[1]: Stopped kubelet.service. Jul 14 23:29:21.498655 systemd[1]: Starting kubelet.service... Jul 14 23:29:22.183994 systemd[1]: Started kubelet.service. Jul 14 23:29:22.265682 kubelet[1720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:29:22.265682 kubelet[1720]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 23:29:22.265682 kubelet[1720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 23:29:22.265950 kubelet[1720]: I0714 23:29:22.265717 1720 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 23:29:22.510455 kubelet[1720]: I0714 23:29:22.510396 1720 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 23:29:22.510455 kubelet[1720]: I0714 23:29:22.510417 1720 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 23:29:22.510903 kubelet[1720]: I0714 23:29:22.510889 1720 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 23:29:22.530595 kubelet[1720]: E0714 23:29:22.530580 1720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:22.531302 kubelet[1720]: I0714 23:29:22.531289 1720 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 23:29:22.540098 kubelet[1720]: E0714 23:29:22.540083 1720 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 23:29:22.540098 kubelet[1720]: I0714 23:29:22.540097 1720 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 23:29:22.542180 kubelet[1720]: I0714 23:29:22.542166 1720 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 23:29:22.543920 kubelet[1720]: I0714 23:29:22.543899 1720 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 23:29:22.544013 kubelet[1720]: I0714 23:29:22.543919 1720 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 23:29:22.544079 kubelet[1720]: I0714 23:29:22.544017 1720 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 14 23:29:22.544079 kubelet[1720]: I0714 23:29:22.544023 1720 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 23:29:22.544169 kubelet[1720]: I0714 23:29:22.544088 1720 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:29:22.547446 kubelet[1720]: I0714 23:29:22.547435 1720 kubelet.go:446] "Attempting to sync node with API server" Jul 14 23:29:22.547476 kubelet[1720]: I0714 23:29:22.547455 1720 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 23:29:22.547476 kubelet[1720]: I0714 23:29:22.547466 1720 kubelet.go:352] "Adding apiserver pod source" Jul 14 23:29:22.547476 kubelet[1720]: I0714 23:29:22.547472 1720 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 23:29:22.564447 kubelet[1720]: W0714 23:29:22.564329 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:22.564447 kubelet[1720]: E0714 23:29:22.564379 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:22.564549 kubelet[1720]: W0714 23:29:22.564439 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:22.564549 kubelet[1720]: E0714 23:29:22.564464 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:22.564549 kubelet[1720]: I0714 23:29:22.564510 1720 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 23:29:22.564803 kubelet[1720]: I0714 23:29:22.564787 1720 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 23:29:22.564856 kubelet[1720]: W0714 23:29:22.564834 1720 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 23:29:22.568724 kubelet[1720]: I0714 23:29:22.568706 1720 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 23:29:22.568775 kubelet[1720]: I0714 23:29:22.568731 1720 server.go:1287] "Started kubelet" Jul 14 23:29:22.580282 kubelet[1720]: I0714 23:29:22.580224 1720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 23:29:22.580587 kubelet[1720]: I0714 23:29:22.580574 1720 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 23:29:22.583074 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 14 23:29:22.583134 kubelet[1720]: I0714 23:29:22.583125 1720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 23:29:22.587197 kubelet[1720]: E0714 23:29:22.586132 1720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.107:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185241fda013a93a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 23:29:22.56871865 +0000 UTC m=+0.381184470,LastTimestamp:2025-07-14 23:29:22.56871865 +0000 UTC m=+0.381184470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 23:29:22.588573 kubelet[1720]: I0714 23:29:22.588556 1720 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 23:29:22.589211 kubelet[1720]: I0714 23:29:22.589201 1720 server.go:479] "Adding debug handlers to kubelet server" Jul 14 23:29:22.589799 kubelet[1720]: I0714 23:29:22.589787 1720 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 23:29:22.591957 kubelet[1720]: E0714 23:29:22.591947 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:22.592027 kubelet[1720]: I0714 23:29:22.592018 1720 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 23:29:22.592164 kubelet[1720]: I0714 23:29:22.592154 1720 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 23:29:22.592236 kubelet[1720]: I0714 23:29:22.592229 1720 
reconciler.go:26] "Reconciler: start to sync state" Jul 14 23:29:22.592484 kubelet[1720]: W0714 23:29:22.592463 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:22.592559 kubelet[1720]: E0714 23:29:22.592547 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:22.592855 kubelet[1720]: E0714 23:29:22.592842 1720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="200ms" Jul 14 23:29:22.592952 kubelet[1720]: E0714 23:29:22.592942 1720 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 23:29:22.593111 kubelet[1720]: I0714 23:29:22.593103 1720 factory.go:221] Registration of the systemd container factory successfully Jul 14 23:29:22.593193 kubelet[1720]: I0714 23:29:22.593184 1720 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 23:29:22.594018 kubelet[1720]: I0714 23:29:22.594009 1720 factory.go:221] Registration of the containerd container factory successfully Jul 14 23:29:22.611235 kubelet[1720]: I0714 23:29:22.611185 1720 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 14 23:29:22.612076 kubelet[1720]: I0714 23:29:22.612060 1720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 23:29:22.612076 kubelet[1720]: I0714 23:29:22.612073 1720 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 23:29:22.612146 kubelet[1720]: I0714 23:29:22.612084 1720 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 14 23:29:22.612146 kubelet[1720]: I0714 23:29:22.612088 1720 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 23:29:22.612146 kubelet[1720]: E0714 23:29:22.612110 1720 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 23:29:22.614246 kubelet[1720]: W0714 23:29:22.614207 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:22.614285 kubelet[1720]: E0714 23:29:22.614251 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:22.614483 kubelet[1720]: I0714 23:29:22.614470 1720 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 23:29:22.614483 kubelet[1720]: I0714 23:29:22.614480 1720 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 23:29:22.614521 kubelet[1720]: I0714 23:29:22.614489 1720 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:29:22.615398 kubelet[1720]: I0714 23:29:22.615386 1720 policy_none.go:49] "None 
policy: Start" Jul 14 23:29:22.615398 kubelet[1720]: I0714 23:29:22.615398 1720 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 23:29:22.615449 kubelet[1720]: I0714 23:29:22.615405 1720 state_mem.go:35] "Initializing new in-memory state store" Jul 14 23:29:22.618051 systemd[1]: Created slice kubepods.slice. Jul 14 23:29:22.620533 systemd[1]: Created slice kubepods-burstable.slice. Jul 14 23:29:22.622418 systemd[1]: Created slice kubepods-besteffort.slice. Jul 14 23:29:22.629426 kubelet[1720]: I0714 23:29:22.629410 1720 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 23:29:22.629600 kubelet[1720]: I0714 23:29:22.629589 1720 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 23:29:22.629686 kubelet[1720]: I0714 23:29:22.629654 1720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:29:22.630181 kubelet[1720]: I0714 23:29:22.630170 1720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:29:22.631190 kubelet[1720]: E0714 23:29:22.631178 1720 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 23:29:22.631286 kubelet[1720]: E0714 23:29:22.631273 1720 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 23:29:22.718689 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 14 23:29:22.731176 kubelet[1720]: I0714 23:29:22.731129 1720 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:29:22.731417 kubelet[1720]: E0714 23:29:22.731397 1720 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Jul 14 23:29:22.733941 kubelet[1720]: E0714 23:29:22.733929 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:22.735854 systemd[1]: Created slice kubepods-burstable-podabc141c29eba6ebf5f9741fb66c9046a.slice. Jul 14 23:29:22.737345 kubelet[1720]: E0714 23:29:22.737327 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:22.745139 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 14 23:29:22.746955 kubelet[1720]: E0714 23:29:22.746944 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:22.793434 kubelet[1720]: I0714 23:29:22.793404 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abc141c29eba6ebf5f9741fb66c9046a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abc141c29eba6ebf5f9741fb66c9046a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:22.793534 kubelet[1720]: I0714 23:29:22.793429 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:22.793534 kubelet[1720]: I0714 23:29:22.793490 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:22.793534 kubelet[1720]: I0714 23:29:22.793507 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:29:22.793534 kubelet[1720]: I0714 23:29:22.793520 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/abc141c29eba6ebf5f9741fb66c9046a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc141c29eba6ebf5f9741fb66c9046a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:22.793534 kubelet[1720]: I0714 23:29:22.793531 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abc141c29eba6ebf5f9741fb66c9046a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc141c29eba6ebf5f9741fb66c9046a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:22.793662 kubelet[1720]: I0714 23:29:22.793542 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:22.793662 kubelet[1720]: I0714 23:29:22.793566 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:22.793662 kubelet[1720]: I0714 23:29:22.793580 1720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:22.793737 kubelet[1720]: E0714 23:29:22.793680 1720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="400ms" Jul 14 23:29:22.932442 kubelet[1720]: I0714 23:29:22.932424 1720 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:29:22.932885 kubelet[1720]: E0714 23:29:22.932866 1720 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Jul 14 23:29:23.035688 env[1270]: time="2025-07-14T23:29:23.035427942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:23.037882 env[1270]: time="2025-07-14T23:29:23.037856009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abc141c29eba6ebf5f9741fb66c9046a,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:23.047595 env[1270]: time="2025-07-14T23:29:23.047463346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:23.194100 kubelet[1720]: E0714 23:29:23.194068 1720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="800ms" Jul 14 23:29:23.333888 kubelet[1720]: I0714 23:29:23.333679 1720 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:29:23.334128 kubelet[1720]: E0714 23:29:23.333910 1720 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: 
connection refused" node="localhost" Jul 14 23:29:23.509512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260093557.mount: Deactivated successfully. Jul 14 23:29:23.510539 env[1270]: time="2025-07-14T23:29:23.510494497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.511681 env[1270]: time="2025-07-14T23:29:23.511654858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.512438 env[1270]: time="2025-07-14T23:29:23.512404563Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.513459 env[1270]: time="2025-07-14T23:29:23.513444599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.515217 env[1270]: time="2025-07-14T23:29:23.515201349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.515749 env[1270]: time="2025-07-14T23:29:23.515734879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.516597 env[1270]: time="2025-07-14T23:29:23.516581270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.518520 env[1270]: 
time="2025-07-14T23:29:23.518508087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.519266 env[1270]: time="2025-07-14T23:29:23.519254820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.520350 env[1270]: time="2025-07-14T23:29:23.520326691Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.520786 env[1270]: time="2025-07-14T23:29:23.520769900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.521291 env[1270]: time="2025-07-14T23:29:23.521276004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:23.533405 env[1270]: time="2025-07-14T23:29:23.532126374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:29:23.533405 env[1270]: time="2025-07-14T23:29:23.532179464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:29:23.533405 env[1270]: time="2025-07-14T23:29:23.532193400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:29:23.533405 env[1270]: time="2025-07-14T23:29:23.532439214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e974d14989f8047238fc8310f68a3e1fa7e47a2ffb6efdbacc238615a888c041 pid=1765 runtime=io.containerd.runc.v2 Jul 14 23:29:23.535085 env[1270]: time="2025-07-14T23:29:23.535008632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:29:23.535268 env[1270]: time="2025-07-14T23:29:23.535244448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:29:23.535387 env[1270]: time="2025-07-14T23:29:23.535347276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:29:23.535574 env[1270]: time="2025-07-14T23:29:23.535551829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7fc6fc07996c952ae1a88cfe1e1e2d82519ad89bf21c48122589acfc146cd076 pid=1758 runtime=io.containerd.runc.v2 Jul 14 23:29:23.547033 systemd[1]: Started cri-containerd-7fc6fc07996c952ae1a88cfe1e1e2d82519ad89bf21c48122589acfc146cd076.scope. Jul 14 23:29:23.554054 systemd[1]: Started cri-containerd-e974d14989f8047238fc8310f68a3e1fa7e47a2ffb6efdbacc238615a888c041.scope. 
Jul 14 23:29:23.582475 kubelet[1720]: W0714 23:29:23.582431 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:23.582475 kubelet[1720]: E0714 23:29:23.582455 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:23.608201 env[1270]: time="2025-07-14T23:29:23.607023156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abc141c29eba6ebf5f9741fb66c9046a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fc6fc07996c952ae1a88cfe1e1e2d82519ad89bf21c48122589acfc146cd076\"" Jul 14 23:29:23.609480 env[1270]: time="2025-07-14T23:29:23.609445429Z" level=info msg="CreateContainer within sandbox \"7fc6fc07996c952ae1a88cfe1e1e2d82519ad89bf21c48122589acfc146cd076\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 23:29:23.612893 env[1270]: time="2025-07-14T23:29:23.612278044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e974d14989f8047238fc8310f68a3e1fa7e47a2ffb6efdbacc238615a888c041\"" Jul 14 23:29:23.613649 env[1270]: time="2025-07-14T23:29:23.613630698Z" level=info msg="CreateContainer within sandbox \"e974d14989f8047238fc8310f68a3e1fa7e47a2ffb6efdbacc238615a888c041\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 23:29:23.630712 env[1270]: time="2025-07-14T23:29:23.630686569Z" level=info msg="CreateContainer within sandbox 
\"e974d14989f8047238fc8310f68a3e1fa7e47a2ffb6efdbacc238615a888c041\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a81c06e3797a681cf720507c32b7361c4c89ad81cd411217f12151cb3f34104d\"" Jul 14 23:29:23.631139 env[1270]: time="2025-07-14T23:29:23.631110768Z" level=info msg="StartContainer for \"a81c06e3797a681cf720507c32b7361c4c89ad81cd411217f12151cb3f34104d\"" Jul 14 23:29:23.633229 env[1270]: time="2025-07-14T23:29:23.633206679Z" level=info msg="CreateContainer within sandbox \"7fc6fc07996c952ae1a88cfe1e1e2d82519ad89bf21c48122589acfc146cd076\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c245ed63873df7b143ba8554dbd4a3e18d9860d95dd78f0409b4dc00c3af0c7c\"" Jul 14 23:29:23.633518 env[1270]: time="2025-07-14T23:29:23.633498799Z" level=info msg="StartContainer for \"c245ed63873df7b143ba8554dbd4a3e18d9860d95dd78f0409b4dc00c3af0c7c\"" Jul 14 23:29:23.635626 env[1270]: time="2025-07-14T23:29:23.635599552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:29:23.635711 env[1270]: time="2025-07-14T23:29:23.635697044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:29:23.635778 env[1270]: time="2025-07-14T23:29:23.635764282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:29:23.635923 env[1270]: time="2025-07-14T23:29:23.635906834Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/610ee0fa212b014c9efcf59bc87e1c8a729a5cd35c624bfb650b8479c80772f4 pid=1842 runtime=io.containerd.runc.v2 Jul 14 23:29:23.653489 systemd[1]: Started cri-containerd-610ee0fa212b014c9efcf59bc87e1c8a729a5cd35c624bfb650b8479c80772f4.scope. 
Jul 14 23:29:23.659465 systemd[1]: Started cri-containerd-a81c06e3797a681cf720507c32b7361c4c89ad81cd411217f12151cb3f34104d.scope. Jul 14 23:29:23.673840 systemd[1]: Started cri-containerd-c245ed63873df7b143ba8554dbd4a3e18d9860d95dd78f0409b4dc00c3af0c7c.scope. Jul 14 23:29:23.690797 env[1270]: time="2025-07-14T23:29:23.690774090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"610ee0fa212b014c9efcf59bc87e1c8a729a5cd35c624bfb650b8479c80772f4\"" Jul 14 23:29:23.692123 env[1270]: time="2025-07-14T23:29:23.692105082Z" level=info msg="CreateContainer within sandbox \"610ee0fa212b014c9efcf59bc87e1c8a729a5cd35c624bfb650b8479c80772f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 23:29:23.696908 env[1270]: time="2025-07-14T23:29:23.696892655Z" level=info msg="CreateContainer within sandbox \"610ee0fa212b014c9efcf59bc87e1c8a729a5cd35c624bfb650b8479c80772f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"84f438cda4f86f046fd097692468298d0a916e93d551a516d7574c0477786ad4\"" Jul 14 23:29:23.697195 env[1270]: time="2025-07-14T23:29:23.697179503Z" level=info msg="StartContainer for \"84f438cda4f86f046fd097692468298d0a916e93d551a516d7574c0477786ad4\"" Jul 14 23:29:23.710002 env[1270]: time="2025-07-14T23:29:23.709981312Z" level=info msg="StartContainer for \"a81c06e3797a681cf720507c32b7361c4c89ad81cd411217f12151cb3f34104d\" returns successfully" Jul 14 23:29:23.721697 systemd[1]: Started cri-containerd-84f438cda4f86f046fd097692468298d0a916e93d551a516d7574c0477786ad4.scope. 
Jul 14 23:29:23.733249 env[1270]: time="2025-07-14T23:29:23.733222222Z" level=info msg="StartContainer for \"c245ed63873df7b143ba8554dbd4a3e18d9860d95dd78f0409b4dc00c3af0c7c\" returns successfully" Jul 14 23:29:23.756555 env[1270]: time="2025-07-14T23:29:23.756523759Z" level=info msg="StartContainer for \"84f438cda4f86f046fd097692468298d0a916e93d551a516d7574c0477786ad4\" returns successfully" Jul 14 23:29:23.776801 kubelet[1720]: W0714 23:29:23.776757 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:23.776801 kubelet[1720]: E0714 23:29:23.776804 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:23.995165 kubelet[1720]: E0714 23:29:23.995095 1720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="1.6s" Jul 14 23:29:24.132300 kubelet[1720]: W0714 23:29:24.132264 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:24.132300 kubelet[1720]: E0714 23:29:24.132303 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:24.135123 kubelet[1720]: I0714 23:29:24.135111 1720 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:29:24.135256 kubelet[1720]: E0714 23:29:24.135239 1720 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Jul 14 23:29:24.153551 kubelet[1720]: W0714 23:29:24.153528 1720 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Jul 14 23:29:24.153591 kubelet[1720]: E0714 23:29:24.153554 1720 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:24.592345 kubelet[1720]: E0714 23:29:24.592320 1720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.107:6443: connect: connection refused" logger="UnhandledError" Jul 14 23:29:24.617782 kubelet[1720]: E0714 23:29:24.617682 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jul 14 23:29:24.619253 kubelet[1720]: E0714 23:29:24.619244 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:24.620284 kubelet[1720]: E0714 23:29:24.620276 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:25.597178 kubelet[1720]: E0714 23:29:25.597158 1720 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 23:29:25.626199 kubelet[1720]: E0714 23:29:25.626174 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:25.627292 kubelet[1720]: E0714 23:29:25.627274 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:25.627424 kubelet[1720]: E0714 23:29:25.627389 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:25.736713 kubelet[1720]: I0714 23:29:25.736694 1720 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:29:25.746431 kubelet[1720]: I0714 23:29:25.746410 1720 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 23:29:25.746563 kubelet[1720]: E0714 23:29:25.746551 1720 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 23:29:25.752680 kubelet[1720]: E0714 23:29:25.752655 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:25.853608 kubelet[1720]: E0714 
23:29:25.853542 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:25.954537 kubelet[1720]: E0714 23:29:25.954507 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.055722 kubelet[1720]: E0714 23:29:26.055695 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.156491 kubelet[1720]: E0714 23:29:26.156420 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.257442 kubelet[1720]: E0714 23:29:26.257417 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.358420 kubelet[1720]: E0714 23:29:26.358389 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.459170 kubelet[1720]: E0714 23:29:26.459095 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.560033 kubelet[1720]: E0714 23:29:26.560005 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.623277 kubelet[1720]: E0714 23:29:26.623261 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:26.623963 kubelet[1720]: E0714 23:29:26.623633 1720 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 23:29:26.660257 kubelet[1720]: E0714 23:29:26.660236 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.762692 kubelet[1720]: E0714 
23:29:26.762627 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.863355 kubelet[1720]: E0714 23:29:26.863337 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:26.963929 kubelet[1720]: E0714 23:29:26.963909 1720 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 23:29:27.093051 kubelet[1720]: I0714 23:29:27.093030 1720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 23:29:27.099628 kubelet[1720]: I0714 23:29:27.099607 1720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:27.102317 kubelet[1720]: I0714 23:29:27.102302 1720 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:27.173009 systemd[1]: Reloading. Jul 14 23:29:27.225801 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-07-14T23:29:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 23:29:27.225820 /usr/lib/systemd/system-generators/torcx-generator[2003]: time="2025-07-14T23:29:27Z" level=info msg="torcx already run" Jul 14 23:29:27.281758 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 23:29:27.281908 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 14 23:29:27.293507 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 23:29:27.360619 systemd[1]: Stopping kubelet.service... Jul 14 23:29:27.371200 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 23:29:27.371422 systemd[1]: Stopped kubelet.service. Jul 14 23:29:27.373051 systemd[1]: Starting kubelet.service... Jul 14 23:29:28.413613 systemd[1]: Started kubelet.service. Jul 14 23:29:28.457508 kubelet[2067]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 23:29:28.457711 kubelet[2067]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 23:29:28.457752 kubelet[2067]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 23:29:28.457873 kubelet[2067]: I0714 23:29:28.457856 2067 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 23:29:28.468311 kubelet[2067]: I0714 23:29:28.468291 2067 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 23:29:28.468401 kubelet[2067]: I0714 23:29:28.468392 2067 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 23:29:28.468578 kubelet[2067]: I0714 23:29:28.468570 2067 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 23:29:28.470424 kubelet[2067]: I0714 23:29:28.470413 2067 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 23:29:28.471668 sudo[2080]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 14 23:29:28.471832 kubelet[2067]: I0714 23:29:28.471665 2067 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 23:29:28.471809 sudo[2080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 14 23:29:28.476490 kubelet[2067]: E0714 23:29:28.476464 2067 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 23:29:28.476490 kubelet[2067]: I0714 23:29:28.476480 2067 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 23:29:28.483690 kubelet[2067]: I0714 23:29:28.480146 2067 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 23:29:28.483690 kubelet[2067]: I0714 23:29:28.482058 2067 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 23:29:28.483690 kubelet[2067]: I0714 23:29:28.482079 2067 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 23:29:28.483690 kubelet[2067]: I0714 23:29:28.482227 2067 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 14 23:29:28.483868 kubelet[2067]: I0714 23:29:28.482236 2067 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 23:29:28.483868 kubelet[2067]: I0714 23:29:28.483514 2067 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:29:28.483868 kubelet[2067]: I0714 23:29:28.483626 2067 kubelet.go:446] "Attempting to sync node with API server" Jul 14 23:29:28.484383 kubelet[2067]: I0714 23:29:28.484368 2067 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 23:29:28.484421 kubelet[2067]: I0714 23:29:28.484390 2067 kubelet.go:352] "Adding apiserver pod source" Jul 14 23:29:28.484421 kubelet[2067]: I0714 23:29:28.484396 2067 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 23:29:28.488742 kubelet[2067]: I0714 23:29:28.488685 2067 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 23:29:28.492794 kubelet[2067]: I0714 23:29:28.492782 2067 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 23:29:28.498770 kubelet[2067]: I0714 23:29:28.498759 2067 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 23:29:28.498856 kubelet[2067]: I0714 23:29:28.498849 2067 server.go:1287] "Started kubelet" Jul 14 23:29:28.504741 kubelet[2067]: I0714 23:29:28.504610 2067 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 23:29:28.504817 kubelet[2067]: I0714 23:29:28.504808 2067 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 23:29:28.504866 kubelet[2067]: I0714 23:29:28.504850 2067 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 23:29:28.505293 kubelet[2067]: I0714 23:29:28.505284 2067 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 23:29:28.506190 kubelet[2067]: I0714 23:29:28.505622 2067 
server.go:479] "Adding debug handlers to kubelet server" Jul 14 23:29:28.506434 kubelet[2067]: I0714 23:29:28.506415 2067 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 23:29:28.509001 kubelet[2067]: E0714 23:29:28.508989 2067 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 23:29:28.511333 kubelet[2067]: I0714 23:29:28.511320 2067 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 23:29:28.511417 kubelet[2067]: I0714 23:29:28.511405 2067 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 23:29:28.511506 kubelet[2067]: I0714 23:29:28.511495 2067 reconciler.go:26] "Reconciler: start to sync state" Jul 14 23:29:28.512822 kubelet[2067]: I0714 23:29:28.512810 2067 factory.go:221] Registration of the systemd container factory successfully Jul 14 23:29:28.512895 kubelet[2067]: I0714 23:29:28.512880 2067 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 23:29:28.513857 kubelet[2067]: I0714 23:29:28.513846 2067 factory.go:221] Registration of the containerd container factory successfully Jul 14 23:29:28.518793 kubelet[2067]: I0714 23:29:28.518778 2067 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 23:29:28.519563 kubelet[2067]: I0714 23:29:28.519554 2067 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 23:29:28.519622 kubelet[2067]: I0714 23:29:28.519615 2067 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 23:29:28.519675 kubelet[2067]: I0714 23:29:28.519668 2067 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 14 23:29:28.519725 kubelet[2067]: I0714 23:29:28.519716 2067 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 23:29:28.520847 kubelet[2067]: E0714 23:29:28.520797 2067 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 23:29:28.564248 kubelet[2067]: I0714 23:29:28.564233 2067 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 23:29:28.565412 kubelet[2067]: I0714 23:29:28.565401 2067 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 23:29:28.565462 kubelet[2067]: I0714 23:29:28.565455 2067 state_mem.go:36] "Initialized new in-memory state store" Jul 14 23:29:28.565596 kubelet[2067]: I0714 23:29:28.565588 2067 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 23:29:28.565657 kubelet[2067]: I0714 23:29:28.565640 2067 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 23:29:28.565702 kubelet[2067]: I0714 23:29:28.565695 2067 policy_none.go:49] "None policy: Start" Jul 14 23:29:28.565749 kubelet[2067]: I0714 23:29:28.565742 2067 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 23:29:28.565796 kubelet[2067]: I0714 23:29:28.565789 2067 state_mem.go:35] "Initializing new in-memory state store" Jul 14 23:29:28.565916 kubelet[2067]: I0714 23:29:28.565909 2067 state_mem.go:75] "Updated machine memory state" Jul 14 23:29:28.572085 kubelet[2067]: I0714 23:29:28.569867 2067 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 23:29:28.572085 kubelet[2067]: I0714 
23:29:28.570223 2067 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 23:29:28.572085 kubelet[2067]: I0714 23:29:28.570231 2067 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 23:29:28.572085 kubelet[2067]: I0714 23:29:28.570903 2067 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 23:29:28.573095 kubelet[2067]: E0714 23:29:28.572857 2067 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 14 23:29:28.621972 kubelet[2067]: I0714 23:29:28.621950 2067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 23:29:28.622866 kubelet[2067]: I0714 23:29:28.622853 2067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:28.622981 kubelet[2067]: I0714 23:29:28.622969 2067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:28.625595 kubelet[2067]: E0714 23:29:28.625570 2067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 23:29:28.626455 kubelet[2067]: E0714 23:29:28.626425 2067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:28.626816 kubelet[2067]: E0714 23:29:28.626804 2067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:28.673754 kubelet[2067]: I0714 23:29:28.673697 2067 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 23:29:28.679576 kubelet[2067]: I0714 23:29:28.679549 2067 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Jul 14 23:29:28.679668 kubelet[2067]: I0714 23:29:28.679599 2067 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 23:29:28.713169 kubelet[2067]: I0714 23:29:28.713142 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abc141c29eba6ebf5f9741fb66c9046a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abc141c29eba6ebf5f9741fb66c9046a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:28.713341 kubelet[2067]: I0714 23:29:28.713330 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:28.713449 kubelet[2067]: I0714 23:29:28.713438 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:28.713516 kubelet[2067]: I0714 23:29:28.713506 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:28.713567 kubelet[2067]: I0714 23:29:28.713558 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/abc141c29eba6ebf5f9741fb66c9046a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc141c29eba6ebf5f9741fb66c9046a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:28.713634 kubelet[2067]: I0714 23:29:28.713621 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abc141c29eba6ebf5f9741fb66c9046a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abc141c29eba6ebf5f9741fb66c9046a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:28.713699 kubelet[2067]: I0714 23:29:28.713690 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:28.713752 kubelet[2067]: I0714 23:29:28.713743 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 23:29:28.713820 kubelet[2067]: I0714 23:29:28.713810 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 23:29:29.132691 sudo[2080]: pam_unix(sudo:session): session closed for user root Jul 14 23:29:29.496148 kubelet[2067]: I0714 23:29:29.496089 2067 apiserver.go:52] "Watching apiserver" Jul 14 23:29:29.554895 
kubelet[2067]: I0714 23:29:29.554877 2067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:29.555637 kubelet[2067]: I0714 23:29:29.555628 2067 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 23:29:29.571578 kubelet[2067]: E0714 23:29:29.571557 2067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 23:29:29.572527 kubelet[2067]: E0714 23:29:29.572515 2067 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 23:29:29.596751 kubelet[2067]: I0714 23:29:29.596718 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.596706917 podStartE2EDuration="2.596706917s" podCreationTimestamp="2025-07-14 23:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:29:29.583000274 +0000 UTC m=+1.157825632" watchObservedRunningTime="2025-07-14 23:29:29.596706917 +0000 UTC m=+1.171532270" Jul 14 23:29:29.604587 kubelet[2067]: I0714 23:29:29.604562 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.604551647 podStartE2EDuration="2.604551647s" podCreationTimestamp="2025-07-14 23:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:29:29.597013268 +0000 UTC m=+1.171838617" watchObservedRunningTime="2025-07-14 23:29:29.604551647 +0000 UTC m=+1.179377000" Jul 14 23:29:29.611578 kubelet[2067]: I0714 23:29:29.611564 2067 desired_state_of_world_populator.go:158] "Finished populating initial desired 
state of world" Jul 14 23:29:29.619742 kubelet[2067]: I0714 23:29:29.619718 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.619708526 podStartE2EDuration="2.619708526s" podCreationTimestamp="2025-07-14 23:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:29:29.604795658 +0000 UTC m=+1.179621006" watchObservedRunningTime="2025-07-14 23:29:29.619708526 +0000 UTC m=+1.194533876" Jul 14 23:29:30.434674 sudo[1436]: pam_unix(sudo:session): session closed for user root Jul 14 23:29:30.436343 sshd[1432]: pam_unix(sshd:session): session closed for user core Jul 14 23:29:30.438570 systemd-logind[1240]: Session 7 logged out. Waiting for processes to exit. Jul 14 23:29:30.438753 systemd[1]: sshd@4-139.178.70.107:22-139.178.89.65:36810.service: Deactivated successfully. Jul 14 23:29:30.439363 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 23:29:30.439475 systemd[1]: session-7.scope: Consumed 3.280s CPU time. Jul 14 23:29:30.440649 systemd-logind[1240]: Removed session 7. Jul 14 23:29:32.973375 kubelet[2067]: I0714 23:29:32.973345 2067 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 23:29:32.973660 env[1270]: time="2025-07-14T23:29:32.973558770Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 23:29:32.973808 kubelet[2067]: I0714 23:29:32.973666 2067 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 23:29:33.435114 systemd[1]: Created slice kubepods-besteffort-pod83a82827_50ed_4532_85de_b1a08aaf218f.slice. 
Jul 14 23:29:33.441088 kubelet[2067]: I0714 23:29:33.441058 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83a82827-50ed-4532-85de-b1a08aaf218f-lib-modules\") pod \"kube-proxy-w7k4b\" (UID: \"83a82827-50ed-4532-85de-b1a08aaf218f\") " pod="kube-system/kube-proxy-w7k4b" Jul 14 23:29:33.441230 kubelet[2067]: I0714 23:29:33.441218 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpwb\" (UniqueName: \"kubernetes.io/projected/83a82827-50ed-4532-85de-b1a08aaf218f-kube-api-access-7tpwb\") pod \"kube-proxy-w7k4b\" (UID: \"83a82827-50ed-4532-85de-b1a08aaf218f\") " pod="kube-system/kube-proxy-w7k4b" Jul 14 23:29:33.441344 kubelet[2067]: I0714 23:29:33.441334 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83a82827-50ed-4532-85de-b1a08aaf218f-kube-proxy\") pod \"kube-proxy-w7k4b\" (UID: \"83a82827-50ed-4532-85de-b1a08aaf218f\") " pod="kube-system/kube-proxy-w7k4b" Jul 14 23:29:33.441441 kubelet[2067]: I0714 23:29:33.441431 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83a82827-50ed-4532-85de-b1a08aaf218f-xtables-lock\") pod \"kube-proxy-w7k4b\" (UID: \"83a82827-50ed-4532-85de-b1a08aaf218f\") " pod="kube-system/kube-proxy-w7k4b" Jul 14 23:29:33.447218 systemd[1]: Created slice kubepods-burstable-pod20edd75d_d4cb_42f4_8f69_b13b554a1959.slice. 
Jul 14 23:29:33.542007 kubelet[2067]: I0714 23:29:33.541983 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-bpf-maps\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542141 kubelet[2067]: I0714 23:29:33.542130 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-net\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542201 kubelet[2067]: I0714 23:29:33.542192 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-kernel\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542277 kubelet[2067]: I0714 23:29:33.542267 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20edd75d-d4cb-42f4-8f69-b13b554a1959-clustermesh-secrets\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542332 kubelet[2067]: I0714 23:29:33.542323 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-config-path\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542396 kubelet[2067]: I0714 23:29:33.542387 2067 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-run\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542450 kubelet[2067]: I0714 23:29:33.542441 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-etc-cni-netd\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542534 kubelet[2067]: I0714 23:29:33.542523 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvsl\" (UniqueName: \"kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-kube-api-access-plvsl\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542595 kubelet[2067]: I0714 23:29:33.542586 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cni-path\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542655 kubelet[2067]: I0714 23:29:33.542645 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-hostproc\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542710 kubelet[2067]: I0714 23:29:33.542701 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-cgroup\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542762 kubelet[2067]: I0714 23:29:33.542754 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-hubble-tls\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542840 kubelet[2067]: I0714 23:29:33.542820 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-lib-modules\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.542901 kubelet[2067]: I0714 23:29:33.542892 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-xtables-lock\") pod \"cilium-h7lfd\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") " pod="kube-system/cilium-h7lfd" Jul 14 23:29:33.561492 kubelet[2067]: E0714 23:29:33.561462 2067 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 14 23:29:33.561492 kubelet[2067]: E0714 23:29:33.561491 2067 projected.go:194] Error preparing data for projected volume kube-api-access-7tpwb for pod kube-system/kube-proxy-w7k4b: configmap "kube-root-ca.crt" not found Jul 14 23:29:33.561615 kubelet[2067]: E0714 23:29:33.561551 2067 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/83a82827-50ed-4532-85de-b1a08aaf218f-kube-api-access-7tpwb podName:83a82827-50ed-4532-85de-b1a08aaf218f nodeName:}" failed. 
No retries permitted until 2025-07-14 23:29:34.061533238 +0000 UTC m=+5.636358590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7tpwb" (UniqueName: "kubernetes.io/projected/83a82827-50ed-4532-85de-b1a08aaf218f-kube-api-access-7tpwb") pod "kube-proxy-w7k4b" (UID: "83a82827-50ed-4532-85de-b1a08aaf218f") : configmap "kube-root-ca.crt" not found Jul 14 23:29:33.644727 kubelet[2067]: I0714 23:29:33.644677 2067 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 23:29:33.663682 kubelet[2067]: E0714 23:29:33.663654 2067 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 14 23:29:33.663682 kubelet[2067]: E0714 23:29:33.663678 2067 projected.go:194] Error preparing data for projected volume kube-api-access-plvsl for pod kube-system/cilium-h7lfd: configmap "kube-root-ca.crt" not found Jul 14 23:29:33.663845 kubelet[2067]: E0714 23:29:33.663769 2067 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-kube-api-access-plvsl podName:20edd75d-d4cb-42f4-8f69-b13b554a1959 nodeName:}" failed. No retries permitted until 2025-07-14 23:29:34.163752337 +0000 UTC m=+5.738577694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-plvsl" (UniqueName: "kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-kube-api-access-plvsl") pod "cilium-h7lfd" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959") : configmap "kube-root-ca.crt" not found Jul 14 23:29:33.696801 systemd[1]: Created slice kubepods-besteffort-podcebddc3f_6131_426b_8af7_64a883b43f0a.slice. 
Jul 14 23:29:33.699085 kubelet[2067]: I0714 23:29:33.699064 2067 status_manager.go:890] "Failed to get status for pod" podUID="cebddc3f-6131-426b-8af7-64a883b43f0a" pod="kube-system/cilium-operator-6c4d7847fc-cqxrg" err="pods \"cilium-operator-6c4d7847fc-cqxrg\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Jul 14 23:29:33.744556 kubelet[2067]: I0714 23:29:33.744535 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlct\" (UniqueName: \"kubernetes.io/projected/cebddc3f-6131-426b-8af7-64a883b43f0a-kube-api-access-gmlct\") pod \"cilium-operator-6c4d7847fc-cqxrg\" (UID: \"cebddc3f-6131-426b-8af7-64a883b43f0a\") " pod="kube-system/cilium-operator-6c4d7847fc-cqxrg" Jul 14 23:29:33.744725 kubelet[2067]: I0714 23:29:33.744714 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cebddc3f-6131-426b-8af7-64a883b43f0a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cqxrg\" (UID: \"cebddc3f-6131-426b-8af7-64a883b43f0a\") " pod="kube-system/cilium-operator-6c4d7847fc-cqxrg" Jul 14 23:29:33.999431 env[1270]: time="2025-07-14T23:29:33.999190886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cqxrg,Uid:cebddc3f-6131-426b-8af7-64a883b43f0a,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:34.230290 env[1270]: time="2025-07-14T23:29:34.230145850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:29:34.230290 env[1270]: time="2025-07-14T23:29:34.230177836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:29:34.230290 env[1270]: time="2025-07-14T23:29:34.230189405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:29:34.230789 env[1270]: time="2025-07-14T23:29:34.230305547Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489 pid=2149 runtime=io.containerd.runc.v2 Jul 14 23:29:34.242626 systemd[1]: Started cri-containerd-848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489.scope. Jul 14 23:29:34.273639 env[1270]: time="2025-07-14T23:29:34.273608261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cqxrg,Uid:cebddc3f-6131-426b-8af7-64a883b43f0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489\"" Jul 14 23:29:34.279032 env[1270]: time="2025-07-14T23:29:34.279009459Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 23:29:34.345225 env[1270]: time="2025-07-14T23:29:34.345200369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w7k4b,Uid:83a82827-50ed-4532-85de-b1a08aaf218f,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:34.349784 env[1270]: time="2025-07-14T23:29:34.349766587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h7lfd,Uid:20edd75d-d4cb-42f4-8f69-b13b554a1959,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:34.387945 env[1270]: time="2025-07-14T23:29:34.387886660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:29:34.387945 env[1270]: time="2025-07-14T23:29:34.387929647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:29:34.388097 env[1270]: time="2025-07-14T23:29:34.388070528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:29:34.388313 env[1270]: time="2025-07-14T23:29:34.388273349Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e7a527d0a99fba44b9d9f87d8c4c31dae3e657a51c297c42bb917b83a67ed57 pid=2191 runtime=io.containerd.runc.v2 Jul 14 23:29:34.392151 env[1270]: time="2025-07-14T23:29:34.392103191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:29:34.392229 env[1270]: time="2025-07-14T23:29:34.392129921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:29:34.392229 env[1270]: time="2025-07-14T23:29:34.392138527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:29:34.392229 env[1270]: time="2025-07-14T23:29:34.392207195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036 pid=2214 runtime=io.containerd.runc.v2 Jul 14 23:29:34.398957 systemd[1]: Started cri-containerd-3e7a527d0a99fba44b9d9f87d8c4c31dae3e657a51c297c42bb917b83a67ed57.scope. Jul 14 23:29:34.411590 systemd[1]: Started cri-containerd-09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036.scope. 
Jul 14 23:29:34.437819 env[1270]: time="2025-07-14T23:29:34.437776875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h7lfd,Uid:20edd75d-d4cb-42f4-8f69-b13b554a1959,Namespace:kube-system,Attempt:0,} returns sandbox id \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\"" Jul 14 23:29:34.438764 env[1270]: time="2025-07-14T23:29:34.438741626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w7k4b,Uid:83a82827-50ed-4532-85de-b1a08aaf218f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e7a527d0a99fba44b9d9f87d8c4c31dae3e657a51c297c42bb917b83a67ed57\"" Jul 14 23:29:34.443607 env[1270]: time="2025-07-14T23:29:34.443574746Z" level=info msg="CreateContainer within sandbox \"3e7a527d0a99fba44b9d9f87d8c4c31dae3e657a51c297c42bb917b83a67ed57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 23:29:34.453584 env[1270]: time="2025-07-14T23:29:34.453550737Z" level=info msg="CreateContainer within sandbox \"3e7a527d0a99fba44b9d9f87d8c4c31dae3e657a51c297c42bb917b83a67ed57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50f863530422af83e1f9705e00792ab72dfc33ee904518e0a2220d48e50bf2a0\"" Jul 14 23:29:34.455325 env[1270]: time="2025-07-14T23:29:34.455213091Z" level=info msg="StartContainer for \"50f863530422af83e1f9705e00792ab72dfc33ee904518e0a2220d48e50bf2a0\"" Jul 14 23:29:34.467155 systemd[1]: Started cri-containerd-50f863530422af83e1f9705e00792ab72dfc33ee904518e0a2220d48e50bf2a0.scope. 
Jul 14 23:29:34.502518 env[1270]: time="2025-07-14T23:29:34.502465488Z" level=info msg="StartContainer for \"50f863530422af83e1f9705e00792ab72dfc33ee904518e0a2220d48e50bf2a0\" returns successfully" Jul 14 23:29:35.138038 kubelet[2067]: I0714 23:29:35.137984 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w7k4b" podStartSLOduration=2.137969645 podStartE2EDuration="2.137969645s" podCreationTimestamp="2025-07-14 23:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:29:35.075768697 +0000 UTC m=+6.650594066" watchObservedRunningTime="2025-07-14 23:29:35.137969645 +0000 UTC m=+6.712795005" Jul 14 23:29:36.370314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2071489903.mount: Deactivated successfully. Jul 14 23:29:37.080740 env[1270]: time="2025-07-14T23:29:37.080711052Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:37.082045 env[1270]: time="2025-07-14T23:29:37.082027272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:37.085994 env[1270]: time="2025-07-14T23:29:37.085967326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:37.086612 env[1270]: time="2025-07-14T23:29:37.086594172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns 
image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 14 23:29:37.088149 env[1270]: time="2025-07-14T23:29:37.088127598Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 23:29:37.088819 env[1270]: time="2025-07-14T23:29:37.088800068Z" level=info msg="CreateContainer within sandbox \"848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 23:29:37.112725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069636829.mount: Deactivated successfully. Jul 14 23:29:37.128338 env[1270]: time="2025-07-14T23:29:37.128298543Z" level=info msg="CreateContainer within sandbox \"848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\"" Jul 14 23:29:37.128712 env[1270]: time="2025-07-14T23:29:37.128656462Z" level=info msg="StartContainer for \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\"" Jul 14 23:29:37.145468 systemd[1]: Started cri-containerd-f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b.scope. 
Jul 14 23:29:37.174869 env[1270]: time="2025-07-14T23:29:37.174823990Z" level=info msg="StartContainer for \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\" returns successfully" Jul 14 23:29:37.577054 kubelet[2067]: I0714 23:29:37.577021 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cqxrg" podStartSLOduration=1.763575007 podStartE2EDuration="4.577009328s" podCreationTimestamp="2025-07-14 23:29:33 +0000 UTC" firstStartedPulling="2025-07-14 23:29:34.274353068 +0000 UTC m=+5.849178420" lastFinishedPulling="2025-07-14 23:29:37.087787391 +0000 UTC m=+8.662612741" observedRunningTime="2025-07-14 23:29:37.576673441 +0000 UTC m=+9.151498792" watchObservedRunningTime="2025-07-14 23:29:37.577009328 +0000 UTC m=+9.151834678" Jul 14 23:29:44.286049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733081504.mount: Deactivated successfully. Jul 14 23:29:48.172032 env[1270]: time="2025-07-14T23:29:48.171996390Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:48.174724 env[1270]: time="2025-07-14T23:29:48.174702110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:48.176092 env[1270]: time="2025-07-14T23:29:48.176075390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 23:29:48.176510 env[1270]: time="2025-07-14T23:29:48.176493409Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 14 23:29:48.178669 env[1270]: time="2025-07-14T23:29:48.178648235Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 23:29:48.210680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170835322.mount: Deactivated successfully. Jul 14 23:29:48.219343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount124218186.mount: Deactivated successfully. Jul 14 23:29:48.323925 env[1270]: time="2025-07-14T23:29:48.323886723Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\"" Jul 14 23:29:48.324525 env[1270]: time="2025-07-14T23:29:48.324511242Z" level=info msg="StartContainer for \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\"" Jul 14 23:29:48.338424 systemd[1]: Started cri-containerd-d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f.scope. Jul 14 23:29:48.362436 env[1270]: time="2025-07-14T23:29:48.362409332Z" level=info msg="StartContainer for \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\" returns successfully" Jul 14 23:29:48.368891 systemd[1]: cri-containerd-d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f.scope: Deactivated successfully. 
Jul 14 23:29:48.711305 env[1270]: time="2025-07-14T23:29:48.711265458Z" level=info msg="shim disconnected" id=d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f Jul 14 23:29:48.711305 env[1270]: time="2025-07-14T23:29:48.711302058Z" level=warning msg="cleaning up after shim disconnected" id=d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f namespace=k8s.io Jul 14 23:29:48.711953 env[1270]: time="2025-07-14T23:29:48.711311782Z" level=info msg="cleaning up dead shim" Jul 14 23:29:48.716482 env[1270]: time="2025-07-14T23:29:48.716451040Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:29:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2519 runtime=io.containerd.runc.v2\n" Jul 14 23:29:49.203088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f-rootfs.mount: Deactivated successfully. Jul 14 23:29:49.605392 env[1270]: time="2025-07-14T23:29:49.605365847Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 23:29:49.623267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955808922.mount: Deactivated successfully. Jul 14 23:29:49.630768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150514807.mount: Deactivated successfully. 
Jul 14 23:29:49.633911 env[1270]: time="2025-07-14T23:29:49.633786504Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\"" Jul 14 23:29:49.634636 env[1270]: time="2025-07-14T23:29:49.634610977Z" level=info msg="StartContainer for \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\"" Jul 14 23:29:49.645476 systemd[1]: Started cri-containerd-34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af.scope. Jul 14 23:29:49.668947 env[1270]: time="2025-07-14T23:29:49.668913271Z" level=info msg="StartContainer for \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\" returns successfully" Jul 14 23:29:49.677775 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 23:29:49.677988 systemd[1]: Stopped systemd-sysctl.service. Jul 14 23:29:49.678214 systemd[1]: Stopping systemd-sysctl.service... Jul 14 23:29:49.679592 systemd[1]: Starting systemd-sysctl.service... Jul 14 23:29:49.682083 systemd[1]: cri-containerd-34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af.scope: Deactivated successfully. 
Jul 14 23:29:49.902620 env[1270]: time="2025-07-14T23:29:49.902533255Z" level=info msg="shim disconnected" id=34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af Jul 14 23:29:49.903042 env[1270]: time="2025-07-14T23:29:49.902973370Z" level=warning msg="cleaning up after shim disconnected" id=34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af namespace=k8s.io Jul 14 23:29:49.903217 env[1270]: time="2025-07-14T23:29:49.903205152Z" level=info msg="cleaning up dead shim" Jul 14 23:29:49.907805 env[1270]: time="2025-07-14T23:29:49.907789611Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:29:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2584 runtime=io.containerd.runc.v2\n" Jul 14 23:29:49.918481 systemd[1]: Finished systemd-sysctl.service. Jul 14 23:29:50.606419 env[1270]: time="2025-07-14T23:29:50.606396449Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 23:29:50.648039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061039624.mount: Deactivated successfully. Jul 14 23:29:50.673887 env[1270]: time="2025-07-14T23:29:50.673858993Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\"" Jul 14 23:29:50.674232 env[1270]: time="2025-07-14T23:29:50.674219556Z" level=info msg="StartContainer for \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\"" Jul 14 23:29:50.685589 systemd[1]: Started cri-containerd-b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68.scope. 
Jul 14 23:29:50.716895 env[1270]: time="2025-07-14T23:29:50.716865123Z" level=info msg="StartContainer for \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\" returns successfully" Jul 14 23:29:50.735604 systemd[1]: cri-containerd-b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68.scope: Deactivated successfully. Jul 14 23:29:50.774231 env[1270]: time="2025-07-14T23:29:50.774196460Z" level=info msg="shim disconnected" id=b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68 Jul 14 23:29:50.774415 env[1270]: time="2025-07-14T23:29:50.774403478Z" level=warning msg="cleaning up after shim disconnected" id=b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68 namespace=k8s.io Jul 14 23:29:50.774481 env[1270]: time="2025-07-14T23:29:50.774471105Z" level=info msg="cleaning up dead shim" Jul 14 23:29:50.779426 env[1270]: time="2025-07-14T23:29:50.779403621Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:29:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2640 runtime=io.containerd.runc.v2\n" Jul 14 23:29:51.609524 env[1270]: time="2025-07-14T23:29:51.609487264Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 23:29:51.672391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802325663.mount: Deactivated successfully. Jul 14 23:29:51.675917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536746684.mount: Deactivated successfully. 
Jul 14 23:29:51.712595 env[1270]: time="2025-07-14T23:29:51.712556781Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\"" Jul 14 23:29:51.713707 env[1270]: time="2025-07-14T23:29:51.712943939Z" level=info msg="StartContainer for \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\"" Jul 14 23:29:51.723742 systemd[1]: Started cri-containerd-4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536.scope. Jul 14 23:29:51.744021 systemd[1]: cri-containerd-4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536.scope: Deactivated successfully. Jul 14 23:29:51.745155 env[1270]: time="2025-07-14T23:29:51.745099128Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20edd75d_d4cb_42f4_8f69_b13b554a1959.slice/cri-containerd-4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536.scope/memory.events\": no such file or directory" Jul 14 23:29:51.750953 env[1270]: time="2025-07-14T23:29:51.750917760Z" level=info msg="StartContainer for \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\" returns successfully" Jul 14 23:29:51.835729 env[1270]: time="2025-07-14T23:29:51.835695481Z" level=info msg="shim disconnected" id=4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536 Jul 14 23:29:51.835729 env[1270]: time="2025-07-14T23:29:51.835724811Z" level=warning msg="cleaning up after shim disconnected" id=4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536 namespace=k8s.io Jul 14 23:29:51.835729 env[1270]: time="2025-07-14T23:29:51.835733233Z" level=info msg="cleaning up dead shim" Jul 14 23:29:51.840333 env[1270]: time="2025-07-14T23:29:51.840312555Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:29:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2697 runtime=io.containerd.runc.v2\n"
Jul 14 23:29:52.613321 env[1270]: time="2025-07-14T23:29:52.613272796Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 23:29:52.656187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986197984.mount: Deactivated successfully. Jul 14 23:29:52.661676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82685150.mount: Deactivated successfully. Jul 14 23:29:52.666103 env[1270]: time="2025-07-14T23:29:52.666049595Z" level=info msg="CreateContainer within sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\"" Jul 14 23:29:52.666648 env[1270]: time="2025-07-14T23:29:52.666613889Z" level=info msg="StartContainer for \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\"" Jul 14 23:29:52.681143 systemd[1]: Started cri-containerd-8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d.scope. Jul 14 23:29:52.733155 env[1270]: time="2025-07-14T23:29:52.733121410Z" level=info msg="StartContainer for \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\" returns successfully" Jul 14 23:29:52.951592 kubelet[2067]: I0714 23:29:52.951522 2067 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 23:29:53.119744 systemd[1]: Created slice kubepods-burstable-pod54d0d611_0cc9_4dae_9da1_21b43f472d5d.slice. Jul 14 23:29:53.124562 systemd[1]: Created slice kubepods-burstable-podfc535e6a_7687_4c82_8a2f_a93b85215ff4.slice.
Jul 14 23:29:53.138860 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Jul 14 23:29:53.283135 kubelet[2067]: I0714 23:29:53.283116 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54d0d611-0cc9-4dae-9da1-21b43f472d5d-config-volume\") pod \"coredns-668d6bf9bc-n454f\" (UID: \"54d0d611-0cc9-4dae-9da1-21b43f472d5d\") " pod="kube-system/coredns-668d6bf9bc-n454f" Jul 14 23:29:53.283282 kubelet[2067]: I0714 23:29:53.283268 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jbmg\" (UniqueName: \"kubernetes.io/projected/fc535e6a-7687-4c82-8a2f-a93b85215ff4-kube-api-access-2jbmg\") pod \"coredns-668d6bf9bc-lq764\" (UID: \"fc535e6a-7687-4c82-8a2f-a93b85215ff4\") " pod="kube-system/coredns-668d6bf9bc-lq764" Jul 14 23:29:53.283366 kubelet[2067]: I0714 23:29:53.283354 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fc535e6a-7687-4c82-8a2f-a93b85215ff4-config-volume\") pod \"coredns-668d6bf9bc-lq764\" (UID: \"fc535e6a-7687-4c82-8a2f-a93b85215ff4\") " pod="kube-system/coredns-668d6bf9bc-lq764" Jul 14 23:29:53.283442 kubelet[2067]: I0714 23:29:53.283433 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq229\" (UniqueName: \"kubernetes.io/projected/54d0d611-0cc9-4dae-9da1-21b43f472d5d-kube-api-access-xq229\") pod \"coredns-668d6bf9bc-n454f\" (UID: \"54d0d611-0cc9-4dae-9da1-21b43f472d5d\") " pod="kube-system/coredns-668d6bf9bc-n454f" Jul 14 23:29:53.398841 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 14 23:29:53.444777 env[1270]: time="2025-07-14T23:29:53.444635948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n454f,Uid:54d0d611-0cc9-4dae-9da1-21b43f472d5d,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:53.444777 env[1270]: time="2025-07-14T23:29:53.444662289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lq764,Uid:fc535e6a-7687-4c82-8a2f-a93b85215ff4,Namespace:kube-system,Attempt:0,}" Jul 14 23:29:53.633515 kubelet[2067]: I0714 23:29:53.633399 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h7lfd" podStartSLOduration=6.895296135 podStartE2EDuration="20.633383217s" podCreationTimestamp="2025-07-14 23:29:33 +0000 UTC" firstStartedPulling="2025-07-14 23:29:34.439123863 +0000 UTC m=+6.013949216" lastFinishedPulling="2025-07-14 23:29:48.177210948 +0000 UTC m=+19.752036298" observedRunningTime="2025-07-14 23:29:53.633241626 +0000 UTC m=+25.208066984" watchObservedRunningTime="2025-07-14 23:29:53.633383217 +0000 UTC m=+25.208208570" Jul 14 23:30:16.150488 systemd-networkd[1063]: cilium_host: Link UP Jul 14 23:30:16.150616 systemd-networkd[1063]: cilium_net: Link UP Jul 14 23:30:16.152469 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 14 23:30:16.152502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 14 23:30:16.151616 systemd-networkd[1063]: cilium_net: Gained carrier Jul 14 23:30:16.152638 systemd-networkd[1063]: cilium_host: Gained carrier Jul 14 23:30:16.310187 systemd-networkd[1063]: cilium_vxlan: Link UP Jul 14 23:30:16.310192 systemd-networkd[1063]: cilium_vxlan: Gained carrier Jul 14 23:30:16.597944 systemd-networkd[1063]: cilium_net: Gained IPv6LL Jul 14 23:30:16.733970 systemd-networkd[1063]: cilium_host: Gained IPv6LL Jul 14 23:30:17.965849 kernel: NET: Registered PF_ALG protocol family Jul 14 23:30:18.077933 systemd-networkd[1063]: cilium_vxlan: Gained IPv6LL
Jul 14 23:30:18.582095 systemd-networkd[1063]: lxc_health: Link UP Jul 14 23:30:18.601238 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 14 23:30:18.600787 systemd-networkd[1063]: lxc_health: Gained carrier Jul 14 23:30:19.066416 systemd-networkd[1063]: lxc7f8c9fdff365: Link UP Jul 14 23:30:19.077413 kernel: eth0: renamed from tmpf5890 Jul 14 23:30:19.091356 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7f8c9fdff365: link becomes ready Jul 14 23:30:19.091391 kernel: eth0: renamed from tmpdccbb Jul 14 23:30:19.091407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2a35731c6bc1: link becomes ready Jul 14 23:30:19.078361 systemd-networkd[1063]: lxc2a35731c6bc1: Link UP Jul 14 23:30:19.083283 systemd-networkd[1063]: lxc7f8c9fdff365: Gained carrier Jul 14 23:30:19.090262 systemd-networkd[1063]: lxc2a35731c6bc1: Gained carrier Jul 14 23:30:20.061933 systemd-networkd[1063]: lxc_health: Gained IPv6LL Jul 14 23:30:20.253945 systemd-networkd[1063]: lxc2a35731c6bc1: Gained IPv6LL Jul 14 23:30:20.637966 systemd-networkd[1063]: lxc7f8c9fdff365: Gained IPv6LL Jul 14 23:30:21.754274 env[1270]: time="2025-07-14T23:30:21.750944904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:30:21.754274 env[1270]: time="2025-07-14T23:30:21.750975182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:30:21.754274 env[1270]: time="2025-07-14T23:30:21.751067388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:30:21.754274 env[1270]: time="2025-07-14T23:30:21.751276143Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dccbb3d728717c288d55b42b1c2225a7d1a42c7ebc6260182ca99dfb64a4611a pid=3252 runtime=io.containerd.runc.v2 Jul 14 23:30:21.760021 env[1270]: time="2025-07-14T23:30:21.759604464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 23:30:21.760021 env[1270]: time="2025-07-14T23:30:21.759641148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 23:30:21.760021 env[1270]: time="2025-07-14T23:30:21.759648278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 23:30:21.760021 env[1270]: time="2025-07-14T23:30:21.759740743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f58902cce9ed0fb35077aeb2af08ffbb42c51ac69c6d73d18983e418434d6a8f pid=3271 runtime=io.containerd.runc.v2 Jul 14 23:30:21.773704 systemd[1]: Started cri-containerd-f58902cce9ed0fb35077aeb2af08ffbb42c51ac69c6d73d18983e418434d6a8f.scope. Jul 14 23:30:21.779834 systemd[1]: run-containerd-runc-k8s.io-f58902cce9ed0fb35077aeb2af08ffbb42c51ac69c6d73d18983e418434d6a8f-runc.PuNFcw.mount: Deactivated successfully. Jul 14 23:30:21.789552 systemd[1]: Started cri-containerd-dccbb3d728717c288d55b42b1c2225a7d1a42c7ebc6260182ca99dfb64a4611a.scope.
Jul 14 23:30:21.808847 systemd-resolved[1203]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:30:21.809228 systemd-resolved[1203]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 23:30:21.834583 env[1270]: time="2025-07-14T23:30:21.834550253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lq764,Uid:fc535e6a-7687-4c82-8a2f-a93b85215ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f58902cce9ed0fb35077aeb2af08ffbb42c51ac69c6d73d18983e418434d6a8f\"" Jul 14 23:30:21.838663 env[1270]: time="2025-07-14T23:30:21.838102581Z" level=info msg="CreateContainer within sandbox \"f58902cce9ed0fb35077aeb2af08ffbb42c51ac69c6d73d18983e418434d6a8f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 23:30:21.852813 env[1270]: time="2025-07-14T23:30:21.852785870Z" level=info msg="CreateContainer within sandbox \"f58902cce9ed0fb35077aeb2af08ffbb42c51ac69c6d73d18983e418434d6a8f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8b56f37ab92adf13a8f522ab04b88366eef7475e696b31803fe1a88178c8ba9\"" Jul 14 23:30:21.855792 env[1270]: time="2025-07-14T23:30:21.853436752Z" level=info msg="StartContainer for \"e8b56f37ab92adf13a8f522ab04b88366eef7475e696b31803fe1a88178c8ba9\"" Jul 14 23:30:21.855792 env[1270]: time="2025-07-14T23:30:21.855765740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n454f,Uid:54d0d611-0cc9-4dae-9da1-21b43f472d5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dccbb3d728717c288d55b42b1c2225a7d1a42c7ebc6260182ca99dfb64a4611a\"" Jul 14 23:30:21.857317 env[1270]: time="2025-07-14T23:30:21.857292772Z" level=info msg="CreateContainer within sandbox \"dccbb3d728717c288d55b42b1c2225a7d1a42c7ebc6260182ca99dfb64a4611a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 23:30:21.876084 env[1270]: time="2025-07-14T23:30:21.876052048Z" level=info msg="CreateContainer within sandbox \"dccbb3d728717c288d55b42b1c2225a7d1a42c7ebc6260182ca99dfb64a4611a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37682769a2790acaee0384bbbba9271c1a87a4df8a8db0fec11fb8ca712b2126\"" Jul 14 23:30:21.876639 env[1270]: time="2025-07-14T23:30:21.876619509Z" level=info msg="StartContainer for \"37682769a2790acaee0384bbbba9271c1a87a4df8a8db0fec11fb8ca712b2126\"" Jul 14 23:30:21.887101 systemd[1]: Started cri-containerd-e8b56f37ab92adf13a8f522ab04b88366eef7475e696b31803fe1a88178c8ba9.scope. Jul 14 23:30:21.899868 systemd[1]: Started cri-containerd-37682769a2790acaee0384bbbba9271c1a87a4df8a8db0fec11fb8ca712b2126.scope. Jul 14 23:30:21.952569 env[1270]: time="2025-07-14T23:30:21.952532551Z" level=info msg="StartContainer for \"37682769a2790acaee0384bbbba9271c1a87a4df8a8db0fec11fb8ca712b2126\" returns successfully" Jul 14 23:30:21.953546 env[1270]: time="2025-07-14T23:30:21.953531827Z" level=info msg="StartContainer for \"e8b56f37ab92adf13a8f522ab04b88366eef7475e696b31803fe1a88178c8ba9\" returns successfully" Jul 14 23:30:22.672752 kubelet[2067]: I0714 23:30:22.672712 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n454f" podStartSLOduration=49.672693431 podStartE2EDuration="49.672693431s" podCreationTimestamp="2025-07-14 23:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:30:22.671238553 +0000 UTC m=+54.246063911" watchObservedRunningTime="2025-07-14 23:30:22.672693431 +0000 UTC m=+54.247518781" Jul 14 23:30:47.539337 systemd[1]: Started sshd@5-139.178.70.107:22-139.178.89.65:43928.service.
Jul 14 23:30:47.599399 sshd[3420]: Accepted publickey for core from 139.178.89.65 port 43928 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:30:47.600717 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:30:47.605156 systemd[1]: Started session-8.scope. Jul 14 23:30:47.605428 systemd-logind[1240]: New session 8 of user core. Jul 14 23:30:47.899130 sshd[3420]: pam_unix(sshd:session): session closed for user core Jul 14 23:30:47.901626 systemd[1]: sshd@5-139.178.70.107:22-139.178.89.65:43928.service: Deactivated successfully. Jul 14 23:30:47.902179 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 23:30:47.902457 systemd-logind[1240]: Session 8 logged out. Waiting for processes to exit. Jul 14 23:30:47.902973 systemd-logind[1240]: Removed session 8. Jul 14 23:30:52.902155 systemd[1]: Started sshd@6-139.178.70.107:22-139.178.89.65:40342.service. Jul 14 23:30:53.090736 sshd[3432]: Accepted publickey for core from 139.178.89.65 port 40342 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:30:53.091876 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:30:53.143444 systemd[1]: Started session-9.scope. Jul 14 23:30:53.143660 systemd-logind[1240]: New session 9 of user core. Jul 14 23:30:53.455810 sshd[3432]: pam_unix(sshd:session): session closed for user core Jul 14 23:30:53.457777 systemd[1]: sshd@6-139.178.70.107:22-139.178.89.65:40342.service: Deactivated successfully. Jul 14 23:30:53.458232 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 23:30:53.458466 systemd-logind[1240]: Session 9 logged out. Waiting for processes to exit. Jul 14 23:30:53.458928 systemd-logind[1240]: Removed session 9. Jul 14 23:30:58.460120 systemd[1]: Started sshd@7-139.178.70.107:22-139.178.89.65:40346.service. 
Jul 14 23:30:58.495622 sshd[3446]: Accepted publickey for core from 139.178.89.65 port 40346 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:30:58.496983 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:30:58.499865 systemd-logind[1240]: New session 10 of user core. Jul 14 23:30:58.500611 systemd[1]: Started session-10.scope. Jul 14 23:30:58.605998 sshd[3446]: pam_unix(sshd:session): session closed for user core Jul 14 23:30:58.607934 systemd[1]: sshd@7-139.178.70.107:22-139.178.89.65:40346.service: Deactivated successfully. Jul 14 23:30:58.608376 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 23:30:58.608869 systemd-logind[1240]: Session 10 logged out. Waiting for processes to exit. Jul 14 23:30:58.609458 systemd-logind[1240]: Removed session 10. Jul 14 23:31:03.609442 systemd[1]: Started sshd@8-139.178.70.107:22-139.178.89.65:36636.service. Jul 14 23:31:03.641907 sshd[3458]: Accepted publickey for core from 139.178.89.65 port 36636 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:03.643088 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:03.646064 systemd[1]: Started session-11.scope. Jul 14 23:31:03.646962 systemd-logind[1240]: New session 11 of user core. Jul 14 23:31:03.749142 sshd[3458]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:03.751955 systemd[1]: Started sshd@9-139.178.70.107:22-139.178.89.65:36640.service. Jul 14 23:31:03.755312 systemd-logind[1240]: Session 11 logged out. Waiting for processes to exit. Jul 14 23:31:03.755626 systemd[1]: sshd@8-139.178.70.107:22-139.178.89.65:36636.service: Deactivated successfully. Jul 14 23:31:03.756125 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 23:31:03.756985 systemd-logind[1240]: Removed session 11. 
Jul 14 23:31:03.787199 sshd[3469]: Accepted publickey for core from 139.178.89.65 port 36640 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:03.788472 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:03.792142 systemd[1]: Started session-12.scope. Jul 14 23:31:03.792606 systemd-logind[1240]: New session 12 of user core. Jul 14 23:31:03.913737 sshd[3469]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:03.918213 systemd[1]: Started sshd@10-139.178.70.107:22-139.178.89.65:36644.service. Jul 14 23:31:03.924099 systemd[1]: sshd@9-139.178.70.107:22-139.178.89.65:36640.service: Deactivated successfully. Jul 14 23:31:03.924618 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 23:31:03.925622 systemd-logind[1240]: Session 12 logged out. Waiting for processes to exit. Jul 14 23:31:03.926313 systemd-logind[1240]: Removed session 12. Jul 14 23:31:03.956771 sshd[3479]: Accepted publickey for core from 139.178.89.65 port 36644 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:03.957930 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:03.960995 systemd[1]: Started session-13.scope. Jul 14 23:31:03.961383 systemd-logind[1240]: New session 13 of user core. Jul 14 23:31:04.051566 sshd[3479]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:04.053262 systemd-logind[1240]: Session 13 logged out. Waiting for processes to exit. Jul 14 23:31:04.053707 systemd[1]: sshd@10-139.178.70.107:22-139.178.89.65:36644.service: Deactivated successfully. Jul 14 23:31:04.054138 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 23:31:04.054609 systemd-logind[1240]: Removed session 13. Jul 14 23:31:09.054725 systemd[1]: Started sshd@11-139.178.70.107:22-139.178.89.65:48378.service. 
Jul 14 23:31:09.087492 sshd[3494]: Accepted publickey for core from 139.178.89.65 port 48378 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:09.088366 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:09.090856 systemd-logind[1240]: New session 14 of user core. Jul 14 23:31:09.091771 systemd[1]: Started session-14.scope. Jul 14 23:31:09.184710 sshd[3494]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:09.186819 systemd[1]: sshd@11-139.178.70.107:22-139.178.89.65:48378.service: Deactivated successfully. Jul 14 23:31:09.187277 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 23:31:09.187735 systemd-logind[1240]: Session 14 logged out. Waiting for processes to exit. Jul 14 23:31:09.188245 systemd-logind[1240]: Removed session 14. Jul 14 23:31:14.187751 systemd[1]: Started sshd@12-139.178.70.107:22-139.178.89.65:48384.service. Jul 14 23:31:14.220371 sshd[3506]: Accepted publickey for core from 139.178.89.65 port 48384 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:14.221434 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:14.224478 systemd[1]: Started session-15.scope. Jul 14 23:31:14.224879 systemd-logind[1240]: New session 15 of user core. Jul 14 23:31:14.309140 sshd[3506]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:14.311898 systemd[1]: Started sshd@13-139.178.70.107:22-139.178.89.65:48388.service. Jul 14 23:31:14.314915 systemd-logind[1240]: Session 15 logged out. Waiting for processes to exit. Jul 14 23:31:14.315212 systemd[1]: sshd@12-139.178.70.107:22-139.178.89.65:48384.service: Deactivated successfully. Jul 14 23:31:14.315624 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 23:31:14.316443 systemd-logind[1240]: Removed session 15. 
Jul 14 23:31:14.345773 sshd[3517]: Accepted publickey for core from 139.178.89.65 port 48388 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:14.346814 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:14.349806 systemd[1]: Started session-16.scope. Jul 14 23:31:14.350305 systemd-logind[1240]: New session 16 of user core. Jul 14 23:31:15.351968 sshd[3517]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:15.353494 systemd[1]: Started sshd@14-139.178.70.107:22-139.178.89.65:48390.service. Jul 14 23:31:15.357758 systemd[1]: sshd@13-139.178.70.107:22-139.178.89.65:48388.service: Deactivated successfully. Jul 14 23:31:15.358228 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 23:31:15.358985 systemd-logind[1240]: Session 16 logged out. Waiting for processes to exit. Jul 14 23:31:15.359443 systemd-logind[1240]: Removed session 16. Jul 14 23:31:15.394477 sshd[3527]: Accepted publickey for core from 139.178.89.65 port 48390 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:15.395522 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:15.398641 systemd[1]: Started session-17.scope. Jul 14 23:31:15.399071 systemd-logind[1240]: New session 17 of user core. Jul 14 23:31:16.235000 sshd[3527]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:16.237200 systemd[1]: Started sshd@15-139.178.70.107:22-139.178.89.65:48404.service. Jul 14 23:31:16.246944 systemd[1]: sshd@14-139.178.70.107:22-139.178.89.65:48390.service: Deactivated successfully. Jul 14 23:31:16.247543 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 23:31:16.249791 systemd-logind[1240]: Session 17 logged out. Waiting for processes to exit. Jul 14 23:31:16.251803 systemd-logind[1240]: Removed session 17. 
Jul 14 23:31:16.286462 sshd[3542]: Accepted publickey for core from 139.178.89.65 port 48404 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:16.287388 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:16.290921 systemd[1]: Started session-18.scope. Jul 14 23:31:16.291374 systemd-logind[1240]: New session 18 of user core. Jul 14 23:31:16.510813 sshd[3542]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:16.512970 systemd[1]: Started sshd@16-139.178.70.107:22-139.178.89.65:48406.service. Jul 14 23:31:16.518114 systemd[1]: sshd@15-139.178.70.107:22-139.178.89.65:48404.service: Deactivated successfully. Jul 14 23:31:16.518665 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 23:31:16.519137 systemd-logind[1240]: Session 18 logged out. Waiting for processes to exit. Jul 14 23:31:16.519615 systemd-logind[1240]: Removed session 18. Jul 14 23:31:16.550842 sshd[3553]: Accepted publickey for core from 139.178.89.65 port 48406 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:16.551701 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:16.554791 systemd[1]: Started session-19.scope. Jul 14 23:31:16.555033 systemd-logind[1240]: New session 19 of user core. Jul 14 23:31:16.682058 sshd[3553]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:16.683844 systemd[1]: sshd@16-139.178.70.107:22-139.178.89.65:48406.service: Deactivated successfully. Jul 14 23:31:16.684282 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 23:31:16.684525 systemd-logind[1240]: Session 19 logged out. Waiting for processes to exit. Jul 14 23:31:16.685066 systemd-logind[1240]: Removed session 19. Jul 14 23:31:21.686527 systemd[1]: Started sshd@17-139.178.70.107:22-139.178.89.65:51860.service. 
Jul 14 23:31:21.720589 sshd[3568]: Accepted publickey for core from 139.178.89.65 port 51860 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:21.721961 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:21.726083 systemd-logind[1240]: New session 20 of user core. Jul 14 23:31:21.726095 systemd[1]: Started session-20.scope. Jul 14 23:31:21.817330 sshd[3568]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:21.818890 systemd-logind[1240]: Session 20 logged out. Waiting for processes to exit. Jul 14 23:31:21.818993 systemd[1]: sshd@17-139.178.70.107:22-139.178.89.65:51860.service: Deactivated successfully. Jul 14 23:31:21.819425 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 23:31:21.820024 systemd-logind[1240]: Removed session 20. Jul 14 23:31:26.821680 systemd[1]: Started sshd@18-139.178.70.107:22-139.178.89.65:51866.service. Jul 14 23:31:26.855392 sshd[3581]: Accepted publickey for core from 139.178.89.65 port 51866 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:26.856480 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:26.859938 systemd[1]: Started session-21.scope. Jul 14 23:31:26.860861 systemd-logind[1240]: New session 21 of user core. Jul 14 23:31:26.955443 sshd[3581]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:26.957210 systemd[1]: sshd@18-139.178.70.107:22-139.178.89.65:51866.service: Deactivated successfully. Jul 14 23:31:26.957668 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 23:31:26.958239 systemd-logind[1240]: Session 21 logged out. Waiting for processes to exit. Jul 14 23:31:26.958734 systemd-logind[1240]: Removed session 21. Jul 14 23:31:31.960004 systemd[1]: Started sshd@19-139.178.70.107:22-139.178.89.65:37560.service. 
Jul 14 23:31:31.995504 sshd[3595]: Accepted publickey for core from 139.178.89.65 port 37560 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:31.996942 sshd[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:32.000922 systemd[1]: Started session-22.scope. Jul 14 23:31:32.002096 systemd-logind[1240]: New session 22 of user core. Jul 14 23:31:32.095243 sshd[3595]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:32.097065 systemd[1]: sshd@19-139.178.70.107:22-139.178.89.65:37560.service: Deactivated successfully. Jul 14 23:31:32.097547 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 23:31:32.098200 systemd-logind[1240]: Session 22 logged out. Waiting for processes to exit. Jul 14 23:31:32.098773 systemd-logind[1240]: Removed session 22. Jul 14 23:31:37.098844 systemd[1]: Started sshd@20-139.178.70.107:22-139.178.89.65:37568.service. Jul 14 23:31:37.131137 sshd[3609]: Accepted publickey for core from 139.178.89.65 port 37568 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:37.132047 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:37.135171 systemd[1]: Started session-23.scope. Jul 14 23:31:37.135964 systemd-logind[1240]: New session 23 of user core. Jul 14 23:31:37.219767 sshd[3609]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:37.222357 systemd[1]: Started sshd@21-139.178.70.107:22-139.178.89.65:37578.service. Jul 14 23:31:37.226020 systemd-logind[1240]: Session 23 logged out. Waiting for processes to exit. Jul 14 23:31:37.226889 systemd[1]: sshd@20-139.178.70.107:22-139.178.89.65:37568.service: Deactivated successfully. Jul 14 23:31:37.227308 systemd[1]: session-23.scope: Deactivated successfully. Jul 14 23:31:37.228282 systemd-logind[1240]: Removed session 23. 
Jul 14 23:31:37.255205 sshd[3619]: Accepted publickey for core from 139.178.89.65 port 37578 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:37.256416 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:37.260101 systemd[1]: Started session-24.scope. Jul 14 23:31:37.260348 systemd-logind[1240]: New session 24 of user core. Jul 14 23:31:38.837430 kubelet[2067]: I0714 23:31:38.837384 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lq764" podStartSLOduration=125.837363431 podStartE2EDuration="2m5.837363431s" podCreationTimestamp="2025-07-14 23:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:30:22.685593872 +0000 UTC m=+54.260419231" watchObservedRunningTime="2025-07-14 23:31:38.837363431 +0000 UTC m=+130.412188790" Jul 14 23:31:38.853274 env[1270]: time="2025-07-14T23:31:38.853236693Z" level=info msg="StopContainer for \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\" with timeout 30 (s)" Jul 14 23:31:38.853606 env[1270]: time="2025-07-14T23:31:38.853577822Z" level=info msg="Stop container \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\" with signal terminated" Jul 14 23:31:38.885154 systemd[1]: cri-containerd-f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b.scope: Deactivated successfully. 
Jul 14 23:31:38.890341 env[1270]: time="2025-07-14T23:31:38.889934667Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 23:31:38.896223 env[1270]: time="2025-07-14T23:31:38.896195839Z" level=info msg="StopContainer for \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\" with timeout 2 (s)" Jul 14 23:31:38.896430 env[1270]: time="2025-07-14T23:31:38.896396361Z" level=info msg="Stop container \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\" with signal terminated" Jul 14 23:31:38.902655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b-rootfs.mount: Deactivated successfully. Jul 14 23:31:38.904751 env[1270]: time="2025-07-14T23:31:38.904721110Z" level=info msg="shim disconnected" id=f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b Jul 14 23:31:38.904751 env[1270]: time="2025-07-14T23:31:38.904748112Z" level=warning msg="cleaning up after shim disconnected" id=f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b namespace=k8s.io Jul 14 23:31:38.904751 env[1270]: time="2025-07-14T23:31:38.904754409Z" level=info msg="cleaning up dead shim" Jul 14 23:31:38.908442 systemd-networkd[1063]: lxc_health: Link DOWN Jul 14 23:31:38.908446 systemd-networkd[1063]: lxc_health: Lost carrier Jul 14 23:31:38.914483 env[1270]: time="2025-07-14T23:31:38.913030249Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3666 runtime=io.containerd.runc.v2\n" Jul 14 23:31:38.918727 env[1270]: time="2025-07-14T23:31:38.918694545Z" level=info msg="StopContainer for \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\" returns successfully"
Jul 14 23:31:38.921258 env[1270]: time="2025-07-14T23:31:38.921192678Z" level=info msg="StopPodSandbox for \"848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489\"" Jul 14 23:31:38.921258 env[1270]: time="2025-07-14T23:31:38.921246779Z" level=info msg="Container to stop \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 23:31:38.923780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489-shm.mount: Deactivated successfully. Jul 14 23:31:38.930917 systemd[1]: cri-containerd-848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489.scope: Deactivated successfully. Jul 14 23:31:38.932419 systemd[1]: cri-containerd-8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d.scope: Deactivated successfully. Jul 14 23:31:38.932598 systemd[1]: cri-containerd-8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d.scope: Consumed 4.553s CPU time. Jul 14 23:31:38.947981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d-rootfs.mount: Deactivated successfully.
Jul 14 23:31:38.953529 env[1270]: time="2025-07-14T23:31:38.953474857Z" level=info msg="shim disconnected" id=8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d
Jul 14 23:31:38.953529 env[1270]: time="2025-07-14T23:31:38.953529471Z" level=warning msg="cleaning up after shim disconnected" id=8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d namespace=k8s.io
Jul 14 23:31:38.953529 env[1270]: time="2025-07-14T23:31:38.953536717Z" level=info msg="cleaning up dead shim"
Jul 14 23:31:38.960554 env[1270]: time="2025-07-14T23:31:38.960516079Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3716 runtime=io.containerd.runc.v2\n"
Jul 14 23:31:38.961248 env[1270]: time="2025-07-14T23:31:38.961228109Z" level=info msg="StopContainer for \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\" returns successfully"
Jul 14 23:31:38.963238 env[1270]: time="2025-07-14T23:31:38.963218879Z" level=info msg="StopPodSandbox for \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\""
Jul 14 23:31:38.963601 env[1270]: time="2025-07-14T23:31:38.963579672Z" level=info msg="Container to stop \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:31:38.963708 env[1270]: time="2025-07-14T23:31:38.963663536Z" level=info msg="Container to stop \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:31:38.963772 env[1270]: time="2025-07-14T23:31:38.963760812Z" level=info msg="Container to stop \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:31:38.963955 env[1270]: time="2025-07-14T23:31:38.963942717Z" level=info msg="Container to stop \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:31:38.964085 env[1270]: time="2025-07-14T23:31:38.964074952Z" level=info msg="Container to stop \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 14 23:31:38.965316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036-shm.mount: Deactivated successfully.
Jul 14 23:31:38.972138 env[1270]: time="2025-07-14T23:31:38.972106646Z" level=info msg="shim disconnected" id=848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489
Jul 14 23:31:38.972575 env[1270]: time="2025-07-14T23:31:38.972563362Z" level=warning msg="cleaning up after shim disconnected" id=848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489 namespace=k8s.io
Jul 14 23:31:38.972639 env[1270]: time="2025-07-14T23:31:38.972629444Z" level=info msg="cleaning up dead shim"
Jul 14 23:31:38.976121 systemd[1]: cri-containerd-09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036.scope: Deactivated successfully.
Jul 14 23:31:38.977413 env[1270]: time="2025-07-14T23:31:38.977399418Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3736 runtime=io.containerd.runc.v2\n"
Jul 14 23:31:38.978476 env[1270]: time="2025-07-14T23:31:38.978461427Z" level=info msg="TearDown network for sandbox \"848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489\" successfully"
Jul 14 23:31:38.978558 env[1270]: time="2025-07-14T23:31:38.978546463Z" level=info msg="StopPodSandbox for \"848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489\" returns successfully"
Jul 14 23:31:38.995119 env[1270]: time="2025-07-14T23:31:38.995081716Z" level=info msg="shim disconnected" id=09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036
Jul 14 23:31:38.995119 env[1270]: time="2025-07-14T23:31:38.995113844Z" level=warning msg="cleaning up after shim disconnected" id=09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036 namespace=k8s.io
Jul 14 23:31:38.995119 env[1270]: time="2025-07-14T23:31:38.995119890Z" level=info msg="cleaning up dead shim"
Jul 14 23:31:39.004914 env[1270]: time="2025-07-14T23:31:39.004892499Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3761 runtime=io.containerd.runc.v2\n"
Jul 14 23:31:39.005390 env[1270]: time="2025-07-14T23:31:39.005375360Z" level=info msg="TearDown network for sandbox \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" successfully"
Jul 14 23:31:39.005449 env[1270]: time="2025-07-14T23:31:39.005437197Z" level=info msg="StopPodSandbox for \"09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036\" returns successfully"
Jul 14 23:31:39.092612 kubelet[2067]: I0714 23:31:39.092510 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-kernel\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.092612 kubelet[2067]: I0714 23:31:39.092558 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-run\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.092612 kubelet[2067]: I0714 23:31:39.092572 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-hostproc\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.092612 kubelet[2067]: I0714 23:31:39.092587 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-net\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.093871 kubelet[2067]: I0714 23:31:39.093540 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-lib-modules\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.093871 kubelet[2067]: I0714 23:31:39.093571 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-hubble-tls\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.093871 kubelet[2067]: I0714 23:31:39.093588 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-bpf-maps\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.093871 kubelet[2067]: I0714 23:31:39.093610 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-config-path\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.093871 kubelet[2067]: I0714 23:31:39.093623 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-cgroup\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.093871 kubelet[2067]: I0714 23:31:39.093637 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cebddc3f-6131-426b-8af7-64a883b43f0a-cilium-config-path\") pod \"cebddc3f-6131-426b-8af7-64a883b43f0a\" (UID: \"cebddc3f-6131-426b-8af7-64a883b43f0a\") "
Jul 14 23:31:39.094082 kubelet[2067]: I0714 23:31:39.093651 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmlct\" (UniqueName: \"kubernetes.io/projected/cebddc3f-6131-426b-8af7-64a883b43f0a-kube-api-access-gmlct\") pod \"cebddc3f-6131-426b-8af7-64a883b43f0a\" (UID: \"cebddc3f-6131-426b-8af7-64a883b43f0a\") "
Jul 14 23:31:39.094082 kubelet[2067]: I0714 23:31:39.093664 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-etc-cni-netd\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.094082 kubelet[2067]: I0714 23:31:39.093675 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cni-path\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.094082 kubelet[2067]: I0714 23:31:39.093689 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-xtables-lock\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.094082 kubelet[2067]: I0714 23:31:39.093702 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20edd75d-d4cb-42f4-8f69-b13b554a1959-clustermesh-secrets\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.094082 kubelet[2067]: I0714 23:31:39.093726 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvsl\" (UniqueName: \"kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-kube-api-access-plvsl\") pod \"20edd75d-d4cb-42f4-8f69-b13b554a1959\" (UID: \"20edd75d-d4cb-42f4-8f69-b13b554a1959\") "
Jul 14 23:31:39.095099 kubelet[2067]: I0714 23:31:39.094718 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-hostproc" (OuterVolumeSpecName: "hostproc") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.095149 kubelet[2067]: I0714 23:31:39.095108 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.095149 kubelet[2067]: I0714 23:31:39.095127 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.095322 kubelet[2067]: I0714 23:31:39.095297 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.097965 kubelet[2067]: I0714 23:31:39.093445 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.105948 kubelet[2067]: I0714 23:31:39.105646 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.106069 kubelet[2067]: I0714 23:31:39.106045 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.106120 kubelet[2067]: I0714 23:31:39.106083 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cni-path" (OuterVolumeSpecName: "cni-path") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.106378 kubelet[2067]: I0714 23:31:39.106352 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.106445 kubelet[2067]: I0714 23:31:39.106316 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 14 23:31:39.109100 kubelet[2067]: I0714 23:31:39.108978 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebddc3f-6131-426b-8af7-64a883b43f0a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cebddc3f-6131-426b-8af7-64a883b43f0a" (UID: "cebddc3f-6131-426b-8af7-64a883b43f0a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 14 23:31:39.111297 kubelet[2067]: I0714 23:31:39.111272 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 14 23:31:39.118465 kubelet[2067]: I0714 23:31:39.118421 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 23:31:39.118694 kubelet[2067]: I0714 23:31:39.118675 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-kube-api-access-plvsl" (OuterVolumeSpecName: "kube-api-access-plvsl") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "kube-api-access-plvsl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 23:31:39.119741 kubelet[2067]: I0714 23:31:39.119725 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20edd75d-d4cb-42f4-8f69-b13b554a1959-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20edd75d-d4cb-42f4-8f69-b13b554a1959" (UID: "20edd75d-d4cb-42f4-8f69-b13b554a1959"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 14 23:31:39.120519 kubelet[2067]: I0714 23:31:39.120490 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cebddc3f-6131-426b-8af7-64a883b43f0a-kube-api-access-gmlct" (OuterVolumeSpecName: "kube-api-access-gmlct") pod "cebddc3f-6131-426b-8af7-64a883b43f0a" (UID: "cebddc3f-6131-426b-8af7-64a883b43f0a"). InnerVolumeSpecName "kube-api-access-gmlct". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 23:31:39.194478 kubelet[2067]: I0714 23:31:39.194451 2067 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.194636 kubelet[2067]: I0714 23:31:39.194624 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.194721 kubelet[2067]: I0714 23:31:39.194711 2067 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.194794 kubelet[2067]: I0714 23:31:39.194782 2067 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.194894 kubelet[2067]: I0714 23:31:39.194884 2067 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.194983 kubelet[2067]: I0714 23:31:39.194974 2067 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195288 kubelet[2067]: I0714 23:31:39.195177 2067 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195288 kubelet[2067]: I0714 23:31:39.195187 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195288 kubelet[2067]: I0714 23:31:39.195193 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195288 kubelet[2067]: I0714 23:31:39.195200 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cebddc3f-6131-426b-8af7-64a883b43f0a-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195288 kubelet[2067]: I0714 23:31:39.195209 2067 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmlct\" (UniqueName: \"kubernetes.io/projected/cebddc3f-6131-426b-8af7-64a883b43f0a-kube-api-access-gmlct\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195459 kubelet[2067]: I0714 23:31:39.195225 2067 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195525 kubelet[2067]: I0714 23:31:39.195509 2067 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195588 kubelet[2067]: I0714 23:31:39.195579 2067 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20edd75d-d4cb-42f4-8f69-b13b554a1959-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195653 kubelet[2067]: I0714 23:31:39.195637 2067 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20edd75d-d4cb-42f4-8f69-b13b554a1959-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.195717 kubelet[2067]: I0714 23:31:39.195708 2067 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plvsl\" (UniqueName: \"kubernetes.io/projected/20edd75d-d4cb-42f4-8f69-b13b554a1959-kube-api-access-plvsl\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:39.783648 systemd[1]: Removed slice kubepods-burstable-pod20edd75d_d4cb_42f4_8f69_b13b554a1959.slice.
Jul 14 23:31:39.783702 systemd[1]: kubepods-burstable-pod20edd75d_d4cb_42f4_8f69_b13b554a1959.slice: Consumed 4.626s CPU time.
Jul 14 23:31:39.786304 kubelet[2067]: I0714 23:31:39.786280 2067 scope.go:117] "RemoveContainer" containerID="8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d"
Jul 14 23:31:39.798413 env[1270]: time="2025-07-14T23:31:39.798133909Z" level=info msg="RemoveContainer for \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\""
Jul 14 23:31:39.800407 env[1270]: time="2025-07-14T23:31:39.800347674Z" level=info msg="RemoveContainer for \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\" returns successfully"
Jul 14 23:31:39.800943 kubelet[2067]: I0714 23:31:39.800932 2067 scope.go:117] "RemoveContainer" containerID="4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536"
Jul 14 23:31:39.801387 systemd[1]: Removed slice kubepods-besteffort-podcebddc3f_6131_426b_8af7_64a883b43f0a.slice.
Jul 14 23:31:39.802530 env[1270]: time="2025-07-14T23:31:39.801548639Z" level=info msg="RemoveContainer for \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\""
Jul 14 23:31:39.804707 env[1270]: time="2025-07-14T23:31:39.804657249Z" level=info msg="RemoveContainer for \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\" returns successfully"
Jul 14 23:31:39.805999 kubelet[2067]: I0714 23:31:39.805654 2067 scope.go:117] "RemoveContainer" containerID="b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68"
Jul 14 23:31:39.807861 env[1270]: time="2025-07-14T23:31:39.807653611Z" level=info msg="RemoveContainer for \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\""
Jul 14 23:31:39.809559 env[1270]: time="2025-07-14T23:31:39.809376223Z" level=info msg="RemoveContainer for \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\" returns successfully"
Jul 14 23:31:39.809733 kubelet[2067]: I0714 23:31:39.809719 2067 scope.go:117] "RemoveContainer" containerID="34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af"
Jul 14 23:31:39.810644 env[1270]: time="2025-07-14T23:31:39.810381973Z" level=info msg="RemoveContainer for \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\""
Jul 14 23:31:39.812038 env[1270]: time="2025-07-14T23:31:39.812021696Z" level=info msg="RemoveContainer for \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\" returns successfully"
Jul 14 23:31:39.812817 kubelet[2067]: I0714 23:31:39.812801 2067 scope.go:117] "RemoveContainer" containerID="d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f"
Jul 14 23:31:39.814221 env[1270]: time="2025-07-14T23:31:39.814196432Z" level=info msg="RemoveContainer for \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\""
Jul 14 23:31:39.815484 env[1270]: time="2025-07-14T23:31:39.815470253Z" level=info msg="RemoveContainer for \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\" returns successfully"
Jul 14 23:31:39.815629 kubelet[2067]: I0714 23:31:39.815620 2067 scope.go:117] "RemoveContainer" containerID="8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d"
Jul 14 23:31:39.815864 env[1270]: time="2025-07-14T23:31:39.815795935Z" level=error msg="ContainerStatus for \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\": not found"
Jul 14 23:31:39.819631 kubelet[2067]: E0714 23:31:39.819602 2067 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\": not found" containerID="8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d"
Jul 14 23:31:39.821033 kubelet[2067]: I0714 23:31:39.820973 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d"} err="failed to get container status \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8223aee4d40453f492c4020af4bc41f19a320b801eea425a190eee6bd65e367d\": not found"
Jul 14 23:31:39.821101 kubelet[2067]: I0714 23:31:39.821091 2067 scope.go:117] "RemoveContainer" containerID="4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536"
Jul 14 23:31:39.821371 env[1270]: time="2025-07-14T23:31:39.821297726Z" level=error msg="ContainerStatus for \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\": not found"
Jul 14 23:31:39.821470 kubelet[2067]: E0714 23:31:39.821458 2067 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\": not found" containerID="4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536"
Jul 14 23:31:39.821533 kubelet[2067]: I0714 23:31:39.821519 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536"} err="failed to get container status \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f3e28e29966e85a60e0f730e6c434945268a3b2574b22d7f3f8b125c7e27536\": not found"
Jul 14 23:31:39.821580 kubelet[2067]: I0714 23:31:39.821572 2067 scope.go:117] "RemoveContainer" containerID="b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68"
Jul 14 23:31:39.821776 env[1270]: time="2025-07-14T23:31:39.821735076Z" level=error msg="ContainerStatus for \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\": not found"
Jul 14 23:31:39.821875 kubelet[2067]: E0714 23:31:39.821865 2067 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\": not found" containerID="b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68"
Jul 14 23:31:39.821933 kubelet[2067]: I0714 23:31:39.821922 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68"} err="failed to get container status \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\": rpc error: code = NotFound desc = an error occurred when try to find container \"b226cf9bbc865441170edd8ae45fbba1d5e688373a11b76eeefac50bc162ba68\": not found"
Jul 14 23:31:39.821984 kubelet[2067]: I0714 23:31:39.821974 2067 scope.go:117] "RemoveContainer" containerID="34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af"
Jul 14 23:31:39.822153 env[1270]: time="2025-07-14T23:31:39.822104482Z" level=error msg="ContainerStatus for \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\": not found"
Jul 14 23:31:39.822237 kubelet[2067]: E0714 23:31:39.822221 2067 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\": not found" containerID="34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af"
Jul 14 23:31:39.822296 kubelet[2067]: I0714 23:31:39.822278 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af"} err="failed to get container status \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\": rpc error: code = NotFound desc = an error occurred when try to find container \"34041ef5306c10e219be059765a4a520f1e33336dd495f0f86901378e2c955af\": not found"
Jul 14 23:31:39.822350 kubelet[2067]: I0714 23:31:39.822341 2067 scope.go:117] "RemoveContainer" containerID="d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f"
Jul 14 23:31:39.822515 env[1270]: time="2025-07-14T23:31:39.822471635Z" level=error msg="ContainerStatus for \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\": not found"
Jul 14 23:31:39.822591 kubelet[2067]: E0714 23:31:39.822582 2067 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\": not found" containerID="d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f"
Jul 14 23:31:39.822661 kubelet[2067]: I0714 23:31:39.822649 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f"} err="failed to get container status \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2c2bb689fb74bcbf1e27484899b1e229bc16d811d4b34d269c011cbaced7a1f\": not found"
Jul 14 23:31:39.822707 kubelet[2067]: I0714 23:31:39.822699 2067 scope.go:117] "RemoveContainer" containerID="f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b"
Jul 14 23:31:39.823483 env[1270]: time="2025-07-14T23:31:39.823445533Z" level=info msg="RemoveContainer for \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\""
Jul 14 23:31:39.824759 env[1270]: time="2025-07-14T23:31:39.824692477Z" level=info msg="RemoveContainer for \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\" returns successfully"
Jul 14 23:31:39.824800 kubelet[2067]: I0714 23:31:39.824775 2067 scope.go:117] "RemoveContainer" containerID="f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b"
Jul 14 23:31:39.824955 env[1270]: time="2025-07-14T23:31:39.824903196Z" level=error msg="ContainerStatus for \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\": not found"
Jul 14 23:31:39.824993 kubelet[2067]: E0714 23:31:39.824972 2067 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\": not found" containerID="f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b"
Jul 14 23:31:39.824993 kubelet[2067]: I0714 23:31:39.824984 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b"} err="failed to get container status \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f96f3ac4f0ffdd1f4540a08456f03e0ed77e34bf268a631d71e965c45a52733b\": not found"
Jul 14 23:31:39.863160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09a5c71fc1a6d568c3918b06a912f9a4d39a56623f6585fb0962cacb5c769036-rootfs.mount: Deactivated successfully.
Jul 14 23:31:39.863236 systemd[1]: var-lib-kubelet-pods-20edd75d\x2dd4cb\x2d42f4\x2d8f69\x2db13b554a1959-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dplvsl.mount: Deactivated successfully.
Jul 14 23:31:39.863277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-848a87257382c565f8227bd6a57d716ae00223a5eeb685536b9be8ec8a8c0489-rootfs.mount: Deactivated successfully.
Jul 14 23:31:39.863313 systemd[1]: var-lib-kubelet-pods-cebddc3f\x2d6131\x2d426b\x2d8af7\x2d64a883b43f0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmlct.mount: Deactivated successfully.
Jul 14 23:31:39.863373 systemd[1]: var-lib-kubelet-pods-20edd75d\x2dd4cb\x2d42f4\x2d8f69\x2db13b554a1959-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 23:31:39.863432 systemd[1]: var-lib-kubelet-pods-20edd75d\x2dd4cb\x2d42f4\x2d8f69\x2db13b554a1959-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 23:31:40.521846 kubelet[2067]: I0714 23:31:40.521804 2067 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20edd75d-d4cb-42f4-8f69-b13b554a1959" path="/var/lib/kubelet/pods/20edd75d-d4cb-42f4-8f69-b13b554a1959/volumes" Jul 14 23:31:40.522899 kubelet[2067]: I0714 23:31:40.522884 2067 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cebddc3f-6131-426b-8af7-64a883b43f0a" path="/var/lib/kubelet/pods/cebddc3f-6131-426b-8af7-64a883b43f0a/volumes" Jul 14 23:31:40.823910 systemd[1]: Started sshd@22-139.178.70.107:22-139.178.89.65:56270.service. Jul 14 23:31:40.824402 sshd[3619]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:40.827048 systemd[1]: sshd@21-139.178.70.107:22-139.178.89.65:37578.service: Deactivated successfully. Jul 14 23:31:40.827716 systemd[1]: session-24.scope: Deactivated successfully. Jul 14 23:31:40.828335 systemd-logind[1240]: Session 24 logged out. Waiting for processes to exit. Jul 14 23:31:40.829092 systemd-logind[1240]: Removed session 24. Jul 14 23:31:40.867965 sshd[3781]: Accepted publickey for core from 139.178.89.65 port 56270 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:40.869323 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:40.872823 systemd-logind[1240]: New session 25 of user core. Jul 14 23:31:40.873485 systemd[1]: Started session-25.scope. 
Jul 14 23:31:41.187099 sshd[3781]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:41.187720 systemd[1]: Started sshd@23-139.178.70.107:22-139.178.89.65:56276.service. Jul 14 23:31:41.189247 systemd[1]: sshd@22-139.178.70.107:22-139.178.89.65:56270.service: Deactivated successfully. Jul 14 23:31:41.189719 systemd[1]: session-25.scope: Deactivated successfully. Jul 14 23:31:41.190150 systemd-logind[1240]: Session 25 logged out. Waiting for processes to exit. Jul 14 23:31:41.190730 systemd-logind[1240]: Removed session 25. Jul 14 23:31:41.225287 kubelet[2067]: I0714 23:31:41.225258 2067 memory_manager.go:355] "RemoveStaleState removing state" podUID="cebddc3f-6131-426b-8af7-64a883b43f0a" containerName="cilium-operator" Jul 14 23:31:41.225287 kubelet[2067]: I0714 23:31:41.225289 2067 memory_manager.go:355] "RemoveStaleState removing state" podUID="20edd75d-d4cb-42f4-8f69-b13b554a1959" containerName="cilium-agent" Jul 14 23:31:41.228707 sshd[3791]: Accepted publickey for core from 139.178.89.65 port 56276 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:41.230081 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:41.231531 systemd[1]: Created slice kubepods-burstable-pod45c62d80_7e52_4609_b610_ff5b9a2a6d10.slice. Jul 14 23:31:41.235797 systemd-logind[1240]: New session 26 of user core. Jul 14 23:31:41.236329 systemd[1]: Started session-26.scope. 
Jul 14 23:31:41.311153 kubelet[2067]: I0714 23:31:41.311125 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-bpf-maps\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311153 kubelet[2067]: I0714 23:31:41.311153 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-etc-cni-netd\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311276 kubelet[2067]: I0714 23:31:41.311172 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hubble-tls\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311276 kubelet[2067]: I0714 23:31:41.311183 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-run\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311276 kubelet[2067]: I0714 23:31:41.311194 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-ipsec-secrets\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311276 kubelet[2067]: I0714 23:31:41.311208 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cni-path\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311276 kubelet[2067]: I0714 23:31:41.311222 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-xtables-lock\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311276 kubelet[2067]: I0714 23:31:41.311231 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-clustermesh-secrets\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311397 kubelet[2067]: I0714 23:31:41.311241 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hostproc\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311397 kubelet[2067]: I0714 23:31:41.311253 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-net\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311397 kubelet[2067]: I0714 23:31:41.311266 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkbds\" (UniqueName: 
\"kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-kube-api-access-bkbds\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311397 kubelet[2067]: I0714 23:31:41.311279 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-config-path\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311397 kubelet[2067]: I0714 23:31:41.311296 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-cgroup\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311397 kubelet[2067]: I0714 23:31:41.311306 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-lib-modules\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.311517 kubelet[2067]: I0714 23:31:41.311315 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-kernel\") pod \"cilium-zb5g8\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " pod="kube-system/cilium-zb5g8" Jul 14 23:31:41.375014 sshd[3791]: pam_unix(sshd:session): session closed for user core Jul 14 23:31:41.376698 systemd[1]: Started sshd@24-139.178.70.107:22-139.178.89.65:56288.service. 
Jul 14 23:31:41.379436 systemd[1]: sshd@23-139.178.70.107:22-139.178.89.65:56276.service: Deactivated successfully. Jul 14 23:31:41.379857 systemd[1]: session-26.scope: Deactivated successfully. Jul 14 23:31:41.381012 systemd-logind[1240]: Session 26 logged out. Waiting for processes to exit. Jul 14 23:31:41.382098 systemd-logind[1240]: Removed session 26. Jul 14 23:31:41.385388 kubelet[2067]: E0714 23:31:41.385360 2067 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-bkbds lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-zb5g8" podUID="45c62d80-7e52-4609-b610-ff5b9a2a6d10" Jul 14 23:31:41.412276 sshd[3802]: Accepted publickey for core from 139.178.89.65 port 56288 ssh2: RSA SHA256:XtFLP+nsyPN7YR75cpt5lclh1ThW2mP4NmG7F3yw0l4 Jul 14 23:31:41.420410 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 23:31:41.437252 systemd-logind[1240]: New session 27 of user core. Jul 14 23:31:41.438651 systemd[1]: Started session-27.scope. 
Jul 14 23:31:41.914925 kubelet[2067]: I0714 23:31:41.914901 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hostproc\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.915248 kubelet[2067]: I0714 23:31:41.915234 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-clustermesh-secrets\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.915367 kubelet[2067]: I0714 23:31:41.915350 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-bpf-maps\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.915457 kubelet[2067]: I0714 23:31:41.915445 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-run\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.915538 kubelet[2067]: I0714 23:31:41.915526 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cni-path\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.915628 kubelet[2067]: I0714 23:31:41.915616 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-xtables-lock\") pod 
\"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916277 kubelet[2067]: I0714 23:31:41.916265 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-kernel\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916376 kubelet[2067]: I0714 23:31:41.916365 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hubble-tls\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916467 kubelet[2067]: I0714 23:31:41.916455 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-net\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916562 kubelet[2067]: I0714 23:31:41.916550 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-ipsec-secrets\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916642 kubelet[2067]: I0714 23:31:41.916631 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-etc-cni-netd\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916722 kubelet[2067]: I0714 23:31:41.916711 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-bkbds\" (UniqueName: \"kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-kube-api-access-bkbds\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916804 kubelet[2067]: I0714 23:31:41.916784 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-config-path\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.916894 kubelet[2067]: I0714 23:31:41.916883 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-cgroup\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.917012 kubelet[2067]: I0714 23:31:41.917000 2067 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-lib-modules\") pod \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\" (UID: \"45c62d80-7e52-4609-b610-ff5b9a2a6d10\") " Jul 14 23:31:41.917114 kubelet[2067]: I0714 23:31:41.914879 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hostproc" (OuterVolumeSpecName: "hostproc") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.917186 kubelet[2067]: I0714 23:31:41.915716 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.917255 kubelet[2067]: I0714 23:31:41.917101 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.917333 kubelet[2067]: I0714 23:31:41.917320 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.922225 kubelet[2067]: I0714 23:31:41.920332 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.922225 kubelet[2067]: I0714 23:31:41.920359 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.922225 kubelet[2067]: I0714 23:31:41.920373 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cni-path" (OuterVolumeSpecName: "cni-path") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.922225 kubelet[2067]: I0714 23:31:41.920386 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.922225 kubelet[2067]: I0714 23:31:41.920400 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.920756 systemd[1]: var-lib-kubelet-pods-45c62d80\x2d7e52\x2d4609\x2db610\x2dff5b9a2a6d10-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 14 23:31:41.922490 kubelet[2067]: I0714 23:31:41.921918 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 23:31:41.922920 kubelet[2067]: I0714 23:31:41.922868 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 23:31:41.923247 kubelet[2067]: I0714 23:31:41.923234 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 23:31:41.923326 kubelet[2067]: I0714 23:31:41.923316 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 23:31:41.924488 kubelet[2067]: I0714 23:31:41.924470 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-kube-api-access-bkbds" (OuterVolumeSpecName: "kube-api-access-bkbds") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "kube-api-access-bkbds". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 23:31:41.925078 kubelet[2067]: I0714 23:31:41.925066 2067 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "45c62d80-7e52-4609-b610-ff5b9a2a6d10" (UID: "45c62d80-7e52-4609-b610-ff5b9a2a6d10"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 23:31:42.017901 kubelet[2067]: I0714 23:31:42.017867 2067 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018065 kubelet[2067]: I0714 23:31:42.018052 2067 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018199 kubelet[2067]: I0714 23:31:42.018142 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018290 kubelet[2067]: I0714 23:31:42.018279 2067 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cni-path\") on node \"localhost\" 
DevicePath \"\"" Jul 14 23:31:42.018369 kubelet[2067]: I0714 23:31:42.018358 2067 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018442 kubelet[2067]: I0714 23:31:42.018432 2067 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018510 kubelet[2067]: I0714 23:31:42.018500 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018587 kubelet[2067]: I0714 23:31:42.018577 2067 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018660 kubelet[2067]: I0714 23:31:42.018650 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018736 kubelet[2067]: I0714 23:31:42.018725 2067 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018804 kubelet[2067]: I0714 23:31:42.018795 2067 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 14 23:31:42.018889 kubelet[2067]: I0714 23:31:42.018878 2067 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bkbds\" (UniqueName: \"kubernetes.io/projected/45c62d80-7e52-4609-b610-ff5b9a2a6d10-kube-api-access-bkbds\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:42.018972 kubelet[2067]: I0714 23:31:42.018962 2067 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45c62d80-7e52-4609-b610-ff5b9a2a6d10-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:42.019053 kubelet[2067]: I0714 23:31:42.019041 2067 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:42.019121 kubelet[2067]: I0714 23:31:42.019111 2067 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45c62d80-7e52-4609-b610-ff5b9a2a6d10-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 14 23:31:42.418580 systemd[1]: var-lib-kubelet-pods-45c62d80\x2d7e52\x2d4609\x2db610\x2dff5b9a2a6d10-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Jul 14 23:31:42.418655 systemd[1]: var-lib-kubelet-pods-45c62d80\x2d7e52\x2d4609\x2db610\x2dff5b9a2a6d10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbkbds.mount: Deactivated successfully.
Jul 14 23:31:42.418703 systemd[1]: var-lib-kubelet-pods-45c62d80\x2d7e52\x2d4609\x2db610\x2dff5b9a2a6d10-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 14 23:31:42.526750 systemd[1]: Removed slice kubepods-burstable-pod45c62d80_7e52_4609_b610_ff5b9a2a6d10.slice.
Jul 14 23:31:42.880743 systemd[1]: Created slice kubepods-burstable-pode94328c5_8db3_41e9_9c13_9e5162a2c876.slice.
Jul 14 23:31:43.024774 kubelet[2067]: I0714 23:31:43.024747 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e94328c5-8db3-41e9-9c13-9e5162a2c876-cilium-config-path\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025064 kubelet[2067]: I0714 23:31:43.025053 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-lib-modules\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025123 kubelet[2067]: I0714 23:31:43.025113 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-cilium-run\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025179 kubelet[2067]: I0714 23:31:43.025170 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-bpf-maps\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025237 kubelet[2067]: I0714 23:31:43.025228 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-cni-path\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025296 kubelet[2067]: I0714 23:31:43.025287 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-etc-cni-netd\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025353 kubelet[2067]: I0714 23:31:43.025344 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e94328c5-8db3-41e9-9c13-9e5162a2c876-clustermesh-secrets\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025413 kubelet[2067]: I0714 23:31:43.025403 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e94328c5-8db3-41e9-9c13-9e5162a2c876-cilium-ipsec-secrets\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025470 kubelet[2067]: I0714 23:31:43.025460 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-host-proc-sys-kernel\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025525 kubelet[2067]: I0714 23:31:43.025516 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e94328c5-8db3-41e9-9c13-9e5162a2c876-hubble-tls\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025582 kubelet[2067]: I0714 23:31:43.025572 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-host-proc-sys-net\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025642 kubelet[2067]: I0714 23:31:43.025633 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-hostproc\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025698 kubelet[2067]: I0714 23:31:43.025690 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-cilium-cgroup\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025758 kubelet[2067]: I0714 23:31:43.025749 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e94328c5-8db3-41e9-9c13-9e5162a2c876-xtables-lock\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.025814 kubelet[2067]: I0714 23:31:43.025804 2067 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjvrp\" (UniqueName: \"kubernetes.io/projected/e94328c5-8db3-41e9-9c13-9e5162a2c876-kube-api-access-wjvrp\") pod \"cilium-7j2lj\" (UID: \"e94328c5-8db3-41e9-9c13-9e5162a2c876\") " pod="kube-system/cilium-7j2lj"
Jul 14 23:31:43.183604 env[1270]: time="2025-07-14T23:31:43.183529931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j2lj,Uid:e94328c5-8db3-41e9-9c13-9e5162a2c876,Namespace:kube-system,Attempt:0,}"
Jul 14 23:31:43.196691 env[1270]: time="2025-07-14T23:31:43.196483167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 23:31:43.196691 env[1270]: time="2025-07-14T23:31:43.196527287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 23:31:43.196691 env[1270]: time="2025-07-14T23:31:43.196535163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 23:31:43.196899 env[1270]: time="2025-07-14T23:31:43.196750895Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779 pid=3829 runtime=io.containerd.runc.v2
Jul 14 23:31:43.204584 systemd[1]: Started cri-containerd-ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779.scope.
Jul 14 23:31:43.223556 env[1270]: time="2025-07-14T23:31:43.223522389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7j2lj,Uid:e94328c5-8db3-41e9-9c13-9e5162a2c876,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\""
Jul 14 23:31:43.226281 env[1270]: time="2025-07-14T23:31:43.226242982Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 14 23:31:43.231660 env[1270]: time="2025-07-14T23:31:43.231638195Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9f38017e84f1f5b183ebc302bf2ba4e102f4161f8c168a030fcc3b21255298f8\""
Jul 14 23:31:43.232007 env[1270]: time="2025-07-14T23:31:43.231994026Z" level=info msg="StartContainer for \"9f38017e84f1f5b183ebc302bf2ba4e102f4161f8c168a030fcc3b21255298f8\""
Jul 14 23:31:43.242176 systemd[1]: Started cri-containerd-9f38017e84f1f5b183ebc302bf2ba4e102f4161f8c168a030fcc3b21255298f8.scope.
Jul 14 23:31:43.267660 env[1270]: time="2025-07-14T23:31:43.267631569Z" level=info msg="StartContainer for \"9f38017e84f1f5b183ebc302bf2ba4e102f4161f8c168a030fcc3b21255298f8\" returns successfully"
Jul 14 23:31:43.295604 systemd[1]: cri-containerd-9f38017e84f1f5b183ebc302bf2ba4e102f4161f8c168a030fcc3b21255298f8.scope: Deactivated successfully.
Jul 14 23:31:43.313629 env[1270]: time="2025-07-14T23:31:43.313586288Z" level=info msg="shim disconnected" id=9f38017e84f1f5b183ebc302bf2ba4e102f4161f8c168a030fcc3b21255298f8
Jul 14 23:31:43.313629 env[1270]: time="2025-07-14T23:31:43.313628138Z" level=warning msg="cleaning up after shim disconnected" id=9f38017e84f1f5b183ebc302bf2ba4e102f4161f8c168a030fcc3b21255298f8 namespace=k8s.io
Jul 14 23:31:43.313769 env[1270]: time="2025-07-14T23:31:43.313636765Z" level=info msg="cleaning up dead shim"
Jul 14 23:31:43.318283 env[1270]: time="2025-07-14T23:31:43.318261513Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3912 runtime=io.containerd.runc.v2\n"
Jul 14 23:31:43.631042 kubelet[2067]: E0714 23:31:43.631006 2067 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 14 23:31:43.805914 env[1270]: time="2025-07-14T23:31:43.805616116Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 14 23:31:43.875870 env[1270]: time="2025-07-14T23:31:43.875794612Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9\""
Jul 14 23:31:43.876244 env[1270]: time="2025-07-14T23:31:43.876227262Z" level=info msg="StartContainer for \"838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9\""
Jul 14 23:31:43.888224 systemd[1]: Started cri-containerd-838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9.scope.
Jul 14 23:31:43.918118 env[1270]: time="2025-07-14T23:31:43.918091183Z" level=info msg="StartContainer for \"838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9\" returns successfully"
Jul 14 23:31:43.932439 systemd[1]: cri-containerd-838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9.scope: Deactivated successfully.
Jul 14 23:31:43.946473 env[1270]: time="2025-07-14T23:31:43.946444636Z" level=info msg="shim disconnected" id=838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9
Jul 14 23:31:43.946610 env[1270]: time="2025-07-14T23:31:43.946597770Z" level=warning msg="cleaning up after shim disconnected" id=838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9 namespace=k8s.io
Jul 14 23:31:43.946663 env[1270]: time="2025-07-14T23:31:43.946652290Z" level=info msg="cleaning up dead shim"
Jul 14 23:31:43.952046 env[1270]: time="2025-07-14T23:31:43.952011425Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3972 runtime=io.containerd.runc.v2\n"
Jul 14 23:31:44.418446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-838d2239cbc8b3311ac0111636d0990e5ffc69c60990928a3d2357f9a5a6ece9-rootfs.mount: Deactivated successfully.
Jul 14 23:31:44.522399 kubelet[2067]: I0714 23:31:44.522379 2067 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45c62d80-7e52-4609-b610-ff5b9a2a6d10" path="/var/lib/kubelet/pods/45c62d80-7e52-4609-b610-ff5b9a2a6d10/volumes"
Jul 14 23:31:44.809194 env[1270]: time="2025-07-14T23:31:44.809160010Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 14 23:31:44.817025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398807321.mount: Deactivated successfully.
Jul 14 23:31:44.821004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525068234.mount: Deactivated successfully.
Jul 14 23:31:44.823128 env[1270]: time="2025-07-14T23:31:44.823100828Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"760da560dafbd6e599117efe274b7928d1e6eae7798eadf19fb7ad9f964b85c8\""
Jul 14 23:31:44.823551 env[1270]: time="2025-07-14T23:31:44.823536083Z" level=info msg="StartContainer for \"760da560dafbd6e599117efe274b7928d1e6eae7798eadf19fb7ad9f964b85c8\""
Jul 14 23:31:44.837359 systemd[1]: Started cri-containerd-760da560dafbd6e599117efe274b7928d1e6eae7798eadf19fb7ad9f964b85c8.scope.
Jul 14 23:31:44.855941 env[1270]: time="2025-07-14T23:31:44.855910373Z" level=info msg="StartContainer for \"760da560dafbd6e599117efe274b7928d1e6eae7798eadf19fb7ad9f964b85c8\" returns successfully"
Jul 14 23:31:44.864136 systemd[1]: cri-containerd-760da560dafbd6e599117efe274b7928d1e6eae7798eadf19fb7ad9f964b85c8.scope: Deactivated successfully.
Jul 14 23:31:44.879651 env[1270]: time="2025-07-14T23:31:44.879617920Z" level=info msg="shim disconnected" id=760da560dafbd6e599117efe274b7928d1e6eae7798eadf19fb7ad9f964b85c8
Jul 14 23:31:44.879858 env[1270]: time="2025-07-14T23:31:44.879820442Z" level=warning msg="cleaning up after shim disconnected" id=760da560dafbd6e599117efe274b7928d1e6eae7798eadf19fb7ad9f964b85c8 namespace=k8s.io
Jul 14 23:31:44.879940 env[1270]: time="2025-07-14T23:31:44.879930090Z" level=info msg="cleaning up dead shim"
Jul 14 23:31:44.884658 env[1270]: time="2025-07-14T23:31:44.884634204Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4029 runtime=io.containerd.runc.v2\n"
Jul 14 23:31:45.811353 env[1270]: time="2025-07-14T23:31:45.811323295Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 14 23:31:45.818291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3655022007.mount: Deactivated successfully.
Jul 14 23:31:45.822251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547799065.mount: Deactivated successfully.
Jul 14 23:31:45.822717 env[1270]: time="2025-07-14T23:31:45.822695392Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"df01b8ec095eea83214435c46f0c65c59495f25ccc044695ebf58f7788951d1b\""
Jul 14 23:31:45.824059 env[1270]: time="2025-07-14T23:31:45.823307109Z" level=info msg="StartContainer for \"df01b8ec095eea83214435c46f0c65c59495f25ccc044695ebf58f7788951d1b\""
Jul 14 23:31:45.839495 systemd[1]: Started cri-containerd-df01b8ec095eea83214435c46f0c65c59495f25ccc044695ebf58f7788951d1b.scope.
Jul 14 23:31:45.856992 systemd[1]: cri-containerd-df01b8ec095eea83214435c46f0c65c59495f25ccc044695ebf58f7788951d1b.scope: Deactivated successfully.
Jul 14 23:31:45.920576 env[1270]: time="2025-07-14T23:31:45.918700449Z" level=info msg="StartContainer for \"df01b8ec095eea83214435c46f0c65c59495f25ccc044695ebf58f7788951d1b\" returns successfully"
Jul 14 23:31:45.932696 env[1270]: time="2025-07-14T23:31:45.932665226Z" level=info msg="shim disconnected" id=df01b8ec095eea83214435c46f0c65c59495f25ccc044695ebf58f7788951d1b
Jul 14 23:31:45.932856 env[1270]: time="2025-07-14T23:31:45.932844196Z" level=warning msg="cleaning up after shim disconnected" id=df01b8ec095eea83214435c46f0c65c59495f25ccc044695ebf58f7788951d1b namespace=k8s.io
Jul 14 23:31:45.932918 env[1270]: time="2025-07-14T23:31:45.932908821Z" level=info msg="cleaning up dead shim"
Jul 14 23:31:45.938982 env[1270]: time="2025-07-14T23:31:45.938867584Z" level=warning msg="cleanup warnings time=\"2025-07-14T23:31:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4084 runtime=io.containerd.runc.v2\n"
Jul 14 23:31:46.520893 kubelet[2067]: E0714 23:31:46.520861 2067 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-n454f" podUID="54d0d611-0cc9-4dae-9da1-21b43f472d5d"
Jul 14 23:31:46.814614 env[1270]: time="2025-07-14T23:31:46.814548710Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 14 23:31:46.933609 env[1270]: time="2025-07-14T23:31:46.933570476Z" level=info msg="CreateContainer within sandbox \"ec905eba4550dbf345f0dfc12c162767c57cae03de173949fae680554cc3b779\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"037ac3a7ee02ad8cef53436f17cda41e5fdbd473e06d58d77f87e0744a6ae875\""
Jul 14 23:31:46.934074 env[1270]: time="2025-07-14T23:31:46.934056833Z" level=info msg="StartContainer for \"037ac3a7ee02ad8cef53436f17cda41e5fdbd473e06d58d77f87e0744a6ae875\""
Jul 14 23:31:46.947082 systemd[1]: Started cri-containerd-037ac3a7ee02ad8cef53436f17cda41e5fdbd473e06d58d77f87e0744a6ae875.scope.
Jul 14 23:31:47.090177 env[1270]: time="2025-07-14T23:31:47.090105031Z" level=info msg="StartContainer for \"037ac3a7ee02ad8cef53436f17cda41e5fdbd473e06d58d77f87e0744a6ae875\" returns successfully"
Jul 14 23:31:47.635867 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 14 23:31:47.845287 kubelet[2067]: I0714 23:31:47.845250 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7j2lj" podStartSLOduration=5.84523783 podStartE2EDuration="5.84523783s" podCreationTimestamp="2025-07-14 23:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 23:31:47.844683004 +0000 UTC m=+139.419508362" watchObservedRunningTime="2025-07-14 23:31:47.84523783 +0000 UTC m=+139.420063182"
Jul 14 23:31:48.520570 kubelet[2067]: E0714 23:31:48.520544 2067 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-n454f" podUID="54d0d611-0cc9-4dae-9da1-21b43f472d5d"
Jul 14 23:31:50.633361 systemd-networkd[1063]: lxc_health: Link UP
Jul 14 23:31:50.657169 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 14 23:31:50.655401 systemd-networkd[1063]: lxc_health: Gained carrier
Jul 14 23:31:52.477986 systemd-networkd[1063]: lxc_health: Gained IPv6LL
Jul 14 23:31:56.088550 systemd[1]: run-containerd-runc-k8s.io-037ac3a7ee02ad8cef53436f17cda41e5fdbd473e06d58d77f87e0744a6ae875-runc.LqHTzZ.mount: Deactivated successfully.
Jul 14 23:31:56.145213 sshd[3802]: pam_unix(sshd:session): session closed for user core
Jul 14 23:31:56.195632 systemd[1]: sshd@24-139.178.70.107:22-139.178.89.65:56288.service: Deactivated successfully.
Jul 14 23:31:56.196166 systemd[1]: session-27.scope: Deactivated successfully.
Jul 14 23:31:56.196565 systemd-logind[1240]: Session 27 logged out. Waiting for processes to exit.
Jul 14 23:31:56.197207 systemd-logind[1240]: Removed session 27.