Mar 17 18:35:18.658845 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:35:18.658860 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:35:18.658867 kernel: Disabled fast string operations
Mar 17 18:35:18.658871 kernel: BIOS-provided physical RAM map:
Mar 17 18:35:18.658874 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Mar 17 18:35:18.658878 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Mar 17 18:35:18.658884 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Mar 17 18:35:18.658888 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Mar 17 18:35:18.658893 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Mar 17 18:35:18.658897 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Mar 17 18:35:18.658901 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Mar 17 18:35:18.658905 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Mar 17 18:35:18.658909 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Mar 17 18:35:18.658913 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Mar 17 18:35:18.658919 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Mar 17 18:35:18.658934 kernel: NX (Execute Disable) protection: active
Mar 17 18:35:18.658940 kernel: SMBIOS 2.7 present.
Mar 17 18:35:18.658944 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Mar 17 18:35:18.658949 kernel: vmware: hypercall mode: 0x00
Mar 17 18:35:18.658953 kernel: Hypervisor detected: VMware
Mar 17 18:35:18.658959 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Mar 17 18:35:18.658964 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Mar 17 18:35:18.658968 kernel: vmware: using clock offset of 4496067946 ns
Mar 17 18:35:18.658972 kernel: tsc: Detected 3408.000 MHz processor
Mar 17 18:35:18.658977 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:35:18.658982 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:35:18.658987 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Mar 17 18:35:18.658991 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Mar 17 18:35:18.658996 kernel: total RAM covered: 3072M
Mar 17 18:35:18.659001 kernel: Found optimal setting for mtrr clean up
Mar 17 18:35:18.659007 kernel:  gran_size: 64K         chunk_size: 64K         num_reg: 2          lose cover RAM: 0G
Mar 17 18:35:18.659011 kernel: Using GB pages for direct mapping
Mar 17 18:35:18.659016 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:35:18.659020 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Mar 17 18:35:18.659025 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL  440BX    06040000 VMW  01324272)
Mar 17 18:35:18.659029 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL  440BX    06040000 PTL  000F4240)
Mar 17 18:35:18.659034 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD  Custom   06040000 MSFT 03000001)
Mar 17 18:35:18.659038 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Mar 17 18:35:18.659043 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Mar 17 18:35:18.659049 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD  $SBFTBL$ 06040000  LTP 00000001)
Mar 17 18:35:18.659055 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD  ? APIC   06040000  LTP 00000000)
Mar 17 18:35:18.659060 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD  $PCITBL$ 06040000  LTP 00000001)
Mar 17 18:35:18.659065 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG  06040000 VMW  00000001)
Mar 17 18:35:18.659070 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW  00000001)
Mar 17 18:35:18.659076 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW  00000001)
Mar 17 18:35:18.659175 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Mar 17 18:35:18.659184 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Mar 17 18:35:18.659190 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Mar 17 18:35:18.659195 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Mar 17 18:35:18.659200 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Mar 17 18:35:18.659205 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Mar 17 18:35:18.659210 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Mar 17 18:35:18.659214 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Mar 17 18:35:18.659221 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Mar 17 18:35:18.659226 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Mar 17 18:35:18.659231 kernel: system APIC only can use physical flat
Mar 17 18:35:18.659236 kernel: Setting APIC routing to physical flat.
Mar 17 18:35:18.659241 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 18:35:18.659246 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 17 18:35:18.659251 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 17 18:35:18.659255 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 17 18:35:18.659260 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 17 18:35:18.659265 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 17 18:35:18.659272 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 17 18:35:18.659276 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 17 18:35:18.659281 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Mar 17 18:35:18.659289 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Mar 17 18:35:18.659297 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Mar 17 18:35:18.659304 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Mar 17 18:35:18.659309 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Mar 17 18:35:18.659314 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Mar 17 18:35:18.659318 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Mar 17 18:35:18.659325 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Mar 17 18:35:18.659330 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Mar 17 18:35:18.659334 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Mar 17 18:35:18.659339 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Mar 17 18:35:18.659344 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Mar 17 18:35:18.659349 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Mar 17 18:35:18.659354 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Mar 17 18:35:18.659358 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Mar 17 18:35:18.659363 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Mar 17 18:35:18.659368 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Mar 17 18:35:18.659374 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Mar 17 18:35:18.659379 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Mar 17 18:35:18.659383 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Mar 17 18:35:18.659389 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Mar 17 18:35:18.659393 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Mar 17 18:35:18.659398 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Mar 17 18:35:18.659403 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Mar 17 18:35:18.659408 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Mar 17 18:35:18.659412 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Mar 17 18:35:18.659417 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Mar 17 18:35:18.659423 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Mar 17 18:35:18.659428 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Mar 17 18:35:18.659433 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Mar 17 18:35:18.659437 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Mar 17 18:35:18.659442 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Mar 17 18:35:18.659447 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Mar 17 18:35:18.659452 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Mar 17 18:35:18.659457 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Mar 17 18:35:18.659461 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Mar 17 18:35:18.659466 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Mar 17 18:35:18.659472 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Mar 17 18:35:18.659477 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Mar 17 18:35:18.659482 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Mar 17 18:35:18.659486 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Mar 17 18:35:18.659491 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Mar 17 18:35:18.659496 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Mar 17 18:35:18.659501 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Mar 17 18:35:18.659505 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Mar 17 18:35:18.659510 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Mar 17 18:35:18.659515 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Mar 17 18:35:18.659521 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Mar 17 18:35:18.659526 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Mar 17 18:35:18.659531 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Mar 17 18:35:18.659536 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Mar 17 18:35:18.659541 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Mar 17 18:35:18.659547 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Mar 17 18:35:18.659554 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Mar 17 18:35:18.659560 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Mar 17 18:35:18.659565 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Mar 17 18:35:18.659570 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Mar 17 18:35:18.659577 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Mar 17 18:35:18.659582 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Mar 17 18:35:18.659587 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Mar 17 18:35:18.659592 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Mar 17 18:35:18.659598 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Mar 17 18:35:18.659603 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Mar 17 18:35:18.659608 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Mar 17 18:35:18.659613 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Mar 17 18:35:18.659619 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Mar 17 18:35:18.659624 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Mar 17 18:35:18.659629 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Mar 17 18:35:18.659634 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Mar 17 18:35:18.659640 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Mar 17 18:35:18.659645 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Mar 17 18:35:18.659650 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Mar 17 18:35:18.659655 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Mar 17 18:35:18.659660 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Mar 17 18:35:18.659665 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Mar 17 18:35:18.659672 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Mar 17 18:35:18.659677 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Mar 17 18:35:18.659682 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Mar 17 18:35:18.659687 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Mar 17 18:35:18.659693 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Mar 17 18:35:18.659698 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Mar 17 18:35:18.659703 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Mar 17 18:35:18.659708 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Mar 17 18:35:18.659713 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Mar 17 18:35:18.659719 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Mar 17 18:35:18.659725 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Mar 17 18:35:18.659730 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Mar 17 18:35:18.659735 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Mar 17 18:35:18.659740 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Mar 17 18:35:18.659745 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Mar 17 18:35:18.659750 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Mar 17 18:35:18.659755 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Mar 17 18:35:18.659760 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Mar 17 18:35:18.659766 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Mar 17 18:35:18.659772 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Mar 17 18:35:18.659777 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Mar 17 18:35:18.659782 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Mar 17 18:35:18.659787 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Mar 17 18:35:18.659792 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Mar 17 18:35:18.659797 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Mar 17 18:35:18.659802 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Mar 17 18:35:18.659807 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Mar 17 18:35:18.659812 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Mar 17 18:35:18.659818 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Mar 17 18:35:18.659824 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Mar 17 18:35:18.659829 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Mar 17 18:35:18.659834 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Mar 17 18:35:18.659839 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Mar 17 18:35:18.659844 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Mar 17 18:35:18.659850 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Mar 17 18:35:18.659855 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Mar 17 18:35:18.659860 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Mar 17 18:35:18.659865 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Mar 17 18:35:18.659870 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Mar 17 18:35:18.659876 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Mar 17 18:35:18.659882 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Mar 17 18:35:18.659887 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Mar 17 18:35:18.659892 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Mar 17 18:35:18.659897 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Mar 17 18:35:18.659902 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Mar 17 18:35:18.659907 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 18:35:18.659912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 18:35:18.659918 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Mar 17 18:35:18.659928 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Mar 17 18:35:18.659935 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Mar 17 18:35:18.659941 kernel: Zone ranges:
Mar 17 18:35:18.659946 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:35:18.659951 kernel:   DMA32    [mem 0x0000000001000000-0x000000007fffffff]
Mar 17 18:35:18.659956 kernel:   Normal   empty
Mar 17 18:35:18.659962 kernel: Movable zone start for each node
Mar 17 18:35:18.659967 kernel: Early memory node ranges
Mar 17 18:35:18.659972 kernel:   node   0: [mem 0x0000000000001000-0x000000000009dfff]
Mar 17 18:35:18.659977 kernel:   node   0: [mem 0x0000000000100000-0x000000007fedffff]
Mar 17 18:35:18.659984 kernel:   node   0: [mem 0x000000007ff00000-0x000000007fffffff]
Mar 17 18:35:18.659989 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Mar 17 18:35:18.659994 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:35:18.659999 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Mar 17 18:35:18.660005 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Mar 17 18:35:18.660010 kernel: ACPI: PM-Timer IO Port: 0x1008
Mar 17 18:35:18.660015 kernel: system APIC only can use physical flat
Mar 17 18:35:18.660020 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Mar 17 18:35:18.660025 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Mar 17 18:35:18.660030 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Mar 17 18:35:18.660037 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Mar 17 18:35:18.660042 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Mar 17 18:35:18.660047 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Mar 17 18:35:18.660052 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Mar 17 18:35:18.660057 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Mar 17 18:35:18.660063 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Mar 17 18:35:18.660068 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Mar 17 18:35:18.660073 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Mar 17 18:35:18.660078 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Mar 17 18:35:18.660093 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Mar 17 18:35:18.660099 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Mar 17 18:35:18.660104 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Mar 17 18:35:18.660109 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Mar 17 18:35:18.660114 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Mar 17 18:35:18.660119 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Mar 17 18:35:18.660124 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Mar 17 18:35:18.660129 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Mar 17 18:35:18.660135 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Mar 17 18:35:18.660141 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Mar 17 18:35:18.660146 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Mar 17 18:35:18.660151 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Mar 17 18:35:18.660156 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Mar 17 18:35:18.660162 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Mar 17 18:35:18.660167 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Mar 17 18:35:18.660172 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Mar 17 18:35:18.660178 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Mar 17 18:35:18.660183 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Mar 17 18:35:18.660188 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Mar 17 18:35:18.660194 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Mar 17 18:35:18.660199 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Mar 17 18:35:18.660204 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Mar 17 18:35:18.660209 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Mar 17 18:35:18.660215 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Mar 17 18:35:18.660220 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Mar 17 18:35:18.660225 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Mar 17 18:35:18.660230 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Mar 17 18:35:18.660236 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Mar 17 18:35:18.660242 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Mar 17 18:35:18.660247 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Mar 17 18:35:18.660253 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Mar 17 18:35:18.660258 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Mar 17 18:35:18.660263 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Mar 17 18:35:18.660268 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Mar 17 18:35:18.660273 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Mar 17 18:35:18.660278 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Mar 17 18:35:18.660284 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Mar 17 18:35:18.660289 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Mar 17 18:35:18.660295 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Mar 17 18:35:18.660301 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Mar 17 18:35:18.660306 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Mar 17 18:35:18.660311 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Mar 17 18:35:18.660316 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Mar 17 18:35:18.660321 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Mar 17 18:35:18.660327 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Mar 17 18:35:18.660332 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Mar 17 18:35:18.660337 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Mar 17 18:35:18.660343 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Mar 17 18:35:18.660348 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Mar 17 18:35:18.660353 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Mar 17 18:35:18.660358 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Mar 17 18:35:18.660364 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Mar 17 18:35:18.660369 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Mar 17 18:35:18.660374 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Mar 17 18:35:18.660379 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Mar 17 18:35:18.660384 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Mar 17 18:35:18.660389 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Mar 17 18:35:18.660396 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Mar 17 18:35:18.660401 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Mar 17 18:35:18.660406 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Mar 17 18:35:18.660411 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Mar 17 18:35:18.660416 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Mar 17 18:35:18.660421 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Mar 17 18:35:18.660427 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Mar 17 18:35:18.660433 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Mar 17 18:35:18.660439 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Mar 17 18:35:18.660446 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Mar 17 18:35:18.660451 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Mar 17 18:35:18.660456 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Mar 17 18:35:18.660464 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Mar 17 18:35:18.660472 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Mar 17 18:35:18.660478 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Mar 17 18:35:18.660483 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Mar 17 18:35:18.660489 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Mar 17 18:35:18.660494 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Mar 17 18:35:18.660500 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Mar 17 18:35:18.660505 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Mar 17 18:35:18.660510 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Mar 17 18:35:18.660516 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Mar 17 18:35:18.660521 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Mar 17 18:35:18.660526 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Mar 17 18:35:18.660531 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Mar 17 18:35:18.660537 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Mar 17 18:35:18.660542 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Mar 17 18:35:18.660547 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Mar 17 18:35:18.660553 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Mar 17 18:35:18.660558 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Mar 17 18:35:18.660563 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Mar 17 18:35:18.660568 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Mar 17 18:35:18.660574 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Mar 17 18:35:18.660579 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Mar 17 18:35:18.660584 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Mar 17 18:35:18.660589 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Mar 17 18:35:18.660594 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Mar 17 18:35:18.660601 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Mar 17 18:35:18.660606 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Mar 17 18:35:18.660611 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Mar 17 18:35:18.660616 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Mar 17 18:35:18.660621 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Mar 17 18:35:18.660626 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Mar 17 18:35:18.660631 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Mar 17 18:35:18.660637 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Mar 17 18:35:18.660642 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Mar 17 18:35:18.660647 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Mar 17 18:35:18.660653 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Mar 17 18:35:18.660658 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Mar 17 18:35:18.660663 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Mar 17 18:35:18.660668 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Mar 17 18:35:18.660674 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Mar 17 18:35:18.660679 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Mar 17 18:35:18.660684 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Mar 17 18:35:18.660689 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Mar 17 18:35:18.660694 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Mar 17 18:35:18.660701 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Mar 17 18:35:18.660706 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Mar 17 18:35:18.660711 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Mar 17 18:35:18.660716 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:35:18.660722 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Mar 17 18:35:18.660727 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:35:18.660732 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Mar 17 18:35:18.660738 kernel: TSC deadline timer available
Mar 17 18:35:18.660743 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Mar 17 18:35:18.660749 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Mar 17 18:35:18.660755 kernel: Booting paravirtualized kernel on VMware hypervisor
Mar 17 18:35:18.660760 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:35:18.660765 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Mar 17 18:35:18.660771 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Mar 17 18:35:18.660776 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Mar 17 18:35:18.660781 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 
Mar 17 18:35:18.660787 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 
Mar 17 18:35:18.660792 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 
Mar 17 18:35:18.660798 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 
Mar 17 18:35:18.660803 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 
Mar 17 18:35:18.660808 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 
Mar 17 18:35:18.660814 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 
Mar 17 18:35:18.660826 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 
Mar 17 18:35:18.660833 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 
Mar 17 18:35:18.660839 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 
Mar 17 18:35:18.660844 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 
Mar 17 18:35:18.660850 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 
Mar 17 18:35:18.660856 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 
Mar 17 18:35:18.660862 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 
Mar 17 18:35:18.660867 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 
Mar 17 18:35:18.660873 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 
Mar 17 18:35:18.660878 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515808
Mar 17 18:35:18.660884 kernel: Policy zone: DMA32
Mar 17 18:35:18.660890 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:35:18.660896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:35:18.660902 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Mar 17 18:35:18.660908 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Mar 17 18:35:18.660914 kernel: printk: log_buf_len min size: 262144 bytes
Mar 17 18:35:18.660920 kernel: printk: log_buf_len: 1048576 bytes
Mar 17 18:35:18.660925 kernel: printk: early log buf free: 239728(91%)
Mar 17 18:35:18.660931 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:35:18.660936 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 18:35:18.660942 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:35:18.660948 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 155976K reserved, 0K cma-reserved)
Mar 17 18:35:18.660955 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Mar 17 18:35:18.660960 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:35:18.660966 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:35:18.660973 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:35:18.660979 kernel: rcu:         RCU event tracing is enabled.
Mar 17 18:35:18.660985 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Mar 17 18:35:18.660991 kernel:         Rude variant of Tasks RCU enabled.
Mar 17 18:35:18.660997 kernel:         Tracing variant of Tasks RCU enabled.
Mar 17 18:35:18.661002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:35:18.661008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Mar 17 18:35:18.661014 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Mar 17 18:35:18.661019 kernel: random: crng init done
Mar 17 18:35:18.661025 kernel: Console: colour VGA+ 80x25
Mar 17 18:35:18.661030 kernel: printk: console [tty0] enabled
Mar 17 18:35:18.661037 kernel: printk: console [ttyS0] enabled
Mar 17 18:35:18.661042 kernel: ACPI: Core revision 20210730
Mar 17 18:35:18.661048 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Mar 17 18:35:18.661054 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:35:18.661059 kernel: x2apic enabled
Mar 17 18:35:18.661065 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:35:18.661071 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:35:18.661077 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Mar 17 18:35:18.661091 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Mar 17 18:35:18.661099 kernel: Disabled fast string operations
Mar 17 18:35:18.661105 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 17 18:35:18.661111 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 17 18:35:18.661117 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:35:18.661123 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:35:18.661131 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 17 18:35:18.661137 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 17 18:35:18.661143 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 17 18:35:18.661148 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:35:18.661155 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Mar 17 18:35:18.661163 kernel: RETBleed: Mitigation: Enhanced IBRS
Mar 17 18:35:18.661169 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:35:18.661176 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:35:18.661185 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 18:35:18.661191 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 17 18:35:18.661197 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 17 18:35:18.661202 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:35:18.661208 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:35:18.661215 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:35:18.661221 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Mar 17 18:35:18.661226 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 18:35:18.661233 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:35:18.661239 kernel: pid_max: default: 131072 minimum: 1024
Mar 17 18:35:18.661244 kernel: LSM: Security Framework initializing
Mar 17 18:35:18.661250 kernel: SELinux:  Initializing.
Mar 17 18:35:18.661256 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:35:18.661261 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 18:35:18.661268 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Mar 17 18:35:18.661274 kernel: Performance Events: Skylake events, core PMU driver.
Mar 17 18:35:18.661279 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Mar 17 18:35:18.661285 kernel: core: CPUID marked event: 'instructions' unavailable
Mar 17 18:35:18.661290 kernel: core: CPUID marked event: 'bus cycles' unavailable
Mar 17 18:35:18.661296 kernel: core: CPUID marked event: 'cache references' unavailable
Mar 17 18:35:18.661302 kernel: core: CPUID marked event: 'cache misses' unavailable
Mar 17 18:35:18.661307 kernel: core: CPUID marked event: 'branch instructions' unavailable
Mar 17 18:35:18.661313 kernel: core: CPUID marked event: 'branch misses' unavailable
Mar 17 18:35:18.661319 kernel: ... version:                1
Mar 17 18:35:18.661325 kernel: ... bit width:              48
Mar 17 18:35:18.661330 kernel: ... generic registers:      4
Mar 17 18:35:18.661336 kernel: ... value mask:             0000ffffffffffff
Mar 17 18:35:18.661341 kernel: ... max period:             000000007fffffff
Mar 17 18:35:18.661347 kernel: ... fixed-purpose events:   0
Mar 17 18:35:18.661353 kernel: ... event mask:             000000000000000f
Mar 17 18:35:18.661358 kernel: signal: max sigframe size: 1776
Mar 17 18:35:18.661365 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:35:18.661371 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 18:35:18.661377 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:35:18.661382 kernel: x86: Booting SMP configuration:
Mar 17 18:35:18.661388 kernel: .... node  #0, CPUs:          #1
Mar 17 18:35:18.661393 kernel: Disabled fast string operations
Mar 17 18:35:18.661399 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Mar 17 18:35:18.661404 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 17 18:35:18.661410 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 18:35:18.661415 kernel: smpboot: Max logical packages: 128
Mar 17 18:35:18.661422 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Mar 17 18:35:18.661428 kernel: devtmpfs: initialized
Mar 17 18:35:18.661433 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:35:18.661439 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Mar 17 18:35:18.661444 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:35:18.661450 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Mar 17 18:35:18.661456 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:35:18.661461 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:35:18.661467 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:35:18.661474 kernel: audit: type=2000 audit(1742236517.062:1): state=initialized audit_enabled=0 res=1
Mar 17 18:35:18.661479 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:35:18.661485 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:35:18.661491 kernel: cpuidle: using governor menu
Mar 17 18:35:18.661496 kernel: Simple Boot Flag at 0x36 set to 0x80
Mar 17 18:35:18.661502 kernel: ACPI: bus type PCI registered
Mar 17 18:35:18.661507 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:35:18.661513 kernel: dca service started, version 1.12.1
Mar 17 18:35:18.661519 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Mar 17 18:35:18.661525 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
Mar 17 18:35:18.661531 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:35:18.661537 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:35:18.661542 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:35:18.661548 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:35:18.661553 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:35:18.661559 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:35:18.661565 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:35:18.661571 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:35:18.661578 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:35:18.661583 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:35:18.661589 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:35:18.661595 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:35:18.661600 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Mar 17 18:35:18.661606 kernel: ACPI: Interpreter enabled
Mar 17 18:35:18.661612 kernel: ACPI: PM: (supports S0 S1 S5)
Mar 17 18:35:18.661617 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:35:18.661623 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:35:18.661630 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Mar 17 18:35:18.661635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Mar 17 18:35:18.661711 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:35:18.661762 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Mar 17 18:35:18.661808 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Mar 17 18:35:18.661816 kernel: PCI host bridge to bus 0000:00
Mar 17 18:35:18.661865 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:35:18.661910 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window]
Mar 17 18:35:18.661951 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window]
Mar 17 18:35:18.661992 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window]
Mar 17 18:35:18.662033 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window]
Mar 17 18:35:18.662073 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:35:18.662135 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Mar 17 18:35:18.662176 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xfeff window]
Mar 17 18:35:18.662221 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Mar 17 18:35:18.662304 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Mar 17 18:35:18.675166 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Mar 17 18:35:18.675243 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Mar 17 18:35:18.675298 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Mar 17 18:35:18.675347 kernel: pci 0000:00:07.1: reg 0x20: [io  0x1060-0x106f]
Mar 17 18:35:18.675400 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Mar 17 18:35:18.675448 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Mar 17 18:35:18.675495 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Mar 17 18:35:18.675541 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Mar 17 18:35:18.675591 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Mar 17 18:35:18.675639 kernel: pci 0000:00:07.3: quirk: [io  0x1000-0x103f] claimed by PIIX4 ACPI
Mar 17 18:35:18.675686 kernel: pci 0000:00:07.3: quirk: [io  0x1040-0x104f] claimed by PIIX4 SMB
Mar 17 18:35:18.675739 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Mar 17 18:35:18.675785 kernel: pci 0000:00:07.7: reg 0x10: [io  0x1080-0x10bf]
Mar 17 18:35:18.675832 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Mar 17 18:35:18.675884 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Mar 17 18:35:18.675937 kernel: pci 0000:00:0f.0: reg 0x10: [io  0x1070-0x107f]
Mar 17 18:35:18.675984 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Mar 17 18:35:18.676032 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Mar 17 18:35:18.676078 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Mar 17 18:35:18.676136 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:35:18.676187 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Mar 17 18:35:18.676245 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676294 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.676346 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676396 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.676448 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676495 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.676546 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676594 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.676645 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676696 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.676746 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676793 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.676843 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676891 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.676941 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.676992 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677043 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677111 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677167 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677214 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677265 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677313 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677367 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677416 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677466 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677514 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677563 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677613 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677664 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677711 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677761 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677808 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677858 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.677907 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.677962 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678010 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678059 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678121 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678174 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678226 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678277 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678326 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678375 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678422 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678474 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678523 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678575 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678621 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678673 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678720 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678772 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678819 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678871 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.678918 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.678973 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.679021 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.679071 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.680295 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.680358 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.680422 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.680497 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.680553 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.680624 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Mar 17 18:35:18.680694 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.680774 kernel: pci_bus 0000:01: extended config space not accessible
Mar 17 18:35:18.680850 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Mar 17 18:35:18.680935 kernel: pci_bus 0000:02: extended config space not accessible
Mar 17 18:35:18.680945 kernel: acpiphp: Slot [32] registered
Mar 17 18:35:18.680951 kernel: acpiphp: Slot [33] registered
Mar 17 18:35:18.680958 kernel: acpiphp: Slot [34] registered
Mar 17 18:35:18.680967 kernel: acpiphp: Slot [35] registered
Mar 17 18:35:18.680976 kernel: acpiphp: Slot [36] registered
Mar 17 18:35:18.680987 kernel: acpiphp: Slot [37] registered
Mar 17 18:35:18.680997 kernel: acpiphp: Slot [38] registered
Mar 17 18:35:18.681006 kernel: acpiphp: Slot [39] registered
Mar 17 18:35:18.681015 kernel: acpiphp: Slot [40] registered
Mar 17 18:35:18.681025 kernel: acpiphp: Slot [41] registered
Mar 17 18:35:18.681033 kernel: acpiphp: Slot [42] registered
Mar 17 18:35:18.681042 kernel: acpiphp: Slot [43] registered
Mar 17 18:35:18.681050 kernel: acpiphp: Slot [44] registered
Mar 17 18:35:18.681056 kernel: acpiphp: Slot [45] registered
Mar 17 18:35:18.681061 kernel: acpiphp: Slot [46] registered
Mar 17 18:35:18.681069 kernel: acpiphp: Slot [47] registered
Mar 17 18:35:18.681075 kernel: acpiphp: Slot [48] registered
Mar 17 18:35:18.681091 kernel: acpiphp: Slot [49] registered
Mar 17 18:35:18.681100 kernel: acpiphp: Slot [50] registered
Mar 17 18:35:18.681108 kernel: acpiphp: Slot [51] registered
Mar 17 18:35:18.681113 kernel: acpiphp: Slot [52] registered
Mar 17 18:35:18.681119 kernel: acpiphp: Slot [53] registered
Mar 17 18:35:18.681124 kernel: acpiphp: Slot [54] registered
Mar 17 18:35:18.681130 kernel: acpiphp: Slot [55] registered
Mar 17 18:35:18.681137 kernel: acpiphp: Slot [56] registered
Mar 17 18:35:18.681145 kernel: acpiphp: Slot [57] registered
Mar 17 18:35:18.681154 kernel: acpiphp: Slot [58] registered
Mar 17 18:35:18.681163 kernel: acpiphp: Slot [59] registered
Mar 17 18:35:18.681172 kernel: acpiphp: Slot [60] registered
Mar 17 18:35:18.681181 kernel: acpiphp: Slot [61] registered
Mar 17 18:35:18.681189 kernel: acpiphp: Slot [62] registered
Mar 17 18:35:18.681197 kernel: acpiphp: Slot [63] registered
Mar 17 18:35:18.681275 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Mar 17 18:35:18.681347 kernel: pci 0000:00:11.0:   bridge window [io  0x2000-0x3fff]
Mar 17 18:35:18.681414 kernel: pci 0000:00:11.0:   bridge window [mem 0xfd600000-0xfdffffff]
Mar 17 18:35:18.681491 kernel: pci 0000:00:11.0:   bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Mar 17 18:35:18.681554 kernel: pci 0000:00:11.0:   bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Mar 17 18:35:18.681608 kernel: pci 0000:00:11.0:   bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode)
Mar 17 18:35:18.681658 kernel: pci 0000:00:11.0:   bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode)
Mar 17 18:35:18.681713 kernel: pci 0000:00:11.0:   bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode)
Mar 17 18:35:18.681760 kernel: pci 0000:00:11.0:   bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode)
Mar 17 18:35:18.681812 kernel: pci 0000:00:11.0:   bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Mar 17 18:35:18.681870 kernel: pci 0000:00:11.0:   bridge window [io  0x0000-0x0cf7 window] (subtractive decode)
Mar 17 18:35:18.681930 kernel: pci 0000:00:11.0:   bridge window [io  0x0d00-0xfeff window] (subtractive decode)
Mar 17 18:35:18.681991 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Mar 17 18:35:18.682063 kernel: pci 0000:03:00.0: reg 0x10: [io  0x4000-0x4007]
Mar 17 18:35:18.683248 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Mar 17 18:35:18.683319 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Mar 17 18:35:18.683375 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Mar 17 18:35:18.683436 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Mar 17 18:35:18.683507 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Mar 17 18:35:18.683560 kernel: pci 0000:00:15.0:   bridge window [io  0x4000-0x4fff]
Mar 17 18:35:18.683619 kernel: pci 0000:00:15.0:   bridge window [mem 0xfd500000-0xfd5fffff]
Mar 17 18:35:18.683686 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Mar 17 18:35:18.683737 kernel: pci 0000:00:15.1:   bridge window [io  0x8000-0x8fff]
Mar 17 18:35:18.683783 kernel: pci 0000:00:15.1:   bridge window [mem 0xfd100000-0xfd1fffff]
Mar 17 18:35:18.683838 kernel: pci 0000:00:15.1:   bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Mar 17 18:35:18.683894 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Mar 17 18:35:18.683946 kernel: pci 0000:00:15.2:   bridge window [io  0xc000-0xcfff]
Mar 17 18:35:18.683999 kernel: pci 0000:00:15.2:   bridge window [mem 0xfcd00000-0xfcdfffff]
Mar 17 18:35:18.684061 kernel: pci 0000:00:15.2:   bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Mar 17 18:35:18.687199 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Mar 17 18:35:18.687293 kernel: pci 0000:00:15.3:   bridge window [mem 0xfc900000-0xfc9fffff]
Mar 17 18:35:18.687379 kernel: pci 0000:00:15.3:   bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Mar 17 18:35:18.687456 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Mar 17 18:35:18.687536 kernel: pci 0000:00:15.4:   bridge window [mem 0xfc500000-0xfc5fffff]
Mar 17 18:35:18.687611 kernel: pci 0000:00:15.4:   bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Mar 17 18:35:18.687682 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Mar 17 18:35:18.687758 kernel: pci 0000:00:15.5:   bridge window [mem 0xfc100000-0xfc1fffff]
Mar 17 18:35:18.687828 kernel: pci 0000:00:15.5:   bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Mar 17 18:35:18.687904 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Mar 17 18:35:18.687979 kernel: pci 0000:00:15.6:   bridge window [mem 0xfbd00000-0xfbdfffff]
Mar 17 18:35:18.688054 kernel: pci 0000:00:15.6:   bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Mar 17 18:35:18.688129 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Mar 17 18:35:18.688200 kernel: pci 0000:00:15.7:   bridge window [mem 0xfb900000-0xfb9fffff]
Mar 17 18:35:18.688258 kernel: pci 0000:00:15.7:   bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Mar 17 18:35:18.688318 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Mar 17 18:35:18.688368 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Mar 17 18:35:18.688415 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Mar 17 18:35:18.688474 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Mar 17 18:35:18.688534 kernel: pci 0000:0b:00.0: reg 0x1c: [io  0x5000-0x500f]
Mar 17 18:35:18.688582 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Mar 17 18:35:18.688648 kernel: pci 0000:0b:00.0: supports D1 D2
Mar 17 18:35:18.688702 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 18:35:18.688751 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Mar 17 18:35:18.688798 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Mar 17 18:35:18.688847 kernel: pci 0000:00:16.0:   bridge window [io  0x5000-0x5fff]
Mar 17 18:35:18.688910 kernel: pci 0000:00:16.0:   bridge window [mem 0xfd400000-0xfd4fffff]
Mar 17 18:35:18.688968 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Mar 17 18:35:18.689033 kernel: pci 0000:00:16.1:   bridge window [io  0x9000-0x9fff]
Mar 17 18:35:18.689099 kernel: pci 0000:00:16.1:   bridge window [mem 0xfd000000-0xfd0fffff]
Mar 17 18:35:18.689153 kernel: pci 0000:00:16.1:   bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Mar 17 18:35:18.689202 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Mar 17 18:35:18.689268 kernel: pci 0000:00:16.2:   bridge window [io  0xd000-0xdfff]
Mar 17 18:35:18.689332 kernel: pci 0000:00:16.2:   bridge window [mem 0xfcc00000-0xfccfffff]
Mar 17 18:35:18.689380 kernel: pci 0000:00:16.2:   bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Mar 17 18:35:18.689429 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Mar 17 18:35:18.689501 kernel: pci 0000:00:16.3:   bridge window [mem 0xfc800000-0xfc8fffff]
Mar 17 18:35:18.689574 kernel: pci 0000:00:16.3:   bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Mar 17 18:35:18.689635 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Mar 17 18:35:18.689683 kernel: pci 0000:00:16.4:   bridge window [mem 0xfc400000-0xfc4fffff]
Mar 17 18:35:18.689732 kernel: pci 0000:00:16.4:   bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Mar 17 18:35:18.689790 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Mar 17 18:35:18.689839 kernel: pci 0000:00:16.5:   bridge window [mem 0xfc000000-0xfc0fffff]
Mar 17 18:35:18.689911 kernel: pci 0000:00:16.5:   bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Mar 17 18:35:18.689984 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Mar 17 18:35:18.690049 kernel: pci 0000:00:16.6:   bridge window [mem 0xfbc00000-0xfbcfffff]
Mar 17 18:35:18.697142 kernel: pci 0000:00:16.6:   bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Mar 17 18:35:18.697240 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Mar 17 18:35:18.697297 kernel: pci 0000:00:16.7:   bridge window [mem 0xfb800000-0xfb8fffff]
Mar 17 18:35:18.697351 kernel: pci 0000:00:16.7:   bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Mar 17 18:35:18.697423 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Mar 17 18:35:18.697476 kernel: pci 0000:00:17.0:   bridge window [io  0x6000-0x6fff]
Mar 17 18:35:18.697534 kernel: pci 0000:00:17.0:   bridge window [mem 0xfd300000-0xfd3fffff]
Mar 17 18:35:18.697609 kernel: pci 0000:00:17.0:   bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Mar 17 18:35:18.697683 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Mar 17 18:35:18.697745 kernel: pci 0000:00:17.1:   bridge window [io  0xa000-0xafff]
Mar 17 18:35:18.697818 kernel: pci 0000:00:17.1:   bridge window [mem 0xfcf00000-0xfcffffff]
Mar 17 18:35:18.697889 kernel: pci 0000:00:17.1:   bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Mar 17 18:35:18.697946 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Mar 17 18:35:18.698007 kernel: pci 0000:00:17.2:   bridge window [io  0xe000-0xefff]
Mar 17 18:35:18.698062 kernel: pci 0000:00:17.2:   bridge window [mem 0xfcb00000-0xfcbfffff]
Mar 17 18:35:18.698132 kernel: pci 0000:00:17.2:   bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Mar 17 18:35:18.698202 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Mar 17 18:35:18.698259 kernel: pci 0000:00:17.3:   bridge window [mem 0xfc700000-0xfc7fffff]
Mar 17 18:35:18.698316 kernel: pci 0000:00:17.3:   bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Mar 17 18:35:18.698384 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Mar 17 18:35:18.698439 kernel: pci 0000:00:17.4:   bridge window [mem 0xfc300000-0xfc3fffff]
Mar 17 18:35:18.698507 kernel: pci 0000:00:17.4:   bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Mar 17 18:35:18.698566 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Mar 17 18:35:18.698618 kernel: pci 0000:00:17.5:   bridge window [mem 0xfbf00000-0xfbffffff]
Mar 17 18:35:18.698682 kernel: pci 0000:00:17.5:   bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Mar 17 18:35:18.698741 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Mar 17 18:35:18.698791 kernel: pci 0000:00:17.6:   bridge window [mem 0xfbb00000-0xfbbfffff]
Mar 17 18:35:18.698838 kernel: pci 0000:00:17.6:   bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Mar 17 18:35:18.698894 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Mar 17 18:35:18.698956 kernel: pci 0000:00:17.7:   bridge window [mem 0xfb700000-0xfb7fffff]
Mar 17 18:35:18.699016 kernel: pci 0000:00:17.7:   bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Mar 17 18:35:18.699093 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Mar 17 18:35:18.699162 kernel: pci 0000:00:18.0:   bridge window [io  0x7000-0x7fff]
Mar 17 18:35:18.699226 kernel: pci 0000:00:18.0:   bridge window [mem 0xfd200000-0xfd2fffff]
Mar 17 18:35:18.699280 kernel: pci 0000:00:18.0:   bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Mar 17 18:35:18.699341 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Mar 17 18:35:18.699412 kernel: pci 0000:00:18.1:   bridge window [io  0xb000-0xbfff]
Mar 17 18:35:18.699461 kernel: pci 0000:00:18.1:   bridge window [mem 0xfce00000-0xfcefffff]
Mar 17 18:35:18.699517 kernel: pci 0000:00:18.1:   bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Mar 17 18:35:18.699593 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Mar 17 18:35:18.699654 kernel: pci 0000:00:18.2:   bridge window [mem 0xfca00000-0xfcafffff]
Mar 17 18:35:18.699713 kernel: pci 0000:00:18.2:   bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Mar 17 18:35:18.699766 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Mar 17 18:35:18.699813 kernel: pci 0000:00:18.3:   bridge window [mem 0xfc600000-0xfc6fffff]
Mar 17 18:35:18.699879 kernel: pci 0000:00:18.3:   bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Mar 17 18:35:18.699943 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Mar 17 18:35:18.700007 kernel: pci 0000:00:18.4:   bridge window [mem 0xfc200000-0xfc2fffff]
Mar 17 18:35:18.700070 kernel: pci 0000:00:18.4:   bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Mar 17 18:35:18.700140 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Mar 17 18:35:18.700199 kernel: pci 0000:00:18.5:   bridge window [mem 0xfbe00000-0xfbefffff]
Mar 17 18:35:18.700271 kernel: pci 0000:00:18.5:   bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Mar 17 18:35:18.700339 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Mar 17 18:35:18.700406 kernel: pci 0000:00:18.6:   bridge window [mem 0xfba00000-0xfbafffff]
Mar 17 18:35:18.700474 kernel: pci 0000:00:18.6:   bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Mar 17 18:35:18.700540 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Mar 17 18:35:18.700602 kernel: pci 0000:00:18.7:   bridge window [mem 0xfb600000-0xfb6fffff]
Mar 17 18:35:18.700653 kernel: pci 0000:00:18.7:   bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Mar 17 18:35:18.700662 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Mar 17 18:35:18.700672 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Mar 17 18:35:18.700678 kernel: ACPI: PCI: Interrupt link LNKB disabled
Mar 17 18:35:18.700684 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:35:18.700689 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Mar 17 18:35:18.700695 kernel: iommu: Default domain type: Translated 
Mar 17 18:35:18.700703 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Mar 17 18:35:18.700766 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Mar 17 18:35:18.700837 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:35:18.700893 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Mar 17 18:35:18.700903 kernel: vgaarb: loaded
Mar 17 18:35:18.700909 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:35:18.700918 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Mar 17 18:35:18.700927 kernel: PTP clock support registered
Mar 17 18:35:18.700933 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:35:18.700940 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:35:18.700946 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Mar 17 18:35:18.700952 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Mar 17 18:35:18.700958 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Mar 17 18:35:18.700964 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Mar 17 18:35:18.700970 kernel: clocksource: Switched to clocksource tsc-early
Mar 17 18:35:18.700976 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:35:18.700986 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:35:18.700995 kernel: pnp: PnP ACPI init
Mar 17 18:35:18.701062 kernel: system 00:00: [io  0x1000-0x103f] has been reserved
Mar 17 18:35:18.701131 kernel: system 00:00: [io  0x1040-0x104f] has been reserved
Mar 17 18:35:18.701196 kernel: system 00:00: [io  0x0cf0-0x0cf1] has been reserved
Mar 17 18:35:18.701252 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Mar 17 18:35:18.701301 kernel: pnp 00:06: [dma 2]
Mar 17 18:35:18.701362 kernel: system 00:07: [io  0xfce0-0xfcff] has been reserved
Mar 17 18:35:18.701411 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Mar 17 18:35:18.701472 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Mar 17 18:35:18.701485 kernel: pnp: PnP ACPI: found 8 devices
Mar 17 18:35:18.701495 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:35:18.701505 kernel: NET: Registered PF_INET protocol family
Mar 17 18:35:18.701511 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:35:18.701517 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 18:35:18.701523 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:35:18.701530 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 18:35:18.701536 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Mar 17 18:35:18.701542 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 18:35:18.701548 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:35:18.701553 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 18:35:18.701559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:35:18.701565 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:35:18.701973 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Mar 17 18:35:18.702048 kernel: pci 0000:00:15.3: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Mar 17 18:35:18.702188 kernel: pci 0000:00:15.4: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Mar 17 18:35:18.702522 kernel: pci 0000:00:15.5: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Mar 17 18:35:18.702595 kernel: pci 0000:00:15.6: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Mar 17 18:35:18.702669 kernel: pci 0000:00:15.7: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Mar 17 18:35:18.702733 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Mar 17 18:35:18.702796 kernel: pci 0000:00:16.3: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Mar 17 18:35:18.702864 kernel: pci 0000:00:16.4: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Mar 17 18:35:18.702936 kernel: pci 0000:00:16.5: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Mar 17 18:35:18.702998 kernel: pci 0000:00:16.6: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Mar 17 18:35:18.703048 kernel: pci 0000:00:16.7: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Mar 17 18:35:18.703121 kernel: pci 0000:00:17.3: bridge window [io  0x1000-0x0fff] to [bus 16] add_size 1000
Mar 17 18:35:18.703198 kernel: pci 0000:00:17.4: bridge window [io  0x1000-0x0fff] to [bus 17] add_size 1000
Mar 17 18:35:18.703271 kernel: pci 0000:00:17.5: bridge window [io  0x1000-0x0fff] to [bus 18] add_size 1000
Mar 17 18:35:18.703323 kernel: pci 0000:00:17.6: bridge window [io  0x1000-0x0fff] to [bus 19] add_size 1000
Mar 17 18:35:18.703397 kernel: pci 0000:00:17.7: bridge window [io  0x1000-0x0fff] to [bus 1a] add_size 1000
Mar 17 18:35:18.703461 kernel: pci 0000:00:18.2: bridge window [io  0x1000-0x0fff] to [bus 1d] add_size 1000
Mar 17 18:35:18.703520 kernel: pci 0000:00:18.3: bridge window [io  0x1000-0x0fff] to [bus 1e] add_size 1000
Mar 17 18:35:18.703576 kernel: pci 0000:00:18.4: bridge window [io  0x1000-0x0fff] to [bus 1f] add_size 1000
Mar 17 18:35:18.703644 kernel: pci 0000:00:18.5: bridge window [io  0x1000-0x0fff] to [bus 20] add_size 1000
Mar 17 18:35:18.703713 kernel: pci 0000:00:18.6: bridge window [io  0x1000-0x0fff] to [bus 21] add_size 1000
Mar 17 18:35:18.703772 kernel: pci 0000:00:18.7: bridge window [io  0x1000-0x0fff] to [bus 22] add_size 1000
Mar 17 18:35:18.703827 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Mar 17 18:35:18.703883 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Mar 17 18:35:18.704297 kernel: pci 0000:00:15.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.704364 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.704418 kernel: pci 0000:00:15.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.704486 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.704537 kernel: pci 0000:00:15.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.704602 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.704654 kernel: pci 0000:00:15.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.704713 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.704768 kernel: pci 0000:00:15.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.704825 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.704878 kernel: pci 0000:00:16.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.704942 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.704991 kernel: pci 0000:00:16.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705038 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705102 kernel: pci 0000:00:16.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705151 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705216 kernel: pci 0000:00:16.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705267 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705327 kernel: pci 0000:00:16.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705386 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705445 kernel: pci 0000:00:17.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705492 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705546 kernel: pci 0000:00:17.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705606 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705669 kernel: pci 0000:00:17.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705729 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705800 kernel: pci 0000:00:17.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705858 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.705925 kernel: pci 0000:00:17.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.705992 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.706044 kernel: pci 0000:00:18.2: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.706111 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.706173 kernel: pci 0000:00:18.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.706220 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.706284 kernel: pci 0000:00:18.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.706333 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.706398 kernel: pci 0000:00:18.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.706469 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.706526 kernel: pci 0000:00:18.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.706584 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.708413 kernel: pci 0000:00:18.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.708478 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.708854 kernel: pci 0000:00:18.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.708932 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.708994 kernel: pci 0000:00:18.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.709049 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.709177 kernel: pci 0000:00:18.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.709227 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.709274 kernel: pci 0000:00:18.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.709622 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.709678 kernel: pci 0000:00:18.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.709726 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.709773 kernel: pci 0000:00:18.2: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.709819 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.709867 kernel: pci 0000:00:17.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.709913 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.709973 kernel: pci 0000:00:17.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710020 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710066 kernel: pci 0000:00:17.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710132 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710181 kernel: pci 0000:00:17.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710228 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710274 kernel: pci 0000:00:17.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710321 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710367 kernel: pci 0000:00:16.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710414 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710460 kernel: pci 0000:00:16.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710507 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710556 kernel: pci 0000:00:16.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710603 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710650 kernel: pci 0000:00:16.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710724 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710774 kernel: pci 0000:00:16.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.710821 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.710868 kernel: pci 0000:00:15.7: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.711144 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.711206 kernel: pci 0000:00:15.6: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.711259 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.711315 kernel: pci 0000:00:15.5: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.712856 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.712927 kernel: pci 0000:00:15.4: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.713228 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.713300 kernel: pci 0000:00:15.3: BAR 13: no space for [io  size 0x1000]
Mar 17 18:35:18.713358 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io  size 0x1000]
Mar 17 18:35:18.713409 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Mar 17 18:35:18.713472 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Mar 17 18:35:18.713529 kernel: pci 0000:00:11.0:   bridge window [io  0x2000-0x3fff]
Mar 17 18:35:18.713584 kernel: pci 0000:00:11.0:   bridge window [mem 0xfd600000-0xfdffffff]
Mar 17 18:35:18.713646 kernel: pci 0000:00:11.0:   bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Mar 17 18:35:18.713703 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Mar 17 18:35:18.713765 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Mar 17 18:35:18.713820 kernel: pci 0000:00:15.0:   bridge window [io  0x4000-0x4fff]
Mar 17 18:35:18.713869 kernel: pci 0000:00:15.0:   bridge window [mem 0xfd500000-0xfd5fffff]
Mar 17 18:35:18.713932 kernel: pci 0000:00:15.0:   bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Mar 17 18:35:18.713988 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Mar 17 18:35:18.714040 kernel: pci 0000:00:15.1:   bridge window [io  0x8000-0x8fff]
Mar 17 18:35:18.714347 kernel: pci 0000:00:15.1:   bridge window [mem 0xfd100000-0xfd1fffff]
Mar 17 18:35:18.714430 kernel: pci 0000:00:15.1:   bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Mar 17 18:35:18.714810 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Mar 17 18:35:18.714885 kernel: pci 0000:00:15.2:   bridge window [io  0xc000-0xcfff]
Mar 17 18:35:18.714938 kernel: pci 0000:00:15.2:   bridge window [mem 0xfcd00000-0xfcdfffff]
Mar 17 18:35:18.715343 kernel: pci 0000:00:15.2:   bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Mar 17 18:35:18.715413 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Mar 17 18:35:18.715472 kernel: pci 0000:00:15.3:   bridge window [mem 0xfc900000-0xfc9fffff]
Mar 17 18:35:18.715529 kernel: pci 0000:00:15.3:   bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Mar 17 18:35:18.715598 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Mar 17 18:35:18.715648 kernel: pci 0000:00:15.4:   bridge window [mem 0xfc500000-0xfc5fffff]
Mar 17 18:35:18.715694 kernel: pci 0000:00:15.4:   bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Mar 17 18:35:18.715743 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Mar 17 18:35:18.715806 kernel: pci 0000:00:15.5:   bridge window [mem 0xfc100000-0xfc1fffff]
Mar 17 18:35:18.715861 kernel: pci 0000:00:15.5:   bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Mar 17 18:35:18.715922 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Mar 17 18:35:18.715970 kernel: pci 0000:00:15.6:   bridge window [mem 0xfbd00000-0xfbdfffff]
Mar 17 18:35:18.716026 kernel: pci 0000:00:15.6:   bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Mar 17 18:35:18.716102 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Mar 17 18:35:18.716161 kernel: pci 0000:00:15.7:   bridge window [mem 0xfb900000-0xfb9fffff]
Mar 17 18:35:18.716222 kernel: pci 0000:00:15.7:   bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Mar 17 18:35:18.716274 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Mar 17 18:35:18.716330 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Mar 17 18:35:18.716392 kernel: pci 0000:00:16.0:   bridge window [io  0x5000-0x5fff]
Mar 17 18:35:18.716447 kernel: pci 0000:00:16.0:   bridge window [mem 0xfd400000-0xfd4fffff]
Mar 17 18:35:18.716502 kernel: pci 0000:00:16.0:   bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Mar 17 18:35:18.716555 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Mar 17 18:35:18.716614 kernel: pci 0000:00:16.1:   bridge window [io  0x9000-0x9fff]
Mar 17 18:35:18.716672 kernel: pci 0000:00:16.1:   bridge window [mem 0xfd000000-0xfd0fffff]
Mar 17 18:35:18.716724 kernel: pci 0000:00:16.1:   bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Mar 17 18:35:18.716783 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Mar 17 18:35:18.716830 kernel: pci 0000:00:16.2:   bridge window [io  0xd000-0xdfff]
Mar 17 18:35:18.716880 kernel: pci 0000:00:16.2:   bridge window [mem 0xfcc00000-0xfccfffff]
Mar 17 18:35:18.716955 kernel: pci 0000:00:16.2:   bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Mar 17 18:35:18.717013 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Mar 17 18:35:18.717064 kernel: pci 0000:00:16.3:   bridge window [mem 0xfc800000-0xfc8fffff]
Mar 17 18:35:18.717170 kernel: pci 0000:00:16.3:   bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Mar 17 18:35:18.717239 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Mar 17 18:35:18.717295 kernel: pci 0000:00:16.4:   bridge window [mem 0xfc400000-0xfc4fffff]
Mar 17 18:35:18.717356 kernel: pci 0000:00:16.4:   bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Mar 17 18:35:18.717538 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Mar 17 18:35:18.717602 kernel: pci 0000:00:16.5:   bridge window [mem 0xfc000000-0xfc0fffff]
Mar 17 18:35:18.717659 kernel: pci 0000:00:16.5:   bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Mar 17 18:35:18.717729 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Mar 17 18:35:18.717957 kernel: pci 0000:00:16.6:   bridge window [mem 0xfbc00000-0xfbcfffff]
Mar 17 18:35:18.718024 kernel: pci 0000:00:16.6:   bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Mar 17 18:35:18.718449 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Mar 17 18:35:18.718517 kernel: pci 0000:00:16.7:   bridge window [mem 0xfb800000-0xfb8fffff]
Mar 17 18:35:18.718587 kernel: pci 0000:00:16.7:   bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Mar 17 18:35:18.718641 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Mar 17 18:35:18.718704 kernel: pci 0000:00:17.0:   bridge window [io  0x6000-0x6fff]
Mar 17 18:35:18.718752 kernel: pci 0000:00:17.0:   bridge window [mem 0xfd300000-0xfd3fffff]
Mar 17 18:35:18.718801 kernel: pci 0000:00:17.0:   bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Mar 17 18:35:18.718848 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Mar 17 18:35:18.718895 kernel: pci 0000:00:17.1:   bridge window [io  0xa000-0xafff]
Mar 17 18:35:18.718941 kernel: pci 0000:00:17.1:   bridge window [mem 0xfcf00000-0xfcffffff]
Mar 17 18:35:18.718987 kernel: pci 0000:00:17.1:   bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Mar 17 18:35:18.719039 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Mar 17 18:35:18.719095 kernel: pci 0000:00:17.2:   bridge window [io  0xe000-0xefff]
Mar 17 18:35:18.719163 kernel: pci 0000:00:17.2:   bridge window [mem 0xfcb00000-0xfcbfffff]
Mar 17 18:35:18.719218 kernel: pci 0000:00:17.2:   bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Mar 17 18:35:18.719271 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Mar 17 18:35:18.719643 kernel: pci 0000:00:17.3:   bridge window [mem 0xfc700000-0xfc7fffff]
Mar 17 18:35:18.719702 kernel: pci 0000:00:17.3:   bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Mar 17 18:35:18.719769 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Mar 17 18:35:18.720160 kernel: pci 0000:00:17.4:   bridge window [mem 0xfc300000-0xfc3fffff]
Mar 17 18:35:18.720215 kernel: pci 0000:00:17.4:   bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Mar 17 18:35:18.720273 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Mar 17 18:35:18.720337 kernel: pci 0000:00:17.5:   bridge window [mem 0xfbf00000-0xfbffffff]
Mar 17 18:35:18.720392 kernel: pci 0000:00:17.5:   bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Mar 17 18:35:18.720440 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Mar 17 18:35:18.720493 kernel: pci 0000:00:17.6:   bridge window [mem 0xfbb00000-0xfbbfffff]
Mar 17 18:35:18.720553 kernel: pci 0000:00:17.6:   bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Mar 17 18:35:18.720611 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Mar 17 18:35:18.720673 kernel: pci 0000:00:17.7:   bridge window [mem 0xfb700000-0xfb7fffff]
Mar 17 18:35:18.720727 kernel: pci 0000:00:17.7:   bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Mar 17 18:35:18.720774 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Mar 17 18:35:18.720825 kernel: pci 0000:00:18.0:   bridge window [io  0x7000-0x7fff]
Mar 17 18:35:18.720885 kernel: pci 0000:00:18.0:   bridge window [mem 0xfd200000-0xfd2fffff]
Mar 17 18:35:18.720947 kernel: pci 0000:00:18.0:   bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Mar 17 18:35:18.720999 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Mar 17 18:35:18.721053 kernel: pci 0000:00:18.1:   bridge window [io  0xb000-0xbfff]
Mar 17 18:35:18.721141 kernel: pci 0000:00:18.1:   bridge window [mem 0xfce00000-0xfcefffff]
Mar 17 18:35:18.721194 kernel: pci 0000:00:18.1:   bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Mar 17 18:35:18.721257 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Mar 17 18:35:18.721304 kernel: pci 0000:00:18.2:   bridge window [mem 0xfca00000-0xfcafffff]
Mar 17 18:35:18.721351 kernel: pci 0000:00:18.2:   bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Mar 17 18:35:18.721408 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Mar 17 18:35:18.721465 kernel: pci 0000:00:18.3:   bridge window [mem 0xfc600000-0xfc6fffff]
Mar 17 18:35:18.721515 kernel: pci 0000:00:18.3:   bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Mar 17 18:35:18.721576 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Mar 17 18:35:18.721633 kernel: pci 0000:00:18.4:   bridge window [mem 0xfc200000-0xfc2fffff]
Mar 17 18:35:18.721681 kernel: pci 0000:00:18.4:   bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Mar 17 18:35:18.721743 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Mar 17 18:35:18.721809 kernel: pci 0000:00:18.5:   bridge window [mem 0xfbe00000-0xfbefffff]
Mar 17 18:35:18.721861 kernel: pci 0000:00:18.5:   bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Mar 17 18:35:18.721924 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Mar 17 18:35:18.721978 kernel: pci 0000:00:18.6:   bridge window [mem 0xfba00000-0xfbafffff]
Mar 17 18:35:18.722036 kernel: pci 0000:00:18.6:   bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Mar 17 18:35:18.722118 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Mar 17 18:35:18.722176 kernel: pci 0000:00:18.7:   bridge window [mem 0xfb600000-0xfb6fffff]
Mar 17 18:35:18.722222 kernel: pci 0000:00:18.7:   bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Mar 17 18:35:18.722268 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Mar 17 18:35:18.722311 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window]
Mar 17 18:35:18.722355 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window]
Mar 17 18:35:18.722409 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window]
Mar 17 18:35:18.722451 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window]
Mar 17 18:35:18.722495 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:35:18.722537 kernel: pci_bus 0000:00: resource 10 [io  0x0000-0x0cf7 window]
Mar 17 18:35:18.722579 kernel: pci_bus 0000:00: resource 11 [io  0x0d00-0xfeff window]
Mar 17 18:35:18.722625 kernel: pci_bus 0000:02: resource 0 [io  0x2000-0x3fff]
Mar 17 18:35:18.722669 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Mar 17 18:35:18.722712 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Mar 17 18:35:18.722754 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Mar 17 18:35:18.722799 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window]
Mar 17 18:35:18.722843 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window]
Mar 17 18:35:18.722885 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window]
Mar 17 18:35:18.722933 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window]
Mar 17 18:35:18.722976 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:35:18.723023 kernel: pci_bus 0000:02: resource 10 [io  0x0000-0x0cf7 window]
Mar 17 18:35:18.723067 kernel: pci_bus 0000:02: resource 11 [io  0x0d00-0xfeff window]
Mar 17 18:35:18.723151 kernel: pci_bus 0000:03: resource 0 [io  0x4000-0x4fff]
Mar 17 18:35:18.723201 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Mar 17 18:35:18.723244 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Mar 17 18:35:18.723291 kernel: pci_bus 0000:04: resource 0 [io  0x8000-0x8fff]
Mar 17 18:35:18.723335 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Mar 17 18:35:18.723378 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Mar 17 18:35:18.723425 kernel: pci_bus 0000:05: resource 0 [io  0xc000-0xcfff]
Mar 17 18:35:18.723467 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Mar 17 18:35:18.723513 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Mar 17 18:35:18.723570 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Mar 17 18:35:18.723627 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Mar 17 18:35:18.723680 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Mar 17 18:35:18.723733 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Mar 17 18:35:18.723786 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Mar 17 18:35:18.723832 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Mar 17 18:35:18.723881 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Mar 17 18:35:18.723925 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Mar 17 18:35:18.723979 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Mar 17 18:35:18.724033 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Mar 17 18:35:18.724102 kernel: pci_bus 0000:0b: resource 0 [io  0x5000-0x5fff]
Mar 17 18:35:18.724151 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Mar 17 18:35:18.724194 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Mar 17 18:35:18.724241 kernel: pci_bus 0000:0c: resource 0 [io  0x9000-0x9fff]
Mar 17 18:35:18.724286 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Mar 17 18:35:18.724332 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Mar 17 18:35:18.724396 kernel: pci_bus 0000:0d: resource 0 [io  0xd000-0xdfff]
Mar 17 18:35:18.724452 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Mar 17 18:35:18.724502 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Mar 17 18:35:18.724556 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Mar 17 18:35:18.724600 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Mar 17 18:35:18.724647 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Mar 17 18:35:18.724693 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Mar 17 18:35:18.724757 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Mar 17 18:35:18.724826 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Mar 17 18:35:18.724877 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Mar 17 18:35:18.724922 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Mar 17 18:35:18.724968 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Mar 17 18:35:18.725013 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Mar 17 18:35:18.725065 kernel: pci_bus 0000:13: resource 0 [io  0x6000-0x6fff]
Mar 17 18:35:18.725499 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Mar 17 18:35:18.725553 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Mar 17 18:35:18.725602 kernel: pci_bus 0000:14: resource 0 [io  0xa000-0xafff]
Mar 17 18:35:18.725804 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Mar 17 18:35:18.725853 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Mar 17 18:35:18.725906 kernel: pci_bus 0000:15: resource 0 [io  0xe000-0xefff]
Mar 17 18:35:18.725961 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Mar 17 18:35:18.726010 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Mar 17 18:35:18.726363 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Mar 17 18:35:18.726428 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Mar 17 18:35:18.726491 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Mar 17 18:35:18.726544 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Mar 17 18:35:18.726600 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Mar 17 18:35:18.726648 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Mar 17 18:35:18.726706 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Mar 17 18:35:18.726754 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Mar 17 18:35:18.726842 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Mar 17 18:35:18.726901 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Mar 17 18:35:18.727001 kernel: pci_bus 0000:1b: resource 0 [io  0x7000-0x7fff]
Mar 17 18:35:18.727066 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Mar 17 18:35:18.727450 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Mar 17 18:35:18.727510 kernel: pci_bus 0000:1c: resource 0 [io  0xb000-0xbfff]
Mar 17 18:35:18.727572 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Mar 17 18:35:18.727655 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Mar 17 18:35:18.727939 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Mar 17 18:35:18.727995 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Mar 17 18:35:18.728046 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Mar 17 18:35:18.728103 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Mar 17 18:35:18.728159 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Mar 17 18:35:18.728205 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Mar 17 18:35:18.728252 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Mar 17 18:35:18.728317 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Mar 17 18:35:18.728388 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Mar 17 18:35:18.728441 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Mar 17 18:35:18.728499 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Mar 17 18:35:18.728565 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Mar 17 18:35:18.728850 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 18:35:18.728863 kernel: PCI: CLS 32 bytes, default 64
Mar 17 18:35:18.728872 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 18:35:18.728879 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Mar 17 18:35:18.728885 kernel: clocksource: Switched to clocksource tsc
Mar 17 18:35:18.728892 kernel: Initialise system trusted keyrings
Mar 17 18:35:18.728898 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 18:35:18.728904 kernel: Key type asymmetric registered
Mar 17 18:35:18.728910 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:35:18.728917 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:35:18.728923 kernel: io scheduler mq-deadline registered
Mar 17 18:35:18.728930 kernel: io scheduler kyber registered
Mar 17 18:35:18.728937 kernel: io scheduler bfq registered
Mar 17 18:35:18.728992 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Mar 17 18:35:18.729046 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.729141 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Mar 17 18:35:18.729195 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.729256 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Mar 17 18:35:18.729311 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.729855 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Mar 17 18:35:18.729948 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.730024 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Mar 17 18:35:18.730470 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.730551 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Mar 17 18:35:18.730607 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.730659 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Mar 17 18:35:18.730707 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.730757 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Mar 17 18:35:18.730804 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.730855 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Mar 17 18:35:18.730915 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.730965 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Mar 17 18:35:18.731024 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731074 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Mar 17 18:35:18.731157 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731221 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Mar 17 18:35:18.731281 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731333 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Mar 17 18:35:18.731379 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731427 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Mar 17 18:35:18.731474 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731523 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Mar 17 18:35:18.731573 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731621 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Mar 17 18:35:18.731671 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731730 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Mar 17 18:35:18.731782 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731837 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Mar 17 18:35:18.731890 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.731939 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Mar 17 18:35:18.732008 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.732068 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Mar 17 18:35:18.732154 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.732206 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Mar 17 18:35:18.732256 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.732327 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Mar 17 18:35:18.732383 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.732542 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Mar 17 18:35:18.732600 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.732663 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Mar 17 18:35:18.733017 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.733077 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Mar 17 18:35:18.733152 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.733230 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Mar 17 18:35:18.733428 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.733481 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Mar 17 18:35:18.733533 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.733871 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Mar 17 18:35:18.733944 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.734012 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Mar 17 18:35:18.734064 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.734160 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Mar 17 18:35:18.734209 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.734258 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Mar 17 18:35:18.734306 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.734356 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Mar 17 18:35:18.734413 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Mar 17 18:35:18.734424 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:35:18.734431 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:35:18.734438 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:35:18.734447 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Mar 17 18:35:18.734456 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:35:18.734463 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:35:18.734515 kernel: rtc_cmos 00:01: registered as rtc0
Mar 17 18:35:18.734566 kernel: rtc_cmos 00:01: setting system clock to 2025-03-17T18:35:18 UTC (1742236518)
Mar 17 18:35:18.734618 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Mar 17 18:35:18.734627 kernel: intel_pstate: CPU model not supported
Mar 17 18:35:18.734635 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:35:18.734645 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:35:18.734652 kernel: Segment Routing with IPv6
Mar 17 18:35:18.734658 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:35:18.734665 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:35:18.734673 kernel: Key type dns_resolver registered
Mar 17 18:35:18.734685 kernel: IPI shorthand broadcast: enabled
Mar 17 18:35:18.734691 kernel: sched_clock: Marking stable (878278124, 238432709)->(1198003928, -81293095)
Mar 17 18:35:18.734697 kernel: registered taskstats version 1
Mar 17 18:35:18.734704 kernel: Loading compiled-in X.509 certificates
Mar 17 18:35:18.734710 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:35:18.734716 kernel: Key type .fscrypt registered
Mar 17 18:35:18.734722 kernel: Key type fscrypt-provisioning registered
Mar 17 18:35:18.734727 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:35:18.734735 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:35:18.734741 kernel: ima: No architecture policies found
Mar 17 18:35:18.734747 kernel: clk: Disabling unused clocks
Mar 17 18:35:18.734753 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:35:18.734759 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:35:18.734765 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:35:18.734771 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:35:18.734778 kernel: Run /init as init process
Mar 17 18:35:18.734784 kernel:   with arguments:
Mar 17 18:35:18.734791 kernel:     /init
Mar 17 18:35:18.734797 kernel:   with environment:
Mar 17 18:35:18.734803 kernel:     HOME=/
Mar 17 18:35:18.734809 kernel:     TERM=linux
Mar 17 18:35:18.734815 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:35:18.734823 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:35:18.734831 systemd[1]: Detected virtualization vmware.
Mar 17 18:35:18.734838 systemd[1]: Detected architecture x86-64.
Mar 17 18:35:18.734845 systemd[1]: Running in initrd.
Mar 17 18:35:18.734851 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:35:18.734857 systemd[1]: Hostname set to <localhost>.
Mar 17 18:35:18.734864 systemd[1]: Initializing machine ID from random generator.
Mar 17 18:35:18.734870 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:35:18.734876 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:35:18.734882 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:35:18.734889 systemd[1]: Reached target paths.target.
Mar 17 18:35:18.734895 systemd[1]: Reached target slices.target.
Mar 17 18:35:18.734902 systemd[1]: Reached target swap.target.
Mar 17 18:35:18.734908 systemd[1]: Reached target timers.target.
Mar 17 18:35:18.734914 systemd[1]: Listening on iscsid.socket.
Mar 17 18:35:18.734920 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:35:18.734927 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:35:18.734935 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:35:18.734941 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:35:18.734949 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:35:18.734955 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:35:18.734961 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:35:18.734968 systemd[1]: Reached target sockets.target.
Mar 17 18:35:18.734974 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:35:18.734980 systemd[1]: Finished network-cleanup.service.
Mar 17 18:35:18.734986 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:35:18.734992 systemd[1]: Starting systemd-journald.service...
Mar 17 18:35:18.734999 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:35:18.735006 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:35:18.735012 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:35:18.735019 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:35:18.735025 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:35:18.735031 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:35:18.735038 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:35:18.735044 kernel: audit: type=1130 audit(1742236518.662:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.735050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:35:18.735059 kernel: audit: type=1130 audit(1742236518.666:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.735069 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:35:18.735078 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:35:18.735091 systemd[1]: Started systemd-resolved.service.
Mar 17 18:35:18.735098 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:35:18.735104 kernel: audit: type=1130 audit(1742236518.697:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.735110 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:35:18.735116 kernel: audit: type=1130 audit(1742236518.702:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.735125 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:35:18.735133 kernel: Bridge firewalling registered
Mar 17 18:35:18.735139 kernel: SCSI subsystem initialized
Mar 17 18:35:18.735149 systemd-journald[217]: Journal started
Mar 17 18:35:18.735186 systemd-journald[217]: Runtime Journal (/run/log/journal/ae86a0155ab74168a37c86979ea9b8db) is 4.8M, max 38.8M, 34.0M free.
Mar 17 18:35:18.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.663050 systemd-modules-load[218]: Inserted module 'overlay'
Mar 17 18:35:18.691479 systemd-resolved[219]: Positive Trust Anchors:
Mar 17 18:35:18.691487 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:35:18.691510 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:35:18.743442 systemd[1]: Started systemd-journald.service.
Mar 17 18:35:18.743459 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:35:18.743476 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:35:18.743485 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:35:18.743493 kernel: audit: type=1130 audit(1742236518.738:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.696514 systemd-resolved[219]: Defaulting to hostname 'linux'.
Mar 17 18:35:18.711942 systemd-modules-load[218]: Inserted module 'br_netfilter'
Mar 17 18:35:18.744077 dracut-cmdline[233]: dracut-dracut-053
Mar 17 18:35:18.744077 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:35:18.747421 systemd-modules-load[218]: Inserted module 'dm_multipath'
Mar 17 18:35:18.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.747864 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:35:18.748445 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:35:18.752126 kernel: audit: type=1130 audit(1742236518.746:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.755024 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:35:18.757823 kernel: audit: type=1130 audit(1742236518.754:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.780100 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:35:18.793102 kernel: iscsi: registered transport (tcp)
Mar 17 18:35:18.809104 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:35:18.809151 kernel: QLogic iSCSI HBA Driver
Mar 17 18:35:18.826027 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:35:18.830105 kernel: audit: type=1130 audit(1742236518.825:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:18.826720 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:35:18.867110 kernel: raid6: avx2x4   gen() 44304 MB/s
Mar 17 18:35:18.883138 kernel: raid6: avx2x4   xor() 17843 MB/s
Mar 17 18:35:18.900111 kernel: raid6: avx2x2   gen() 41310 MB/s
Mar 17 18:35:18.917107 kernel: raid6: avx2x2   xor() 22981 MB/s
Mar 17 18:35:18.934110 kernel: raid6: avx2x1   gen() 38731 MB/s
Mar 17 18:35:18.951106 kernel: raid6: avx2x1   xor() 26896 MB/s
Mar 17 18:35:18.968101 kernel: raid6: sse2x4   gen() 21074 MB/s
Mar 17 18:35:18.985105 kernel: raid6: sse2x4   xor() 11692 MB/s
Mar 17 18:35:19.002107 kernel: raid6: sse2x2   gen() 20153 MB/s
Mar 17 18:35:19.019105 kernel: raid6: sse2x2   xor() 12390 MB/s
Mar 17 18:35:19.036108 kernel: raid6: sse2x1   gen() 16397 MB/s
Mar 17 18:35:19.053407 kernel: raid6: sse2x1   xor()  8484 MB/s
Mar 17 18:35:19.053455 kernel: raid6: using algorithm avx2x4 gen() 44304 MB/s
Mar 17 18:35:19.053468 kernel: raid6: .... xor() 17843 MB/s, rmw enabled
Mar 17 18:35:19.054111 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 18:35:19.064104 kernel: xor: automatically using best checksumming function   avx       
Mar 17 18:35:19.134118 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Mar 17 18:35:19.139549 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:35:19.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:19.140203 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:35:19.143212 kernel: audit: type=1130 audit(1742236519.138:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:19.138000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:35:19.138000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:35:19.151106 systemd-udevd[416]: Using default interface naming scheme 'v252'.
Mar 17 18:35:19.154049 systemd[1]: Started systemd-udevd.service.
Mar 17 18:35:19.154579 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:35:19.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:19.162184 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Mar 17 18:35:19.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:19.178038 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:35:19.178619 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:35:19.253395 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:35:19.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:19.315240 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Mar 17 18:35:19.315282 kernel: vmw_pvscsi: using 64bit dma
Mar 17 18:35:19.315291 kernel: vmw_pvscsi: max_id: 16
Mar 17 18:35:19.315299 kernel: vmw_pvscsi: setting ring_pages to 8
Mar 17 18:35:19.319092 kernel: libata version 3.00 loaded.
Mar 17 18:35:19.322489 kernel: ata_piix 0000:00:07.1: version 2.13
Mar 17 18:35:19.325943 kernel: scsi host0: ata_piix
Mar 17 18:35:19.326034 kernel: scsi host1: ata_piix
Mar 17 18:35:19.326135 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Mar 17 18:35:19.326150 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Mar 17 18:35:19.341768 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI
Mar 17 18:35:19.341817 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Mar 17 18:35:19.359148 kernel: vmw_pvscsi: enabling reqCallThreshold
Mar 17 18:35:19.359164 kernel: vmw_pvscsi: driver-based request coalescing enabled
Mar 17 18:35:19.359177 kernel: vmw_pvscsi: using MSI-X
Mar 17 18:35:19.359187 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Mar 17 18:35:19.359280 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:35:19.359295 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2
Mar 17 18:35:19.359376 kernel: scsi 2:0:0:0: Direct-Access     VMware   Virtual disk     2.0  PQ: 0 ANSI: 6
Mar 17 18:35:19.359396 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Mar 17 18:35:19.496145 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Mar 17 18:35:19.500092 kernel: scsi 1:0:0:0: CD-ROM            NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Mar 17 18:35:19.508308 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 18:35:19.508340 kernel: AES CTR mode by8 optimization enabled
Mar 17 18:35:19.516104 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Mar 17 18:35:19.526359 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Mar 17 18:35:19.542563 kernel: sd 2:0:0:0: [sda] Write Protect is off
Mar 17 18:35:19.542675 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00
Mar 17 18:35:19.542752 kernel: sd 2:0:0:0: [sda] Cache data unavailable
Mar 17 18:35:19.542823 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through
Mar 17 18:35:19.542880 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:35:19.542889 kernel: sd 2:0:0:0: [sda] Attached SCSI disk
Mar 17 18:35:19.567550 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Mar 17 18:35:19.583851 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 18:35:19.583869 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (464)
Mar 17 18:35:19.583885 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Mar 17 18:35:19.567100 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:35:19.570035 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:35:19.579771 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:35:19.579909 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:35:19.580510 systemd[1]: Starting disk-uuid.service...
Mar 17 18:35:19.585222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:35:19.650098 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:35:19.680103 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:35:20.733046 disk-uuid[549]: The operation has completed successfully.
Mar 17 18:35:20.733357 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 17 18:35:20.796944 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:35:20.797239 systemd[1]: Finished disk-uuid.service.
Mar 17 18:35:20.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:20.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:20.797998 systemd[1]: Starting verity-setup.service...
Mar 17 18:35:20.822098 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 17 18:35:21.006143 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:35:21.007344 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:35:21.009765 systemd[1]: Finished verity-setup.service.
Mar 17 18:35:21.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.071099 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:35:21.071350 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:35:21.071949 systemd[1]: Starting afterburn-network-kargs.service...
Mar 17 18:35:21.072412 systemd[1]: Starting ignition-setup.service...
Mar 17 18:35:21.164277 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:35:21.164323 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:35:21.164342 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:35:21.171111 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 18:35:21.181733 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:35:21.190185 systemd[1]: Finished ignition-setup.service.
Mar 17 18:35:21.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.190923 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:35:21.309825 systemd[1]: Finished afterburn-network-kargs.service.
Mar 17 18:35:21.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.310652 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:35:21.368657 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:35:21.369590 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:35:21.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.368000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:35:21.385042 systemd-networkd[736]: lo: Link UP
Mar 17 18:35:21.385050 systemd-networkd[736]: lo: Gained carrier
Mar 17 18:35:21.385553 systemd-networkd[736]: Enumeration completed
Mar 17 18:35:21.390556 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Mar 17 18:35:21.390701 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Mar 17 18:35:21.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.385717 systemd[1]: Started systemd-networkd.service.
Mar 17 18:35:21.385881 systemd[1]: Reached target network.target.
Mar 17 18:35:21.385908 systemd-networkd[736]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Mar 17 18:35:21.386434 systemd[1]: Starting iscsiuio.service...
Mar 17 18:35:21.390223 systemd-networkd[736]: ens192: Link UP
Mar 17 18:35:21.390228 systemd-networkd[736]: ens192: Gained carrier
Mar 17 18:35:21.392070 systemd[1]: Started iscsiuio.service.
Mar 17 18:35:21.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.392829 systemd[1]: Starting iscsid.service...
Mar 17 18:35:21.394726 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:35:21.394726 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Mar 17 18:35:21.394726 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:35:21.394726 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:35:21.394726 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:35:21.394726 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
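[Editor's note] The iscsid warning above spells out the expected format of /etc/iscsi/initiatorname.iscsi. Purely as an illustrative sketch following that message (the IQN below is a made-up example, not a value present on this system), such a file would hold a single InitiatorName= line:

    # /etc/iscsi/initiatorname.iscsi -- hypothetical example in the format iscsid describes above
    InitiatorName=iqn.2004-10.com.example.host01:boot

As the message itself notes, the warning can be ignored when no software-iSCSI targets need to be discovered; iscsid still starts successfully on the next line.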
Mar 17 18:35:21.395829 systemd[1]: Started iscsid.service.
Mar 17 18:35:21.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.396488 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:35:21.405089 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:35:21.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.405390 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:35:21.405601 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:35:21.406107 systemd[1]: Reached target remote-fs.target.
Mar 17 18:35:21.406699 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:35:21.412778 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:35:21.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.475242 ignition[607]: Ignition 2.14.0
Mar 17 18:35:21.475254 ignition[607]: Stage: fetch-offline
Mar 17 18:35:21.475293 ignition[607]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:35:21.475308 ignition[607]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Mar 17 18:35:21.478385 ignition[607]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Mar 17 18:35:21.478629 ignition[607]: parsed url from cmdline: ""
Mar 17 18:35:21.478677 ignition[607]: no config URL provided
Mar 17 18:35:21.478799 ignition[607]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:35:21.478946 ignition[607]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:35:21.479325 ignition[607]: config successfully fetched
Mar 17 18:35:21.479341 ignition[607]: parsing config with SHA512: 8947f2092d57fb3e86fa958af7e4eefd2cb7b5e6604dff022f967667fab4c303f740d6d9c57d7828c55f31e4bb3d4a4435df4d99195c35b7cca2388ffc78e6aa
Mar 17 18:35:21.492066 unknown[607]: fetched base config from "system"
Mar 17 18:35:21.492302 unknown[607]: fetched user config from "vmware"
Mar 17 18:35:21.492920 ignition[607]: fetch-offline: fetch-offline passed
Mar 17 18:35:21.493124 ignition[607]: Ignition finished successfully
Mar 17 18:35:21.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.493829 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:35:21.493999 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 18:35:21.494600 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:35:21.500925 ignition[755]: Ignition 2.14.0
Mar 17 18:35:21.501264 ignition[755]: Stage: kargs
Mar 17 18:35:21.501473 ignition[755]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:35:21.501645 ignition[755]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Mar 17 18:35:21.504334 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Mar 17 18:35:21.505297 ignition[755]: kargs: kargs passed
Mar 17 18:35:21.505464 ignition[755]: Ignition finished successfully
Mar 17 18:35:21.506481 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:35:21.507220 systemd[1]: Starting ignition-disks.service...
Mar 17 18:35:21.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.514198 ignition[761]: Ignition 2.14.0
Mar 17 18:35:21.514464 ignition[761]: Stage: disks
Mar 17 18:35:21.514641 ignition[761]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:35:21.514993 ignition[761]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Mar 17 18:35:21.516718 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Mar 17 18:35:21.518327 ignition[761]: disks: disks passed
Mar 17 18:35:21.518493 ignition[761]: Ignition finished successfully
Mar 17 18:35:21.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.519174 systemd[1]: Finished ignition-disks.service.
Mar 17 18:35:21.519327 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:35:21.519421 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:35:21.519506 systemd[1]: Reached target local-fs.target.
Mar 17 18:35:21.519586 systemd[1]: Reached target sysinit.target.
Mar 17 18:35:21.519676 systemd[1]: Reached target basic.target.
Mar 17 18:35:21.520370 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:35:21.612917 systemd-fsck[769]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks
Mar 17 18:35:21.614587 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:35:21.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.615207 systemd[1]: Mounting sysroot.mount...
Mar 17 18:35:21.622046 systemd[1]: Mounted sysroot.mount.
Mar 17 18:35:21.622338 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:35:21.622252 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:35:21.623429 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:35:21.623813 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:35:21.623843 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:35:21.623863 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:35:21.625766 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:35:21.626555 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:35:21.629953 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:35:21.633911 initrd-setup-root[787]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:35:21.636564 initrd-setup-root[795]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:35:21.639012 initrd-setup-root[803]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:35:21.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.690036 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:35:21.690597 systemd[1]: Starting ignition-mount.service...
Mar 17 18:35:21.691042 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:35:21.694323 bash[820]: umount: /sysroot/usr/share/oem: not mounted.
Mar 17 18:35:21.700382 ignition[821]: INFO     : Ignition 2.14.0
Mar 17 18:35:21.700662 ignition[821]: INFO     : Stage: mount
Mar 17 18:35:21.700837 ignition[821]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:35:21.700988 ignition[821]: DEBUG    : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Mar 17 18:35:21.702458 ignition[821]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Mar 17 18:35:21.704225 ignition[821]: INFO     : mount: mount passed
Mar 17 18:35:21.704375 ignition[821]: INFO     : Ignition finished successfully
Mar 17 18:35:21.704916 systemd[1]: Finished ignition-mount.service.
Mar 17 18:35:21.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:21.802263 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:35:21.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:22.027246 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:35:22.062134 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (830)
Mar 17 18:35:22.062172 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 18:35:22.062182 kernel: BTRFS info (device sda6): using free space tree
Mar 17 18:35:22.063779 kernel: BTRFS info (device sda6): has skinny extents
Mar 17 18:35:22.067103 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 17 18:35:22.069210 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:35:22.070007 systemd[1]: Starting ignition-files.service...
Mar 17 18:35:22.079756 ignition[850]: INFO     : Ignition 2.14.0
Mar 17 18:35:22.079756 ignition[850]: INFO     : Stage: files
Mar 17 18:35:22.080160 ignition[850]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:35:22.080160 ignition[850]: DEBUG    : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Mar 17 18:35:22.081404 ignition[850]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Mar 17 18:35:22.083557 ignition[850]: DEBUG    : files: compiled without relabeling support, skipping
Mar 17 18:35:22.084131 ignition[850]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Mar 17 18:35:22.084131 ignition[850]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 18:35:22.086156 ignition[850]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 18:35:22.086400 ignition[850]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Mar 17 18:35:22.087344 unknown[850]: wrote ssh authorized keys file for user: core
Mar 17 18:35:22.087833 ignition[850]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 18:35:22.088342 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:35:22.088342 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 18:35:22.129701 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 18:35:22.258928 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 18:35:22.259222 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:35:22.259222 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 18:35:22.816284 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 18:35:22.888019 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 18:35:22.888297 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Mar 17 18:35:22.888596 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 18:35:22.888781 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:35:22.889034 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 18:35:22.889224 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:35:22.889456 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 18:35:22.889648 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:35:22.889893 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 18:35:22.894624 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:35:22.894844 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 18:35:22.895033 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:35:22.895302 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:35:22.903120 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Mar 17 18:35:22.903341 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Mar 17 18:35:22.906813 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(c): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3937296949"
Mar 17 18:35:22.907004 ignition[850]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem3937296949": device or resource busy
Mar 17 18:35:22.907004 ignition[850]: ERROR    : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3937296949", trying btrfs: device or resource busy
Mar 17 18:35:22.907004 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(d): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem3937296949"
Mar 17 18:35:22.908904 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3937296949"
Mar 17 18:35:22.917224 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(e): [started]  unmounting "/mnt/oem3937296949"
Mar 17 18:35:22.917877 systemd[1]: mnt-oem3937296949.mount: Deactivated successfully.
Mar 17 18:35:22.918141 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3937296949"
Mar 17 18:35:22.918307 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Mar 17 18:35:22.918307 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:35:22.918307 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(f): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 18:35:22.940276 systemd-networkd[736]: ens192: Gained IPv6LL
Mar 17 18:35:23.197887 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Mar 17 18:35:23.340158 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 18:35:23.340446 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [started]  writing file "/sysroot/etc/systemd/network/00-vmware.network"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(11): [started]  processing unit "vmtoolsd.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(11): [finished] processing unit "vmtoolsd.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(12): [started]  processing unit "prepare-helm.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(12): op(13): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(12): [finished] processing unit "prepare-helm.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(14): [started]  processing unit "coreos-metadata.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(14): op(15): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(14): op(15): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(14): [finished] processing unit "coreos-metadata.service"
Mar 17 18:35:23.340628 ignition[850]: INFO     : files: op(16): [started]  setting preset to enabled for "vmtoolsd.service"
Mar 17 18:35:23.342547 ignition[850]: INFO     : files: op(16): [finished] setting preset to enabled for "vmtoolsd.service"
Mar 17 18:35:23.342547 ignition[850]: INFO     : files: op(17): [started]  setting preset to enabled for "prepare-helm.service"
Mar 17 18:35:23.342547 ignition[850]: INFO     : files: op(17): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:35:23.342547 ignition[850]: INFO     : files: op(18): [started]  setting preset to disabled for "coreos-metadata.service"
Mar 17 18:35:23.342547 ignition[850]: INFO     : files: op(18): op(19): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:35:23.467895 ignition[850]: INFO     : files: op(18): op(19): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:35:23.468135 ignition[850]: INFO     : files: op(18): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:35:23.468135 ignition[850]: INFO     : files: createResultFile: createFiles: op(1a): [started]  writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:35:23.468135 ignition[850]: INFO     : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:35:23.468135 ignition[850]: INFO     : files: files passed
Mar 17 18:35:23.468135 ignition[850]: INFO     : Ignition finished successfully
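[Editor's note] Among the files written above, op(10) creates /sysroot/etc/systemd/network/00-vmware.network, but its contents are not captured in this log. Purely as an illustrative sketch of the systemd-networkd .network format — assuming a DHCP-configured ens192 interface, not the actual Ignition-provided contents — such a unit could look like:

    # 00-vmware.network -- hypothetical sketch, not the file Ignition wrote here
    [Match]
    Name=ens192

    [Network]
    DHCP=yes

The real file is whatever the user's Ignition config supplied for the VMware NIC.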
Mar 17 18:35:23.473165 kernel: kauditd_printk_skb: 24 callbacks suppressed
Mar 17 18:35:23.473201 kernel: audit: type=1130 audit(1742236523.469:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.470371 systemd[1]: Finished ignition-files.service.
Mar 17 18:35:23.471051 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:35:23.471186 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:35:23.471702 systemd[1]: Starting ignition-quench.service...
Mar 17 18:35:23.476825 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:35:23.476879 systemd[1]: Finished ignition-quench.service.
Mar 17 18:35:23.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.482783 kernel: audit: type=1130 audit(1742236523.476:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.482809 kernel: audit: type=1131 audit(1742236523.476:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.483947 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:35:23.484717 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:35:23.484909 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:35:23.488118 kernel: audit: type=1130 audit(1742236523.483:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.488654 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:35:23.499293 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:35:23.499344 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:35:23.499789 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:35:23.500006 systemd[1]: Reached target initrd.target.
Mar 17 18:35:23.507374 kernel: audit: type=1130 audit(1742236523.498:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.507392 kernel: audit: type=1131 audit(1742236523.498:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.500155 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:35:23.500736 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:35:23.509015 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:35:23.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.509569 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:35:23.512436 kernel: audit: type=1130 audit(1742236523.508:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.516508 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:35:23.516564 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:35:23.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.517179 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:35:23.521613 kernel: audit: type=1130 audit(1742236523.515:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.521627 kernel: audit: type=1131 audit(1742236523.515:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.521684 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:35:23.521905 systemd[1]: Stopped target timers.target.
Mar 17 18:35:23.522199 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:35:23.522353 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:35:23.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.522622 systemd[1]: Stopped target initrd.target.
Mar 17 18:35:23.525048 systemd[1]: Stopped target basic.target.
Mar 17 18:35:23.525160 kernel: audit: type=1131 audit(1742236523.521:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.525261 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:35:23.525469 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:35:23.525672 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:35:23.525879 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:35:23.526073 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:35:23.526289 systemd[1]: Stopped target sysinit.target.
Mar 17 18:35:23.526486 systemd[1]: Stopped target local-fs.target.
Mar 17 18:35:23.526681 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:35:23.526878 systemd[1]: Stopped target swap.target.
Mar 17 18:35:23.527071 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:35:23.527125 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:35:23.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.527307 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:35:23.527441 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:35:23.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.527463 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:35:23.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.527619 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:35:23.527640 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:35:23.527782 systemd[1]: Stopped target paths.target.
Mar 17 18:35:23.527922 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:35:23.531100 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:35:23.531225 systemd[1]: Stopped target slices.target.
Mar 17 18:35:23.531382 systemd[1]: Stopped target sockets.target.
Mar 17 18:35:23.531547 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:35:23.531562 systemd[1]: Closed iscsid.socket.
Mar 17 18:35:23.531692 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:35:23.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.531706 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:35:23.531854 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:35:23.531875 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:35:23.532015 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:35:23.532035 systemd[1]: Stopped ignition-files.service.
Mar 17 18:35:23.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.532558 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:35:23.533372 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:35:23.533469 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:35:23.533502 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:35:23.533618 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:35:23.533644 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:35:23.537006 ignition[889]: INFO     : Ignition 2.14.0
Mar 17 18:35:23.537006 ignition[889]: INFO     : Stage: umount
Mar 17 18:35:23.537006 ignition[889]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 18:35:23.537006 ignition[889]: DEBUG    : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Mar 17 18:35:23.539329 ignition[889]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Mar 17 18:35:23.540843 ignition[889]: INFO     : umount: umount passed
Mar 17 18:35:23.540843 ignition[889]: INFO     : Ignition finished successfully
Mar 17 18:35:23.541186 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:35:23.541381 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:35:23.541649 systemd[1]: Stopped target network.target.
Mar 17 18:35:23.541846 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:35:23.541871 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:35:23.542220 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:35:23.542243 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:35:23.542568 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:35:23.542590 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:35:23.542974 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:35:23.543280 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:35:23.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.545618 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:35:23.545800 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:35:23.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.546332 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:35:23.546353 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:35:23.547120 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:35:23.547382 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:35:23.547412 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:35:23.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.547858 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Mar 17 18:35:23.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.547888 systemd[1]: Stopped afterburn-network-kargs.service.
Mar 17 18:35:23.548000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:35:23.548712 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:35:23.548736 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:35:23.549455 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:35:23.549480 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:35:23.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.549869 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:35:23.551913 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:35:23.555274 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:35:23.555492 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:35:23.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.556194 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:35:23.556401 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:35:23.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.556886 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:35:23.557072 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:35:23.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.557401 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:35:23.557558 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:35:23.557778 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:35:23.557930 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:35:23.558148 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:35:23.558296 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:35:23.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.558552 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:35:23.558700 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:35:23.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.558951 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:35:23.558000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:35:23.559168 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:35:23.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.559816 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:35:23.560168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 18:35:23.560337 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Mar 17 18:35:23.560620 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:35:23.560774 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:35:23.560994 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:35:23.561181 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:35:23.562138 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 18:35:23.563257 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:35:23.563463 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:35:23.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.568192 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:35:23.750235 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:35:23.750293 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:35:23.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.750591 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:35:23.750698 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:35:23.750721 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:35:23.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:23.751289 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:35:23.757773 systemd[1]: Switching root.
Mar 17 18:35:23.774791 iscsid[741]: iscsid shutting down.
Mar 17 18:35:23.774936 systemd-journald[217]: Journal stopped
Mar 17 18:35:27.424270 systemd-journald[217]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:35:27.424300 kernel: SELinux:  Class mctp_socket not defined in policy.
Mar 17 18:35:27.424311 kernel: SELinux:  Class anon_inode not defined in policy.
Mar 17 18:35:27.424317 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:35:27.424323 kernel: SELinux:  policy capability network_peer_controls=1
Mar 17 18:35:27.424330 kernel: SELinux:  policy capability open_perms=1
Mar 17 18:35:27.424336 kernel: SELinux:  policy capability extended_socket_class=1
Mar 17 18:35:27.424342 kernel: SELinux:  policy capability always_check_network=0
Mar 17 18:35:27.424348 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar 17 18:35:27.424354 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar 17 18:35:27.424360 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Mar 17 18:35:27.424365 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Mar 17 18:35:27.424373 systemd[1]: Successfully loaded SELinux policy in 69.308ms.
Mar 17 18:35:27.424382 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.028ms.
Mar 17 18:35:27.424390 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:35:27.424397 systemd[1]: Detected virtualization vmware.
Mar 17 18:35:27.424404 systemd[1]: Detected architecture x86-64.
Mar 17 18:35:27.424410 systemd[1]: Detected first boot.
Mar 17 18:35:27.424417 systemd[1]: Initializing machine ID from random generator.
Mar 17 18:35:27.424424 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:35:27.424430 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:35:27.424439 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:35:27.424447 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:35:27.424460 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:35:27.424469 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:35:27.424476 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:35:27.424483 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:35:27.424490 systemd[1]: Stopped iscsid.service.
Mar 17 18:35:27.424496 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:35:27.424503 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:35:27.424509 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:35:27.424517 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:35:27.424524 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:35:27.424530 systemd[1]: Created slice system-getty.slice.
Mar 17 18:35:27.424537 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:35:27.424543 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:35:27.424550 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:35:27.424556 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:35:27.424562 systemd[1]: Created slice user.slice.
Mar 17 18:35:27.424570 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:35:27.424577 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:35:27.424585 systemd[1]: Set up automount boot.automount.
Mar 17 18:35:27.424592 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:35:27.424599 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:35:27.424607 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:35:27.424618 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:35:27.424628 systemd[1]: Reached target integritysetup.target.
Mar 17 18:35:27.424637 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:35:27.424648 systemd[1]: Reached target remote-fs.target.
Mar 17 18:35:27.424658 systemd[1]: Reached target slices.target.
Mar 17 18:35:27.424668 systemd[1]: Reached target swap.target.
Mar 17 18:35:27.424678 systemd[1]: Reached target torcx.target.
Mar 17 18:35:27.424688 systemd[1]: Reached target veritysetup.target.
Mar 17 18:35:27.424699 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:35:27.424711 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:35:27.424719 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:35:27.424726 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:35:27.424732 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:35:27.424739 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:35:27.424747 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:35:27.424754 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:35:27.424762 systemd[1]: Mounting media.mount...
Mar 17 18:35:27.424769 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:27.424776 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:35:27.424783 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:35:27.424790 systemd[1]: Mounting tmp.mount...
Mar 17 18:35:27.424797 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:35:27.424804 systemd[1]: Starting ignition-delete-config.service...
Mar 17 18:35:27.424811 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:35:27.424818 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:35:27.424826 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:35:27.424833 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:35:27.424840 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:35:27.424847 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:35:27.424854 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:35:27.424862 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:35:27.424869 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 18:35:27.424876 systemd[1]: Stopped systemd-fsck-root.service.
Mar 17 18:35:27.424883 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 18:35:27.424891 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 18:35:27.424898 systemd[1]: Stopped systemd-journald.service.
Mar 17 18:35:27.424905 systemd[1]: Starting systemd-journald.service...
Mar 17 18:35:27.424911 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:35:27.424918 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:35:27.424926 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:35:27.424933 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:35:27.424940 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 18:35:27.424947 systemd[1]: Stopped verity-setup.service.
Mar 17 18:35:27.424955 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:27.424962 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:35:27.424972 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:35:27.424985 systemd[1]: Mounted media.mount.
Mar 17 18:35:27.428101 systemd-journald[1002]: Journal started
Mar 17 18:35:27.428149 systemd-journald[1002]: Runtime Journal (/run/log/journal/f19441b35169410caeafc7d4b6427138) is 4.8M, max 38.8M, 34.0M free.
Mar 17 18:35:27.428179 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:35:27.428195 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:35:27.428205 systemd[1]: Mounted tmp.mount.
Mar 17 18:35:24.057000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 18:35:24.229000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:35:24.229000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:35:24.229000 audit: BPF prog-id=10 op=LOAD
Mar 17 18:35:24.229000 audit: BPF prog-id=10 op=UNLOAD
Mar 17 18:35:24.229000 audit: BPF prog-id=11 op=LOAD
Mar 17 18:35:24.229000 audit: BPF prog-id=11 op=UNLOAD
Mar 17 18:35:24.443000 audit[922]: AVC avc:  denied  { associate } for  pid=922 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:35:24.443000 audit[922]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8ac a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:24.443000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:35:24.445000 audit[922]: AVC avc:  denied  { associate } for  pid=922 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:35:24.445000 audit[922]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d985 a2=1ed a3=0 items=2 ppid=905 pid=922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:24.445000 audit: CWD cwd="/"
Mar 17 18:35:24.445000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:24.445000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:24.445000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:35:27.305000 audit: BPF prog-id=12 op=LOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=13 op=LOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=14 op=LOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=15 op=LOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=12 op=UNLOAD
Mar 17 18:35:27.305000 audit: BPF prog-id=16 op=LOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=17 op=LOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=13 op=UNLOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=14 op=UNLOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=18 op=LOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=15 op=UNLOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=19 op=LOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=20 op=LOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=16 op=UNLOAD
Mar 17 18:35:27.306000 audit: BPF prog-id=17 op=UNLOAD
Mar 17 18:35:27.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.309000 audit: BPF prog-id=18 op=UNLOAD
Mar 17 18:35:27.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.389000 audit: BPF prog-id=21 op=LOAD
Mar 17 18:35:27.389000 audit: BPF prog-id=22 op=LOAD
Mar 17 18:35:27.389000 audit: BPF prog-id=23 op=LOAD
Mar 17 18:35:27.389000 audit: BPF prog-id=19 op=UNLOAD
Mar 17 18:35:27.389000 audit: BPF prog-id=20 op=UNLOAD
Mar 17 18:35:27.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.420000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:35:27.420000 audit[1002]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc09d375a0 a2=4000 a3=7ffc09d3763c items=0 ppid=1 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:27.420000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:35:27.303095 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:35:24.443182 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:35:27.303104 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Mar 17 18:35:24.443655 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:35:27.308130 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 18:35:24.443667 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:35:24.443688 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Mar 17 18:35:24.443695 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="skipped missing lower profile" missing profile=oem
Mar 17 18:35:24.443716 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Mar 17 18:35:24.443725 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Mar 17 18:35:24.443874 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Mar 17 18:35:27.432720 jq[989]: true
Mar 17 18:35:24.443903 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Mar 17 18:35:24.443912 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Mar 17 18:35:24.444448 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Mar 17 18:35:24.444479 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Mar 17 18:35:24.444494 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
Mar 17 18:35:24.444505 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Mar 17 18:35:24.444516 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
Mar 17 18:35:24.444524 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:24Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Mar 17 18:35:26.854648 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:26Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:35:26.854808 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:26Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:35:26.854873 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:26Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:35:26.855124 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:26Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Mar 17 18:35:26.855251 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:26Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Mar 17 18:35:26.855297 /usr/lib/systemd/system-generators/torcx-generator[922]: time="2025-03-17T18:35:26Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Mar 17 18:35:27.434168 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:35:27.434191 systemd[1]: Started systemd-journald.service.
Mar 17 18:35:27.434242 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 18:35:27.434356 systemd[1]: Finished modprobe@configfs.service.
Mar 17 18:35:27.434592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:35:27.434675 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:35:27.434907 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:35:27.435009 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:35:27.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.435277 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:35:27.435364 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:35:27.436561 systemd[1]: Mounting sys-kernel-config.mount...
Mar 17 18:35:27.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.443320 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:35:27.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.443847 systemd[1]: Finished systemd-network-generator.service.
Mar 17 18:35:27.444973 systemd[1]: Finished systemd-remount-fs.service.
Mar 17 18:35:27.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.445254 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:35:27.445439 systemd[1]: Reached target network-pre.target.
Mar 17 18:35:27.445555 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 18:35:27.449444 jq[1023]: true
Mar 17 18:35:27.457767 systemd[1]: Starting systemd-hwdb-update.service...
Mar 17 18:35:27.458767 systemd[1]: Starting systemd-journal-flush.service...
Mar 17 18:35:27.458905 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:35:27.459694 systemd[1]: Starting systemd-random-seed.service...
Mar 17 18:35:27.460600 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:35:27.464094 kernel: fuse: init (API version 7.34)
Mar 17 18:35:27.464277 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:35:27.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.465365 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:35:27.466291 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 18:35:27.466389 systemd[1]: Finished modprobe@fuse.service.
Mar 17 18:35:27.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.467589 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Mar 17 18:35:27.470305 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:35:27.473665 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:35:27.473761 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:35:27.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.473983 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:35:27.474104 kernel: loop: module loaded
Mar 17 18:35:27.484761 systemd-journald[1002]: Time spent on flushing to /var/log/journal/f19441b35169410caeafc7d4b6427138 is 23.980ms for 2024 entries.
Mar 17 18:35:27.484761 systemd-journald[1002]: System Journal (/var/log/journal/f19441b35169410caeafc7d4b6427138) is 8.0M, max 584.8M, 576.8M free.
Mar 17 18:35:27.520023 systemd-journald[1002]: Received client request to flush runtime journal.
Mar 17 18:35:27.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.486577 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:35:27.486755 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:35:27.501387 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:35:27.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.521167 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:35:27.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.565635 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:35:27.566715 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:35:27.577546 udevadm[1048]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 18:35:27.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.644731 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:35:27.645771 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:35:27.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.722291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:35:27.855003 ignition[1042]: Ignition 2.14.0
Mar 17 18:35:27.855220 ignition[1042]: deleting config from guestinfo properties
Mar 17 18:35:27.858404 ignition[1042]: Successfully deleted config
Mar 17 18:35:27.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:27.859103 systemd[1]: Finished ignition-delete-config.service.
Mar 17 18:35:28.407063 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:35:28.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.406000 audit: BPF prog-id=24 op=LOAD
Mar 17 18:35:28.406000 audit: BPF prog-id=25 op=LOAD
Mar 17 18:35:28.406000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 18:35:28.406000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 18:35:28.408372 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:35:28.421032 systemd-udevd[1056]: Using default interface naming scheme 'v252'.
Mar 17 18:35:28.531527 kernel: kauditd_printk_skb: 114 callbacks suppressed
Mar 17 18:35:28.531601 kernel: audit: type=1130 audit(1742236528.526:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.531621 kernel: audit: type=1334 audit(1742236528.529:151): prog-id=26 op=LOAD
Mar 17 18:35:28.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.529000 audit: BPF prog-id=26 op=LOAD
Mar 17 18:35:28.527373 systemd[1]: Started systemd-udevd.service.
Mar 17 18:35:28.531918 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:35:28.546176 kernel: audit: type=1334 audit(1742236528.541:152): prog-id=27 op=LOAD
Mar 17 18:35:28.546236 kernel: audit: type=1334 audit(1742236528.542:153): prog-id=28 op=LOAD
Mar 17 18:35:28.546250 kernel: audit: type=1334 audit(1742236528.543:154): prog-id=29 op=LOAD
Mar 17 18:35:28.541000 audit: BPF prog-id=27 op=LOAD
Mar 17 18:35:28.542000 audit: BPF prog-id=28 op=LOAD
Mar 17 18:35:28.543000 audit: BPF prog-id=29 op=LOAD
Mar 17 18:35:28.545690 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:35:28.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.569594 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:35:28.572791 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Mar 17 18:35:28.573140 kernel: audit: type=1130 audit(1742236528.568:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.608099 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 18:35:28.612100 kernel: ACPI: button: Power Button [PWRF]
Mar 17 18:35:28.672987 systemd-networkd[1058]: lo: Link UP
Mar 17 18:35:28.672992 systemd-networkd[1058]: lo: Gained carrier
Mar 17 18:35:28.673569 systemd-networkd[1058]: Enumeration completed
Mar 17 18:35:28.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.673639 systemd-networkd[1058]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Mar 17 18:35:28.673653 systemd[1]: Started systemd-networkd.service.
Mar 17 18:35:28.677130 kernel: audit: type=1130 audit(1742236528.672:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.679447 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Mar 17 18:35:28.679693 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Mar 17 18:35:28.679824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
Mar 17 18:35:28.680936 systemd-networkd[1058]: ens192: Link UP
Mar 17 18:35:28.681047 systemd-networkd[1058]: ens192: Gained carrier
Mar 17 18:35:28.683105 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16
Mar 17 18:35:28.695175 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Mar 17 18:35:28.695299 kernel: audit: type=1400 audit(1742236528.688:157): avc:  denied  { confidentiality } for  pid=1066 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:35:28.695334 kernel: audit: type=1300 audit(1742236528.688:157): arch=c000003e syscall=175 success=yes exit=0 a0=555932777eb0 a1=338ac a2=7fe228224bc5 a3=5 items=110 ppid=1056 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:28.688000 audit[1066]: AVC avc:  denied  { confidentiality } for  pid=1066 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Mar 17 18:35:28.688000 audit[1066]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555932777eb0 a1=338ac a2=7fe228224bc5 a3=5 items=110 ppid=1056 pid=1066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:28.688000 audit: CWD cwd="/"
Mar 17 18:35:28.699655 kernel: audit: type=1307 audit(1742236528.688:157): cwd="/"
Mar 17 18:35:28.688000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=1 name=(null) inode=24656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=2 name=(null) inode=24656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=3 name=(null) inode=24657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=4 name=(null) inode=24656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=5 name=(null) inode=24658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=6 name=(null) inode=24656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=7 name=(null) inode=24659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=8 name=(null) inode=24659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=9 name=(null) inode=24660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=10 name=(null) inode=24659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=11 name=(null) inode=24661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=12 name=(null) inode=24659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=13 name=(null) inode=24662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=14 name=(null) inode=24659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=15 name=(null) inode=24663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=16 name=(null) inode=24659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=17 name=(null) inode=24664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=18 name=(null) inode=24656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=19 name=(null) inode=24665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=20 name=(null) inode=24665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=21 name=(null) inode=24666 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=22 name=(null) inode=24665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=23 name=(null) inode=24667 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=24 name=(null) inode=24665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=25 name=(null) inode=24668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=26 name=(null) inode=24665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=27 name=(null) inode=24669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=28 name=(null) inode=24665 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=29 name=(null) inode=24670 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=30 name=(null) inode=24656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=31 name=(null) inode=24671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=32 name=(null) inode=24671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=33 name=(null) inode=24672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=34 name=(null) inode=24671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=35 name=(null) inode=24673 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=36 name=(null) inode=24671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=37 name=(null) inode=24674 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=38 name=(null) inode=24671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=39 name=(null) inode=24675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=40 name=(null) inode=24671 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=41 name=(null) inode=24676 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=42 name=(null) inode=24656 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=43 name=(null) inode=24677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=44 name=(null) inode=24677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=45 name=(null) inode=24678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=46 name=(null) inode=24677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=47 name=(null) inode=24679 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=48 name=(null) inode=24677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=49 name=(null) inode=24680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=50 name=(null) inode=24677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=51 name=(null) inode=24681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=52 name=(null) inode=24677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=53 name=(null) inode=24682 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=55 name=(null) inode=24683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=56 name=(null) inode=24683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=57 name=(null) inode=24684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=58 name=(null) inode=24683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=59 name=(null) inode=24685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=60 name=(null) inode=24683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=61 name=(null) inode=24686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=62 name=(null) inode=24686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=63 name=(null) inode=24687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=64 name=(null) inode=24686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=65 name=(null) inode=24688 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=66 name=(null) inode=24686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=67 name=(null) inode=24689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=68 name=(null) inode=24686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=69 name=(null) inode=24690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=70 name=(null) inode=24686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=71 name=(null) inode=24691 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=72 name=(null) inode=24683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=73 name=(null) inode=24692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=74 name=(null) inode=24692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=75 name=(null) inode=24693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=76 name=(null) inode=24692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=77 name=(null) inode=24694 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=78 name=(null) inode=24692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=79 name=(null) inode=24695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=80 name=(null) inode=24692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=81 name=(null) inode=24696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=82 name=(null) inode=24692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=83 name=(null) inode=24697 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=84 name=(null) inode=24683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=85 name=(null) inode=24698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=86 name=(null) inode=24698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=87 name=(null) inode=24699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=88 name=(null) inode=24698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=89 name=(null) inode=24700 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=90 name=(null) inode=24698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=91 name=(null) inode=24701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=92 name=(null) inode=24698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=93 name=(null) inode=24702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=94 name=(null) inode=24698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=95 name=(null) inode=24703 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=96 name=(null) inode=24683 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=97 name=(null) inode=24704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=98 name=(null) inode=24704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=99 name=(null) inode=24705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=100 name=(null) inode=24704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=101 name=(null) inode=24706 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=102 name=(null) inode=24704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=103 name=(null) inode=24707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=104 name=(null) inode=24704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=105 name=(null) inode=24708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=106 name=(null) inode=24704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=107 name=(null) inode=24709 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PATH item=109 name=(null) inode=24710 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:28.688000 audit: PROCTITLE proctitle="(udev-worker)"
Mar 17 18:35:28.704111 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Mar 17 18:35:28.706094 kernel: Guest personality initialized and is active
Mar 17 18:35:28.709613 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 18:35:28.709659 kernel: Initialized host personality
Mar 17 18:35:28.749105 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 18:35:28.751349 (udev-worker)[1068]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Mar 17 18:35:28.756114 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 18:35:28.802835 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:35:28.807309 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 18:35:28.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.808467 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:35:28.828760 lvm[1089]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:35:28.853684 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:35:28.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.853884 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:35:28.854877 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:35:28.857884 lvm[1090]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:35:28.879630 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:35:28.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.879820 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:35:28.879929 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:35:28.879947 systemd[1]: Reached target local-fs.target.
Mar 17 18:35:28.880037 systemd[1]: Reached target machines.target.
Mar 17 18:35:28.881054 systemd[1]: Starting ldconfig.service...
Mar 17 18:35:28.892149 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:35:28.892188 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:35:28.893052 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:35:28.893846 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:35:28.894693 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:35:28.895506 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:35:28.934291 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1092 (bootctl)
Mar 17 18:35:28.934570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:35:28.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:28.935375 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:35:28.945578 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:35:28.962693 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:35:28.962827 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:35:28.979109 kernel: loop0: detected capacity change from 0 to 210664
Mar 17 18:35:29.883667 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:35:29.884402 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:35:29.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:29.891110 systemd-fsck[1102]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:35:29.891110 systemd-fsck[1102]: /dev/sda1: 789 files, 119299/258078 clusters
Mar 17 18:35:29.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:29.892648 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:35:29.894017 systemd[1]: Mounting boot.mount...
Mar 17 18:35:29.904079 systemd[1]: Mounted boot.mount.
Mar 17 18:35:29.906098 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:35:29.918150 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:35:29.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:29.931102 kernel: loop1: detected capacity change from 0 to 210664
Mar 17 18:35:30.050401 (sd-sysext)[1106]: Using extensions 'kubernetes'.
Mar 17 18:35:30.050671 (sd-sysext)[1106]: Merged extensions into '/usr'.
Mar 17 18:35:30.063585 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.065042 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:35:30.067598 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:35:30.068781 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:35:30.070044 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:35:30.070384 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.070490 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:35:30.070596 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.072809 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:35:30.073245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:35:30.073367 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:35:30.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.073770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:35:30.073848 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:35:30.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.074263 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:35:30.074362 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:35:30.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.074799 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:35:30.074901 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.075540 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:35:30.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.076565 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:35:30.077421 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:35:30.083954 systemd[1]: Reloading.
Mar 17 18:35:30.090741 systemd-tmpfiles[1113]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:35:30.095643 systemd-tmpfiles[1113]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:35:30.101282 systemd-tmpfiles[1113]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:35:30.130711 /usr/lib/systemd/system-generators/torcx-generator[1132]: time="2025-03-17T18:35:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:35:30.130735 /usr/lib/systemd/system-generators/torcx-generator[1132]: time="2025-03-17T18:35:30Z" level=info msg="torcx already run"
Mar 17 18:35:30.192775 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:35:30.192794 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:35:30.206522 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:35:30.242000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:35:30.242000 audit: BPF prog-id=26 op=UNLOAD
Mar 17 18:35:30.243000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:35:30.243000 audit: BPF prog-id=27 op=UNLOAD
Mar 17 18:35:30.243000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:35:30.243000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:35:30.243000 audit: BPF prog-id=28 op=UNLOAD
Mar 17 18:35:30.243000 audit: BPF prog-id=29 op=UNLOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=35 op=LOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=24 op=UNLOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=25 op=UNLOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=36 op=LOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=37 op=LOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=38 op=LOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:35:30.244000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:35:30.252451 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.253218 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:35:30.254135 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:35:30.255546 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:35:30.255664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.255728 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:35:30.255812 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.256304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:35:30.256616 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:35:30.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.257151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:35:30.257281 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:35:30.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.257727 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:35:30.257851 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:35:30.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.258299 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:35:30.258365 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.259490 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.260522 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:35:30.261679 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:35:30.262853 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:35:30.263130 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.263215 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:35:30.263285 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.263838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:35:30.264147 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:35:30.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.264658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:35:30.264818 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:35:30.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.265359 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:35:30.265497 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:35:30.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.265973 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:35:30.266045 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.268064 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.268903 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:35:30.269776 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:35:30.270622 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:35:30.271459 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:35:30.271618 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.271699 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:35:30.272604 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:35:30.272793 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:35:30.273414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:35:30.273500 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:35:30.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.274370 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:35:30.274463 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:35:30.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.275480 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:35:30.275554 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:35:30.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.275878 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:35:30.275981 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:35:30.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.276386 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:35:30.276448 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:35:30.277466 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:35:30.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.487635 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:35:30.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.488731 systemd[1]: Starting audit-rules.service...
Mar 17 18:35:30.489676 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:35:30.490527 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:35:30.492000 audit: BPF prog-id=39 op=LOAD
Mar 17 18:35:30.493000 audit: BPF prog-id=40 op=LOAD
Mar 17 18:35:30.493701 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:35:30.495011 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:35:30.495828 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:35:30.504000 audit[1209]: SYSTEM_BOOT pid=1209 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.505938 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:35:30.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.515951 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:35:30.516115 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:35:30.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.559415 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:35:30.559578 systemd[1]: Reached target time-set.target.
Mar 17 18:35:30.569526 systemd-resolved[1207]: Positive Trust Anchors:
Mar 17 18:35:30.569668 systemd-resolved[1207]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:35:30.569730 systemd-resolved[1207]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:35:30.639383 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:35:30.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:30.649148 augenrules[1225]: No rules
Mar 17 18:35:30.648000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:35:30.648000 audit[1225]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc17dc0370 a2=420 a3=0 items=0 ppid=1204 pid=1225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:30.648000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:35:30.649876 systemd[1]: Finished audit-rules.service.
Mar 17 18:35:30.656915 systemd-resolved[1207]: Defaulting to hostname 'linux'.
Mar 17 18:35:30.658245 systemd[1]: Started systemd-resolved.service.
Mar 17 18:35:30.658422 systemd[1]: Reached target network.target.
Mar 17 18:35:30.658534 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:35:30.684281 systemd-networkd[1058]: ens192: Gained IPv6LL
Mar 17 18:35:30.685420 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:35:30.685637 systemd[1]: Reached target network-online.target.
Mar 17 18:36:47.793477 systemd-resolved[1207]: Clock change detected. Flushing caches.
Mar 17 18:36:47.793578 systemd-timesyncd[1208]: Contacted time server 72.30.35.88:123 (0.flatcar.pool.ntp.org).
Mar 17 18:36:47.793704 systemd-timesyncd[1208]: Initial clock synchronization to Mon 2025-03-17 18:36:47.793349 UTC.
Mar 17 18:36:47.796547 ldconfig[1091]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:36:47.808718 systemd[1]: Finished ldconfig.service.
Mar 17 18:36:47.810041 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:36:47.816202 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:36:47.816421 systemd[1]: Reached target sysinit.target.
Mar 17 18:36:47.816612 systemd[1]: Started motdgen.path.
Mar 17 18:36:47.816738 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:36:47.816992 systemd[1]: Started logrotate.timer.
Mar 17 18:36:47.817236 systemd[1]: Started mdadm.timer.
Mar 17 18:36:47.817336 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:36:47.817449 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:36:47.817476 systemd[1]: Reached target paths.target.
Mar 17 18:36:47.817561 systemd[1]: Reached target timers.target.
Mar 17 18:36:47.817885 systemd[1]: Listening on dbus.socket.
Mar 17 18:36:47.818986 systemd[1]: Starting docker.socket...
Mar 17 18:36:47.825769 systemd[1]: Listening on sshd.socket.
Mar 17 18:36:47.825956 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:47.826306 systemd[1]: Listening on docker.socket.
Mar 17 18:36:47.826443 systemd[1]: Reached target sockets.target.
Mar 17 18:36:47.826536 systemd[1]: Reached target basic.target.
Mar 17 18:36:47.826658 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:36:47.826679 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:36:47.827692 systemd[1]: Starting containerd.service...
Mar 17 18:36:47.828848 systemd[1]: Starting dbus.service...
Mar 17 18:36:47.830373 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:36:47.831230 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:36:47.831799 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:36:47.833016 jq[1235]: false
Mar 17 18:36:47.839227 systemd[1]: Starting kubelet.service...
Mar 17 18:36:47.840206 systemd[1]: Starting motdgen.service...
Mar 17 18:36:47.841134 systemd[1]: Starting prepare-helm.service...
Mar 17 18:36:47.842036 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:36:47.843077 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:36:47.845076 systemd[1]: Starting systemd-logind.service...
Mar 17 18:36:47.845214 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:47.845257 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:36:47.845688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:36:47.861936 jq[1246]: true
Mar 17 18:36:47.846118 systemd[1]: Starting update-engine.service...
Mar 17 18:36:47.847881 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:36:47.852969 systemd[1]: Starting vmtoolsd.service...
Mar 17 18:36:47.854456 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:36:47.862321 jq[1254]: true
Mar 17 18:36:47.854782 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:36:47.856221 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:36:47.856439 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:36:47.871285 systemd[1]: Started vmtoolsd.service.
Mar 17 18:36:47.878233 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:36:47.878343 systemd[1]: Finished motdgen.service.
Mar 17 18:36:47.886979 extend-filesystems[1236]: Found loop1
Mar 17 18:36:47.887296 extend-filesystems[1236]: Found sda
Mar 17 18:36:47.887446 extend-filesystems[1236]: Found sda1
Mar 17 18:36:47.887590 extend-filesystems[1236]: Found sda2
Mar 17 18:36:47.887730 extend-filesystems[1236]: Found sda3
Mar 17 18:36:47.887905 extend-filesystems[1236]: Found usr
Mar 17 18:36:47.888048 extend-filesystems[1236]: Found sda4
Mar 17 18:36:47.888197 extend-filesystems[1236]: Found sda6
Mar 17 18:36:47.888338 extend-filesystems[1236]: Found sda7
Mar 17 18:36:47.888476 extend-filesystems[1236]: Found sda9
Mar 17 18:36:47.888609 extend-filesystems[1236]: Checking size of /dev/sda9
Mar 17 18:36:47.918924 env[1274]: time="2025-03-17T18:36:47.918895297Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:36:47.920561 tar[1253]: linux-amd64/helm
Mar 17 18:36:47.936361 extend-filesystems[1236]: Old size kept for /dev/sda9
Mar 17 18:36:47.936361 extend-filesystems[1236]: Found sr0
Mar 17 18:36:47.936518 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:36:47.936613 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:36:47.936988 systemd-logind[1244]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 18:36:47.936999 systemd-logind[1244]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:36:47.937400 systemd-logind[1244]: New seat seat0.
Mar 17 18:36:47.966815 env[1274]: time="2025-03-17T18:36:47.966788857Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:36:47.972956 bash[1272]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:36:47.973306 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:36:47.977218 env[1274]: time="2025-03-17T18:36:47.977199537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:47.977988 env[1274]: time="2025-03-17T18:36:47.977971247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978039 env[1274]: time="2025-03-17T18:36:47.978029443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978210 env[1274]: time="2025-03-17T18:36:47.978199052Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978262 env[1274]: time="2025-03-17T18:36:47.978252495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978310 env[1274]: time="2025-03-17T18:36:47.978300047Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:36:47.978354 env[1274]: time="2025-03-17T18:36:47.978345036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978433 env[1274]: time="2025-03-17T18:36:47.978423869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978616 env[1274]: time="2025-03-17T18:36:47.978606656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978722 env[1274]: time="2025-03-17T18:36:47.978710580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:36:47.978770 env[1274]: time="2025-03-17T18:36:47.978760472Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:36:47.978835 env[1274]: time="2025-03-17T18:36:47.978825236Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:36:47.978878 env[1274]: time="2025-03-17T18:36:47.978868874Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002190623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002217741Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002226737Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002267702Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002280290Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002288498Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002296673Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002304081Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002311547Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002318599Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002334983Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002348288Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002432996Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:36:48.003950 env[1274]: time="2025-03-17T18:36:48.002494405Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:36:48.003916 systemd[1]: Started containerd.service.
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002650815Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002670606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002678278Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002736093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002746209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002753082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002760837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002767243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002775474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002791260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002800656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002808719Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002885719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002894462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004248 env[1274]: time="2025-03-17T18:36:48.002903612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004462 env[1274]: time="2025-03-17T18:36:48.002914709Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:36:48.004462 env[1274]: time="2025-03-17T18:36:48.002927268Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:36:48.004462 env[1274]: time="2025-03-17T18:36:48.002946567Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:36:48.004462 env[1274]: time="2025-03-17T18:36:48.002964516Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:36:48.004462 env[1274]: time="2025-03-17T18:36:48.003026712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:36:48.004553 env[1274]: time="2025-03-17T18:36:48.003190627Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:36:48.004553 env[1274]: time="2025-03-17T18:36:48.003228261Z" level=info msg="Connect containerd service"
Mar 17 18:36:48.004553 env[1274]: time="2025-03-17T18:36:48.003261378Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:36:48.004553 env[1274]: time="2025-03-17T18:36:48.003611762Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:36:48.004553 env[1274]: time="2025-03-17T18:36:48.003815210Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:36:48.004553 env[1274]: time="2025-03-17T18:36:48.003843024Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:36:48.004553 env[1274]: time="2025-03-17T18:36:48.003880618Z" level=info msg="containerd successfully booted in 0.089506s"
Mar 17 18:36:48.020225 env[1274]: time="2025-03-17T18:36:48.005146004Z" level=info msg="Start subscribing containerd event"
Mar 17 18:36:48.020225 env[1274]: time="2025-03-17T18:36:48.005175764Z" level=info msg="Start recovering state"
Mar 17 18:36:48.020225 env[1274]: time="2025-03-17T18:36:48.005223373Z" level=info msg="Start event monitor"
Mar 17 18:36:48.020225 env[1274]: time="2025-03-17T18:36:48.005239213Z" level=info msg="Start snapshots syncer"
Mar 17 18:36:48.020225 env[1274]: time="2025-03-17T18:36:48.005247171Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:36:48.020225 env[1274]: time="2025-03-17T18:36:48.005253266Z" level=info msg="Start streaming server"
Mar 17 18:36:48.049024 dbus-daemon[1234]: [system] SELinux support is enabled
Mar 17 18:36:48.049146 systemd[1]: Started dbus.service.
Mar 17 18:36:48.050557 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:36:48.050574 systemd[1]: Reached target system-config.target.
Mar 17 18:36:48.050707 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:36:48.050722 systemd[1]: Reached target user-config.target.
Mar 17 18:36:48.053785 systemd[1]: Started systemd-logind.service.
Mar 17 18:36:48.055478 dbus-daemon[1234]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 18:36:48.059100 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 18:36:48.064735 update_engine[1245]: I0317 18:36:48.063704  1245 main.cc:92] Flatcar Update Engine starting
Mar 17 18:36:48.066430 systemd[1]: Started update-engine.service.
Mar 17 18:36:48.066589 update_engine[1245]: I0317 18:36:48.066466  1245 update_check_scheduler.cc:74] Next update check in 5m50s
Mar 17 18:36:48.068493 systemd[1]: Started locksmithd.service.
Mar 17 18:36:48.340241 tar[1253]: linux-amd64/LICENSE
Mar 17 18:36:48.340399 tar[1253]: linux-amd64/README.md
Mar 17 18:36:48.343906 systemd[1]: Finished prepare-helm.service.
Mar 17 18:36:48.510563 locksmithd[1298]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:36:49.322753 sshd_keygen[1256]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:36:49.337160 systemd[1]: Finished sshd-keygen.service.
Mar 17 18:36:49.338436 systemd[1]: Starting issuegen.service...
Mar 17 18:36:49.342165 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:36:49.342278 systemd[1]: Finished issuegen.service.
Mar 17 18:36:49.343604 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 18:36:49.351920 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 18:36:49.353072 systemd[1]: Started getty@tty1.service.
Mar 17 18:36:49.354077 systemd[1]: Started serial-getty@ttyS0.service.
Mar 17 18:36:49.354301 systemd[1]: Reached target getty.target.
Mar 17 18:36:50.223824 systemd[1]: Started kubelet.service.
Mar 17 18:36:50.224225 systemd[1]: Reached target multi-user.target.
Mar 17 18:36:50.225488 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 18:36:50.233280 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 18:36:50.233392 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 18:36:50.233612 systemd[1]: Startup finished in 919ms (kernel) + 5.345s (initrd) + 9.284s (userspace) = 15.549s.
Mar 17 18:36:50.449104 login[1362]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 17 18:36:50.450777 login[1363]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 17 18:36:50.470385 systemd[1]: Created slice user-500.slice.
Mar 17 18:36:50.471236 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 18:36:50.473582 systemd-logind[1244]: New session 2 of user core.
Mar 17 18:36:50.475231 systemd-logind[1244]: New session 1 of user core.
Mar 17 18:36:50.486647 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 18:36:50.487727 systemd[1]: Starting user@500.service...
Mar 17 18:36:50.499531 (systemd)[1369]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:50.591383 systemd[1369]: Queued start job for default target default.target.
Mar 17 18:36:50.592081 systemd[1369]: Reached target paths.target.
Mar 17 18:36:50.592116 systemd[1369]: Reached target sockets.target.
Mar 17 18:36:50.592125 systemd[1369]: Reached target timers.target.
Mar 17 18:36:50.592133 systemd[1369]: Reached target basic.target.
Mar 17 18:36:50.592197 systemd[1]: Started user@500.service.
Mar 17 18:36:50.593051 systemd[1]: Started session-1.scope.
Mar 17 18:36:50.593615 systemd[1]: Started session-2.scope.
Mar 17 18:36:50.594055 systemd[1369]: Reached target default.target.
Mar 17 18:36:50.594151 systemd[1369]: Startup finished in 90ms.
Mar 17 18:36:51.956505 kubelet[1366]: E0317 18:36:51.956467    1366 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:36:51.957699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:36:51.957780 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:37:02.122794 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:37:02.122924 systemd[1]: Stopped kubelet.service.
Mar 17 18:37:02.124004 systemd[1]: Starting kubelet.service...
Mar 17 18:37:02.367899 systemd[1]: Started kubelet.service.
Mar 17 18:37:02.425234 kubelet[1398]: E0317 18:37:02.425170    1398 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:37:02.428031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:37:02.428147 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:37:12.622920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:37:12.623068 systemd[1]: Stopped kubelet.service.
Mar 17 18:37:12.624358 systemd[1]: Starting kubelet.service...
Mar 17 18:37:13.044023 systemd[1]: Started kubelet.service.
Mar 17 18:37:13.095529 kubelet[1408]: E0317 18:37:13.095503    1408 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:37:13.096774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:37:13.096846 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:37:23.122830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 18:37:23.122983 systemd[1]: Stopped kubelet.service.
Mar 17 18:37:23.124313 systemd[1]: Starting kubelet.service...
Mar 17 18:37:23.439118 systemd[1]: Started kubelet.service.
Mar 17 18:37:23.474315 kubelet[1418]: E0317 18:37:23.474270    1418 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:37:23.475354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:37:23.475425 systemd[1]: kubelet.service: Failed with result 'exit-code'.
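The kubelet crash loop above is expected at this point in provisioning: the unit starts before /var/lib/kubelet/config.yaml exists, and that file is normally written later, typically by kubeadm init or kubeadm join. Purely as an illustrative sketch of what such a file contains (not the configuration this node eventually receives), a minimal KubeletConfiguration looks like:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches the CgroupDriver the kubelet reports further down
    staticPodPath: /etc/kubernetes/manifests   # matches the static pod path logged later
    authentication:
      anonymous:
        enabled: false

Until that file appears, systemd keeps rescheduling the unit at roughly ten-second intervals, which is what the rising restart counter records.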
Mar 17 18:37:28.131932 systemd[1]: Created slice system-sshd.slice.
Mar 17 18:37:28.132960 systemd[1]: Started sshd@0-139.178.70.110:22-139.178.68.195:39624.service.
Mar 17 18:37:28.173945 sshd[1425]: Accepted publickey for core from 139.178.68.195 port 39624 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:37:28.174798 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:28.177962 systemd-logind[1244]: New session 3 of user core.
Mar 17 18:37:28.178500 systemd[1]: Started session-3.scope.
Mar 17 18:37:28.227282 systemd[1]: Started sshd@1-139.178.70.110:22-139.178.68.195:39636.service.
Mar 17 18:37:28.255808 sshd[1430]: Accepted publickey for core from 139.178.68.195 port 39636 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:37:28.256762 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:28.259574 systemd-logind[1244]: New session 4 of user core.
Mar 17 18:37:28.260322 systemd[1]: Started session-4.scope.
Mar 17 18:37:28.311408 sshd[1430]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:28.313785 systemd[1]: Started sshd@2-139.178.70.110:22-139.178.68.195:39638.service.
Mar 17 18:37:28.316240 systemd[1]: sshd@1-139.178.70.110:22-139.178.68.195:39636.service: Deactivated successfully.
Mar 17 18:37:28.316601 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:37:28.317188 systemd-logind[1244]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:37:28.317592 systemd-logind[1244]: Removed session 4.
Mar 17 18:37:28.341615 sshd[1435]: Accepted publickey for core from 139.178.68.195 port 39638 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:37:28.342283 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:28.345107 systemd[1]: Started session-5.scope.
Mar 17 18:37:28.345748 systemd-logind[1244]: New session 5 of user core.
Mar 17 18:37:28.392926 sshd[1435]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:28.395442 systemd[1]: sshd@2-139.178.70.110:22-139.178.68.195:39638.service: Deactivated successfully.
Mar 17 18:37:28.395925 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:37:28.396394 systemd-logind[1244]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:37:28.397167 systemd[1]: Started sshd@3-139.178.70.110:22-139.178.68.195:39652.service.
Mar 17 18:37:28.397736 systemd-logind[1244]: Removed session 5.
Mar 17 18:37:28.427487 sshd[1442]: Accepted publickey for core from 139.178.68.195 port 39652 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:37:28.428344 sshd[1442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:28.432145 systemd[1]: Started session-6.scope.
Mar 17 18:37:28.432436 systemd-logind[1244]: New session 6 of user core.
Mar 17 18:37:28.485024 sshd[1442]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:28.487331 systemd[1]: sshd@3-139.178.70.110:22-139.178.68.195:39652.service: Deactivated successfully.
Mar 17 18:37:28.487758 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:37:28.488379 systemd-logind[1244]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:37:28.489205 systemd[1]: Started sshd@4-139.178.70.110:22-139.178.68.195:39668.service.
Mar 17 18:37:28.490068 systemd-logind[1244]: Removed session 6.
Mar 17 18:37:28.519414 sshd[1448]: Accepted publickey for core from 139.178.68.195 port 39668 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:37:28.520208 sshd[1448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:28.523100 systemd-logind[1244]: New session 7 of user core.
Mar 17 18:37:28.523592 systemd[1]: Started session-7.scope.
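The publickey logins for core above rely on whatever authorized key was injected when the machine was provisioned; on Flatcar that is usually carried by an Ignition config, often authored as Butane. The snippet below is only a hypothetical sketch of that mechanism (flatcar variant 1.0.0 assumed, key value is a placeholder), not the actual provisioning data for this host:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-rsa AAAA...placeholder user@example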
Mar 17 18:37:28.584394 sudo[1451]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:37:28.584572 sudo[1451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:37:28.601318 systemd[1]: Starting docker.service...
Mar 17 18:37:28.623595 env[1461]: time="2025-03-17T18:37:28.623564096Z" level=info msg="Starting up"
Mar 17 18:37:28.624317 env[1461]: time="2025-03-17T18:37:28.624303261Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:37:28.624317 env[1461]: time="2025-03-17T18:37:28.624314850Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:37:28.624373 env[1461]: time="2025-03-17T18:37:28.624328116Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 17 18:37:28.624373 env[1461]: time="2025-03-17T18:37:28.624334072Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:37:28.625315 env[1461]: time="2025-03-17T18:37:28.625281765Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:37:28.625315 env[1461]: time="2025-03-17T18:37:28.625290959Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:37:28.625315 env[1461]: time="2025-03-17T18:37:28.625297605Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Mar 17 18:37:28.625315 env[1461]: time="2025-03-17T18:37:28.625302711Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:37:28.638059 env[1461]: time="2025-03-17T18:37:28.638045643Z" level=info msg="Loading containers: start."
Mar 17 18:37:28.714106 kernel: Initializing XFRM netlink socket
Mar 17 18:37:28.736470 env[1461]: time="2025-03-17T18:37:28.736451602Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:37:28.772739 systemd-networkd[1058]: docker0: Link UP
Mar 17 18:37:28.781020 env[1461]: time="2025-03-17T18:37:28.781002822Z" level=info msg="Loading containers: done."
Mar 17 18:37:28.790145 env[1461]: time="2025-03-17T18:37:28.790120684Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:37:28.790240 env[1461]: time="2025-03-17T18:37:28.790227527Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:37:28.790295 env[1461]: time="2025-03-17T18:37:28.790280581Z" level=info msg="Daemon has completed initialization"
Mar 17 18:37:28.797251 systemd[1]: Started docker.service.
Mar 17 18:37:28.800618 env[1461]: time="2025-03-17T18:37:28.800587453Z" level=info msg="API listen on /run/docker.sock"
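The bridge message above notes that docker0 fell back to 172.17.0.0/16 and that --bip can override it. The same setting is more commonly carried in /etc/docker/daemon.json; a minimal sketch, with the subnet as an example value only:

    {
      "bip": "172.18.0.1/24"
    }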
Mar 17 18:37:29.655169 env[1274]: time="2025-03-17T18:37:29.655137466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 18:37:30.191236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297845273.mount: Deactivated successfully.
Mar 17 18:37:31.464581 env[1274]: time="2025-03-17T18:37:31.464523352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:31.465456 env[1274]: time="2025-03-17T18:37:31.465438601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:31.466818 env[1274]: time="2025-03-17T18:37:31.466805570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:31.467945 env[1274]: time="2025-03-17T18:37:31.467931494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:31.468189 env[1274]: time="2025-03-17T18:37:31.468175180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 17 18:37:31.474025 env[1274]: time="2025-03-17T18:37:31.474011485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 18:37:32.934711 env[1274]: time="2025-03-17T18:37:32.934679413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:32.935677 env[1274]: time="2025-03-17T18:37:32.935659993Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:32.936708 env[1274]: time="2025-03-17T18:37:32.936691910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:32.937731 env[1274]: time="2025-03-17T18:37:32.937717997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:32.938184 env[1274]: time="2025-03-17T18:37:32.938166364Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 17 18:37:32.944266 env[1274]: time="2025-03-17T18:37:32.944241549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 18:37:33.622854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 17 18:37:33.623015 systemd[1]: Stopped kubelet.service.
Mar 17 18:37:33.624496 systemd[1]: Starting kubelet.service...
Mar 17 18:37:33.680468 systemd[1]: Started kubelet.service.
Mar 17 18:37:33.689518 update_engine[1245]: I0317 18:37:33.689119  1245 update_attempter.cc:509] Updating boot flags...
Mar 17 18:37:33.731313 kubelet[1603]: E0317 18:37:33.731285    1603 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:37:33.734542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:37:33.734621 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:37:34.231340 env[1274]: time="2025-03-17T18:37:34.231308768Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:34.247463 env[1274]: time="2025-03-17T18:37:34.247437289Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:34.260650 env[1274]: time="2025-03-17T18:37:34.260623782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:34.265392 env[1274]: time="2025-03-17T18:37:34.265366504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:34.266290 env[1274]: time="2025-03-17T18:37:34.266269084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 17 18:37:34.273194 env[1274]: time="2025-03-17T18:37:34.273176360Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 18:37:35.416541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437152603.mount: Deactivated successfully.
Mar 17 18:37:35.855510 env[1274]: time="2025-03-17T18:37:35.855266669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:35.861677 env[1274]: time="2025-03-17T18:37:35.861657128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:35.868750 env[1274]: time="2025-03-17T18:37:35.868734232Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:35.870814 env[1274]: time="2025-03-17T18:37:35.870794657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:35.871119 env[1274]: time="2025-03-17T18:37:35.871100327Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 17 18:37:35.876996 env[1274]: time="2025-03-17T18:37:35.876968345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 18:37:36.384982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411941641.mount: Deactivated successfully.
Mar 17 18:37:37.423261 env[1274]: time="2025-03-17T18:37:37.423234306Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.424517 env[1274]: time="2025-03-17T18:37:37.424503947Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.425791 env[1274]: time="2025-03-17T18:37:37.425778757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.427004 env[1274]: time="2025-03-17T18:37:37.426991052Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.427516 env[1274]: time="2025-03-17T18:37:37.427500938Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 18:37:37.433974 env[1274]: time="2025-03-17T18:37:37.433943661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 18:37:37.852899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342977201.mount: Deactivated successfully.
Mar 17 18:37:37.855134 env[1274]: time="2025-03-17T18:37:37.855112905Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.855562 env[1274]: time="2025-03-17T18:37:37.855550231Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.856326 env[1274]: time="2025-03-17T18:37:37.856314902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.857011 env[1274]: time="2025-03-17T18:37:37.856998809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:37.857340 env[1274]: time="2025-03-17T18:37:37.857321280Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 17 18:37:37.863474 env[1274]: time="2025-03-17T18:37:37.863446019Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 17 18:37:38.552117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341625767.mount: Deactivated successfully.
Mar 17 18:37:40.757778 env[1274]: time="2025-03-17T18:37:40.757743833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:40.771407 env[1274]: time="2025-03-17T18:37:40.771375209Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:40.776796 env[1274]: time="2025-03-17T18:37:40.776761834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:40.793036 env[1274]: time="2025-03-17T18:37:40.793009704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:40.793630 env[1274]: time="2025-03-17T18:37:40.793611364Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 17 18:37:42.552414 systemd[1]: Stopped kubelet.service.
Mar 17 18:37:42.553760 systemd[1]: Starting kubelet.service...
Mar 17 18:37:42.567310 systemd[1]: Reloading.
Mar 17 18:37:42.644955 /usr/lib/systemd/system-generators/torcx-generator[1732]: time="2025-03-17T18:37:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:37:42.644976 /usr/lib/systemd/system-generators/torcx-generator[1732]: time="2025-03-17T18:37:42Z" level=info msg="torcx already run"
Mar 17 18:37:42.680913 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:37:42.681022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:37:42.692726 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:37:42.758292 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 17 18:37:42.758499 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 17 18:37:42.758751 systemd[1]: Stopped kubelet.service.
Mar 17 18:37:42.760618 systemd[1]: Starting kubelet.service...
Mar 17 18:37:43.558052 systemd[1]: Started kubelet.service.
Mar 17 18:37:43.678108 kubelet[1798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:37:43.678350 kubelet[1798]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:37:43.678390 kubelet[1798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
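Two of the deprecation warnings above carry the same remedy: move the flag into the kubelet config file. As a hedged illustration (field names from kubelet.config.k8s.io/v1beta1; the containerd socket path is the conventional default rather than something read from this host, and the plugin directory is the one the kubelet reports a few lines further down):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/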
Mar 17 18:37:43.679497 kubelet[1798]: I0317 18:37:43.679478    1798 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:37:43.930399 kubelet[1798]: I0317 18:37:43.930375    1798 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:37:43.930399 kubelet[1798]: I0317 18:37:43.930394    1798 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:37:43.930559 kubelet[1798]: I0317 18:37:43.930548    1798 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:37:43.941001 kubelet[1798]: I0317 18:37:43.940696    1798 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:37:43.941515 kubelet[1798]: E0317 18:37:43.941504    1798 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:43.948004 kubelet[1798]: I0317 18:37:43.947994    1798 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Mar 17 18:37:43.948217 kubelet[1798]: I0317 18:37:43.948196    1798 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:37:43.948360 kubelet[1798]: I0317 18:37:43.948260    1798 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
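The HardEvictionThresholds embedded in the node config above match the kubelet's usual built-in defaults. Expressed as the equivalent KubeletConfiguration evictionHard map (a sketch of the same values that JSON encodes, not a file read from this host):

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"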
Mar 17 18:37:43.948949 kubelet[1798]: I0317 18:37:43.948939    1798 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:37:43.948997 kubelet[1798]: I0317 18:37:43.948990    1798 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:37:43.949117 kubelet[1798]: I0317 18:37:43.949110    1798 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:37:43.949869 kubelet[1798]: I0317 18:37:43.949862    1798 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:37:43.949939 kubelet[1798]: I0317 18:37:43.949927    1798 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:37:43.949997 kubelet[1798]: I0317 18:37:43.949989    1798 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:37:43.950047 kubelet[1798]: I0317 18:37:43.950040    1798 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:37:43.956197 kubelet[1798]: I0317 18:37:43.956185    1798 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:37:43.957355 kubelet[1798]: I0317 18:37:43.957341    1798 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:37:43.957391 kubelet[1798]: W0317 18:37:43.957375    1798 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 18:37:43.957676 kubelet[1798]: I0317 18:37:43.957659    1798 server.go:1264] "Started kubelet"
Mar 17 18:37:43.958153 kubelet[1798]: W0317 18:37:43.957739    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:43.958153 kubelet[1798]: E0317 18:37:43.957774    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:43.967836 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:37:43.967954 kubelet[1798]: I0317 18:37:43.967940    1798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:37:43.972067 kubelet[1798]: W0317 18:37:43.972044    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:43.972152 kubelet[1798]: E0317 18:37:43.972143    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:43.972299 kubelet[1798]: I0317 18:37:43.972285    1798 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:37:43.972926 kubelet[1798]: I0317 18:37:43.972916    1798 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:37:43.973213 kubelet[1798]: I0317 18:37:43.973200    1798 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:37:43.973470 kubelet[1798]: I0317 18:37:43.973443    1798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:37:43.973609 kubelet[1798]: I0317 18:37:43.973601    1798 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:37:43.973841 kubelet[1798]: E0317 18:37:43.973823    1798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="200ms"
Mar 17 18:37:43.974049 kubelet[1798]: E0317 18:37:43.973995    1798 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.110:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.110:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182dab03dddb5d20  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:37:43.957642528 +0000 UTC m=+0.397163672,LastTimestamp:2025-03-17 18:37:43.957642528 +0000 UTC m=+0.397163672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 18:37:43.974218 kubelet[1798]: I0317 18:37:43.974209    1798 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:37:43.974305 kubelet[1798]: I0317 18:37:43.974296    1798 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:37:43.975349 kubelet[1798]: I0317 18:37:43.974952    1798 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:37:43.975349 kubelet[1798]: I0317 18:37:43.974982    1798 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:37:43.975349 kubelet[1798]: W0317 18:37:43.975190    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:43.975349 kubelet[1798]: E0317 18:37:43.975241    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:43.976118 kubelet[1798]: I0317 18:37:43.976109    1798 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:37:43.998754 kubelet[1798]: E0317 18:37:43.998741    1798 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:37:44.000473 kubelet[1798]: I0317 18:37:44.000463    1798 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:37:44.000534 kubelet[1798]: I0317 18:37:44.000522    1798 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:37:44.000588 kubelet[1798]: I0317 18:37:44.000582    1798 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:37:44.001535 kubelet[1798]: I0317 18:37:44.001528    1798 policy_none.go:49] "None policy: Start"
Mar 17 18:37:44.001909 kubelet[1798]: I0317 18:37:44.001897    1798 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:37:44.001945 kubelet[1798]: I0317 18:37:44.001914    1798 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:37:44.004615 systemd[1]: Created slice kubepods.slice.
Mar 17 18:37:44.008146 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 18:37:44.011888 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 18:37:44.013429 kubelet[1798]: I0317 18:37:44.013410    1798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:37:44.014370 kubelet[1798]: I0317 18:37:44.014361    1798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:37:44.014424 kubelet[1798]: I0317 18:37:44.014417    1798 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:37:44.014519 kubelet[1798]: I0317 18:37:44.014512    1798 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:37:44.014579 kubelet[1798]: E0317 18:37:44.014569    1798 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 18:37:44.015623 kubelet[1798]: I0317 18:37:44.015609    1798 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:37:44.015692 kubelet[1798]: W0317 18:37:44.015669    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:44.015747 kubelet[1798]: E0317 18:37:44.015739    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:44.015791 kubelet[1798]: I0317 18:37:44.015688    1798 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:37:44.016760 kubelet[1798]: I0317 18:37:44.016752    1798 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:37:44.016887 kubelet[1798]: E0317 18:37:44.016466    1798 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 17 18:37:44.074775 kubelet[1798]: I0317 18:37:44.074744    1798 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:37:44.075025 kubelet[1798]: E0317 18:37:44.075008    1798 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost"
Mar 17 18:37:44.115243 kubelet[1798]: I0317 18:37:44.115199    1798 topology_manager.go:215] "Topology Admit Handler" podUID="1d491a388749dc66d23e0cfa0fee1b89" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 18:37:44.116159 kubelet[1798]: I0317 18:37:44.116145    1798 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 18:37:44.117407 kubelet[1798]: I0317 18:37:44.117272    1798 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 18:37:44.120556 systemd[1]: Created slice kubepods-burstable-pod1d491a388749dc66d23e0cfa0fee1b89.slice.
Mar 17 18:37:44.129391 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice.
Mar 17 18:37:44.139986 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice.
Mar 17 18:37:44.174683 kubelet[1798]: E0317 18:37:44.174635    1798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="400ms"
Mar 17 18:37:44.276866 kubelet[1798]: I0317 18:37:44.276746    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 18:37:44.276866 kubelet[1798]: I0317 18:37:44.276777    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d491a388749dc66d23e0cfa0fee1b89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d491a388749dc66d23e0cfa0fee1b89\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:37:44.276866 kubelet[1798]: I0317 18:37:44.276793    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d491a388749dc66d23e0cfa0fee1b89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d491a388749dc66d23e0cfa0fee1b89\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:37:44.277447 kubelet[1798]: I0317 18:37:44.276806    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d491a388749dc66d23e0cfa0fee1b89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1d491a388749dc66d23e0cfa0fee1b89\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:37:44.277607 kubelet[1798]: I0317 18:37:44.277575    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:44.277761 kubelet[1798]: I0317 18:37:44.277748    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:44.277886 kubelet[1798]: I0317 18:37:44.277874    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:44.278029 kubelet[1798]: I0317 18:37:44.278017    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:44.278185 kubelet[1798]: I0317 18:37:44.278162    1798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:44.278988 kubelet[1798]: I0317 18:37:44.278966    1798 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:37:44.279388 kubelet[1798]: E0317 18:37:44.279372    1798 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost"
Mar 17 18:37:44.428575 env[1274]: time="2025-03-17T18:37:44.428270315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1d491a388749dc66d23e0cfa0fee1b89,Namespace:kube-system,Attempt:0,}"
Mar 17 18:37:44.431618 env[1274]: time="2025-03-17T18:37:44.431581812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 17 18:37:44.442018 env[1274]: time="2025-03-17T18:37:44.441994752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
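The three sandboxes requested here are the control-plane static pods the kubelet admitted from its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" line above). The real manifests are generated during provisioning and are considerably longer; purely for orientation, a static pod manifest has this shape (image tag taken from the pull earlier in the log, everything else illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
        - name: kube-apiserver
          image: registry.k8s.io/kube-apiserver:v1.30.11
          command: ["kube-apiserver"]

The kubelet reports such pods with the node name appended, which is why they appear above as kube-apiserver-localhost, kube-controller-manager-localhost and kube-scheduler-localhost.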
Mar 17 18:37:44.575154 kubelet[1798]: E0317 18:37:44.575097    1798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="800ms"
Mar 17 18:37:44.680463 kubelet[1798]: I0317 18:37:44.680439    1798 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:37:44.680720 kubelet[1798]: E0317 18:37:44.680647    1798 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost"
Mar 17 18:37:44.909472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount828367007.mount: Deactivated successfully.
Mar 17 18:37:44.911584 env[1274]: time="2025-03-17T18:37:44.911561790Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.912184 env[1274]: time="2025-03-17T18:37:44.912169672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.912657 env[1274]: time="2025-03-17T18:37:44.912643374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.913051 env[1274]: time="2025-03-17T18:37:44.913037043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.913499 env[1274]: time="2025-03-17T18:37:44.913486051Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.915220 env[1274]: time="2025-03-17T18:37:44.915202912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.916950 env[1274]: time="2025-03-17T18:37:44.916937621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.918647 env[1274]: time="2025-03-17T18:37:44.918627056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.919039 env[1274]: time="2025-03-17T18:37:44.919004444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.919448 env[1274]: time="2025-03-17T18:37:44.919431547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.919809 env[1274]: time="2025-03-17T18:37:44.919794170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.920161 env[1274]: time="2025-03-17T18:37:44.920147096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:44.942984 env[1274]: time="2025-03-17T18:37:44.936276303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:37:44.942984 env[1274]: time="2025-03-17T18:37:44.936303133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:37:44.942984 env[1274]: time="2025-03-17T18:37:44.936310085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:37:44.942984 env[1274]: time="2025-03-17T18:37:44.936414513Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1106d8b319fd8059088007e3a8d435fdbb238f119396e34efa457df0c5cbba pid=1838 runtime=io.containerd.runc.v2
Mar 17 18:37:44.943203 env[1274]: time="2025-03-17T18:37:44.937741657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:37:44.943203 env[1274]: time="2025-03-17T18:37:44.937777403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:37:44.943203 env[1274]: time="2025-03-17T18:37:44.937784831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:37:44.943203 env[1274]: time="2025-03-17T18:37:44.937898325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe961abac8d082f29504e46061576f1394c5d453aa5b299962fe153c3642af2 pid=1848 runtime=io.containerd.runc.v2
Mar 17 18:37:44.953061 env[1274]: time="2025-03-17T18:37:44.952950352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:37:44.953061 env[1274]: time="2025-03-17T18:37:44.952974272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:37:44.953061 env[1274]: time="2025-03-17T18:37:44.952981113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:37:44.953195 env[1274]: time="2025-03-17T18:37:44.953076765Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d183ca0e64b6060c5beaf1bfae3b26e09a088dd893f9f0bccd23a836086f31a pid=1877 runtime=io.containerd.runc.v2
Mar 17 18:37:44.958861 systemd[1]: Started cri-containerd-dd1106d8b319fd8059088007e3a8d435fdbb238f119396e34efa457df0c5cbba.scope.
Mar 17 18:37:44.980699 systemd[1]: Started cri-containerd-5d183ca0e64b6060c5beaf1bfae3b26e09a088dd893f9f0bccd23a836086f31a.scope.
Mar 17 18:37:44.983788 systemd[1]: Started cri-containerd-ebe961abac8d082f29504e46061576f1394c5d453aa5b299962fe153c3642af2.scope.
Mar 17 18:37:45.018696 env[1274]: time="2025-03-17T18:37:45.018669394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1d491a388749dc66d23e0cfa0fee1b89,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd1106d8b319fd8059088007e3a8d435fdbb238f119396e34efa457df0c5cbba\""
Mar 17 18:37:45.025005 env[1274]: time="2025-03-17T18:37:45.024980130Z" level=info msg="CreateContainer within sandbox \"dd1106d8b319fd8059088007e3a8d435fdbb238f119396e34efa457df0c5cbba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 18:37:45.033605 env[1274]: time="2025-03-17T18:37:45.033577994Z" level=info msg="CreateContainer within sandbox \"dd1106d8b319fd8059088007e3a8d435fdbb238f119396e34efa457df0c5cbba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6bbd8cf482d7029a5028263a2e07e3e5117cbc7ea08b7a1f287700da496a3fa\""
Mar 17 18:37:45.033941 env[1274]: time="2025-03-17T18:37:45.033924418Z" level=info msg="StartContainer for \"d6bbd8cf482d7029a5028263a2e07e3e5117cbc7ea08b7a1f287700da496a3fa\""
Mar 17 18:37:45.038505 env[1274]: time="2025-03-17T18:37:45.038482751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebe961abac8d082f29504e46061576f1394c5d453aa5b299962fe153c3642af2\""
Mar 17 18:37:45.040004 env[1274]: time="2025-03-17T18:37:45.039985825Z" level=info msg="CreateContainer within sandbox \"ebe961abac8d082f29504e46061576f1394c5d453aa5b299962fe153c3642af2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 18:37:45.046616 systemd[1]: Started cri-containerd-d6bbd8cf482d7029a5028263a2e07e3e5117cbc7ea08b7a1f287700da496a3fa.scope.
Mar 17 18:37:45.052518 env[1274]: time="2025-03-17T18:37:45.052489388Z" level=info msg="CreateContainer within sandbox \"ebe961abac8d082f29504e46061576f1394c5d453aa5b299962fe153c3642af2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"270aaece15db7c8d60ec31add02cdf373e34766d512dc04239ba08f390656bff\""
Mar 17 18:37:45.052803 env[1274]: time="2025-03-17T18:37:45.052787583Z" level=info msg="StartContainer for \"270aaece15db7c8d60ec31add02cdf373e34766d512dc04239ba08f390656bff\""
Mar 17 18:37:45.061764 kubelet[1798]: W0317 18:37:45.061690    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.061764 kubelet[1798]: E0317 18:37:45.061748    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.066570 env[1274]: time="2025-03-17T18:37:45.066546972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d183ca0e64b6060c5beaf1bfae3b26e09a088dd893f9f0bccd23a836086f31a\""
Mar 17 18:37:45.069206 env[1274]: time="2025-03-17T18:37:45.069190128Z" level=info msg="CreateContainer within sandbox \"5d183ca0e64b6060c5beaf1bfae3b26e09a088dd893f9f0bccd23a836086f31a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 18:37:45.075634 env[1274]: time="2025-03-17T18:37:45.075598952Z" level=info msg="CreateContainer within sandbox \"5d183ca0e64b6060c5beaf1bfae3b26e09a088dd893f9f0bccd23a836086f31a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0cebf5f407c5acde3e00690e399ed4306f9255902c4fa0e8492b296d75547fb0\""
Mar 17 18:37:45.076279 env[1274]: time="2025-03-17T18:37:45.076260808Z" level=info msg="StartContainer for \"0cebf5f407c5acde3e00690e399ed4306f9255902c4fa0e8492b296d75547fb0\""
Mar 17 18:37:45.080276 systemd[1]: Started cri-containerd-270aaece15db7c8d60ec31add02cdf373e34766d512dc04239ba08f390656bff.scope.
Mar 17 18:37:45.095216 systemd[1]: Started cri-containerd-0cebf5f407c5acde3e00690e399ed4306f9255902c4fa0e8492b296d75547fb0.scope.
Mar 17 18:37:45.104406 env[1274]: time="2025-03-17T18:37:45.104379288Z" level=info msg="StartContainer for \"d6bbd8cf482d7029a5028263a2e07e3e5117cbc7ea08b7a1f287700da496a3fa\" returns successfully"
Mar 17 18:37:45.127698 env[1274]: time="2025-03-17T18:37:45.127661591Z" level=info msg="StartContainer for \"270aaece15db7c8d60ec31add02cdf373e34766d512dc04239ba08f390656bff\" returns successfully"
Mar 17 18:37:45.137212 env[1274]: time="2025-03-17T18:37:45.137188457Z" level=info msg="StartContainer for \"0cebf5f407c5acde3e00690e399ed4306f9255902c4fa0e8492b296d75547fb0\" returns successfully"
Mar 17 18:37:45.156069 kubelet[1798]: W0317 18:37:45.156002    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.156069 kubelet[1798]: E0317 18:37:45.156053    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.322943 kubelet[1798]: W0317 18:37:45.322856    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.322943 kubelet[1798]: E0317 18:37:45.322897    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.110:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.376148 kubelet[1798]: E0317 18:37:45.376121    1798 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.110:6443: connect: connection refused" interval="1.6s"
Mar 17 18:37:45.440717 kubelet[1798]: W0317 18:37:45.440655    1798 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.440717 kubelet[1798]: E0317 18:37:45.440697    1798 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:45.481974 kubelet[1798]: I0317 18:37:45.481788    1798 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:37:45.481974 kubelet[1798]: E0317 18:37:45.481950    1798 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.110:6443/api/v1/nodes\": dial tcp 139.178.70.110:6443: connect: connection refused" node="localhost"
Mar 17 18:37:46.064152 kubelet[1798]: E0317 18:37:46.064115    1798 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.110:6443: connect: connection refused
Mar 17 18:37:47.083139 kubelet[1798]: I0317 18:37:47.083123    1798 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:37:47.705210 kubelet[1798]: E0317 18:37:47.705165    1798 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 17 18:37:47.778538 kubelet[1798]: I0317 18:37:47.778513    1798 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 18:37:47.783901 kubelet[1798]: E0317 18:37:47.783874    1798 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:37:47.883979 kubelet[1798]: E0317 18:37:47.883961    1798 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:37:47.984401 kubelet[1798]: E0317 18:37:47.984338    1798 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:37:48.084709 kubelet[1798]: E0317 18:37:48.084681    1798 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 18:37:48.185481 kubelet[1798]: E0317 18:37:48.185454    1798 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
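The repeating "node \"localhost\" not found" lines stop once the Node object reaches the kubelet's informer cache; registration itself succeeded at 18:37:47.778. A hedged client-go sketch of waiting for that object; the kubeconfig path used here is an assumption for illustration and does not come from this log:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path for illustration; adjust for your node.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until the Node object "localhost" exists, i.e. the point where
        // the "node not found" messages above stop appearing.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Nodes().Get(ctx, "localhost", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // keep waiting
                }
                return err == nil, err
            })
        fmt.Println("node registered:", err == nil)
    }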
Mar 17 18:37:48.968302 kubelet[1798]: I0317 18:37:48.968277    1798 apiserver.go:52] "Watching apiserver"
Mar 17 18:37:48.975430 kubelet[1798]: I0317 18:37:48.975402    1798 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:37:49.579251 systemd[1]: Reloading.
Mar 17 18:37:49.647710 /usr/lib/systemd/system-generators/torcx-generator[2095]: time="2025-03-17T18:37:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:37:49.648487 /usr/lib/systemd/system-generators/torcx-generator[2095]: time="2025-03-17T18:37:49Z" level=info msg="torcx already run"
Mar 17 18:37:49.704979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:37:49.704994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:37:49.716472 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:37:49.785949 kubelet[1798]: E0317 18:37:49.785862    1798 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.182dab03dddb5d20  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:37:43.957642528 +0000 UTC m=+0.397163672,LastTimestamp:2025-03-17 18:37:43.957642528 +0000 UTC m=+0.397163672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 18:37:49.786024 systemd[1]: Stopping kubelet.service...
Mar 17 18:37:49.803451 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:37:49.803561 systemd[1]: Stopped kubelet.service.
Mar 17 18:37:49.805041 systemd[1]: Starting kubelet.service...
Mar 17 18:37:50.805638 systemd[1]: Started kubelet.service.
Mar 17 18:37:50.890909 kubelet[2158]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:37:50.890909 kubelet[2158]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:37:50.890909 kubelet[2158]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:37:50.891152 kubelet[2158]: I0317 18:37:50.890934    2158 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:37:50.893933 kubelet[2158]: I0317 18:37:50.893569    2158 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 18:37:50.893933 kubelet[2158]: I0317 18:37:50.893585    2158 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:37:50.893933 kubelet[2158]: I0317 18:37:50.893696    2158 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 18:37:50.895671 kubelet[2158]: I0317 18:37:50.895545    2158 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 18:37:50.897104 kubelet[2158]: I0317 18:37:50.897078    2158 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:37:50.897401 sudo[2169]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 17 18:37:50.897532 sudo[2169]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Mar 17 18:37:50.904420 kubelet[2158]: I0317 18:37:50.904388    2158 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Mar 17 18:37:50.904659 kubelet[2158]: I0317 18:37:50.904637    2158 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:37:50.904819 kubelet[2158]: I0317 18:37:50.904704    2158 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 18:37:50.904920 kubelet[2158]: I0317 18:37:50.904912    2158 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:37:50.904969 kubelet[2158]: I0317 18:37:50.904962    2158 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 18:37:50.905073 kubelet[2158]: I0317 18:37:50.905064    2158 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:37:50.905193 kubelet[2158]: I0317 18:37:50.905186    2158 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 18:37:50.905242 kubelet[2158]: I0317 18:37:50.905235    2158 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:37:50.905295 kubelet[2158]: I0317 18:37:50.905289    2158 kubelet.go:312] "Adding apiserver pod source"
Mar 17 18:37:50.905341 kubelet[2158]: I0317 18:37:50.905335    2158 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:37:50.909914 kubelet[2158]: I0317 18:37:50.909570    2158 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:37:50.909914 kubelet[2158]: I0317 18:37:50.909680    2158 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:37:50.910012 kubelet[2158]: I0317 18:37:50.909941    2158 server.go:1264] "Started kubelet"
Mar 17 18:37:50.916159 kubelet[2158]: I0317 18:37:50.916143    2158 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:37:50.924861 kubelet[2158]: I0317 18:37:50.924840    2158 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:37:50.925970 kubelet[2158]: I0317 18:37:50.925961    2158 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 18:37:50.926532 kubelet[2158]: I0317 18:37:50.926523    2158 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:37:50.926900 kubelet[2158]: I0317 18:37:50.926893    2158 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:37:50.927853 kubelet[2158]: I0317 18:37:50.927842    2158 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:37:50.927962 kubelet[2158]: I0317 18:37:50.927951    2158 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:37:50.929506 kubelet[2158]: E0317 18:37:50.929496    2158 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:37:50.931565 kubelet[2158]: I0317 18:37:50.931548    2158 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:37:50.934575 kubelet[2158]: I0317 18:37:50.934566    2158 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 18:37:50.935120 kubelet[2158]: I0317 18:37:50.935094    2158 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:37:50.935303 kubelet[2158]: I0317 18:37:50.935295    2158 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:37:50.953881 kubelet[2158]: I0317 18:37:50.953867    2158 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 18:37:50.953992 kubelet[2158]: I0317 18:37:50.953983    2158 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 18:37:50.954056 kubelet[2158]: I0317 18:37:50.954050    2158 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:37:50.954189 kubelet[2158]: I0317 18:37:50.954180    2158 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 18:37:50.954258 kubelet[2158]: I0317 18:37:50.954243    2158 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 18:37:50.954305 kubelet[2158]: I0317 18:37:50.954298    2158 policy_none.go:49] "None policy: Start"
Mar 17 18:37:50.954631 kubelet[2158]: I0317 18:37:50.954623    2158 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 18:37:50.954682 kubelet[2158]: I0317 18:37:50.954675    2158 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:37:50.954787 kubelet[2158]: I0317 18:37:50.954780    2158 state_mem.go:75] "Updated machine memory state"
Mar 17 18:37:50.956798 kubelet[2158]: I0317 18:37:50.956789    2158 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:37:50.956947 kubelet[2158]: I0317 18:37:50.956931    2158 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:37:50.957031 kubelet[2158]: I0317 18:37:50.957025    2158 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:37:50.962673 kubelet[2158]: I0317 18:37:50.962653    2158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:37:50.963452 kubelet[2158]: I0317 18:37:50.963386    2158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:37:50.963452 kubelet[2158]: I0317 18:37:50.963401    2158 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 18:37:50.963452 kubelet[2158]: I0317 18:37:50.963412    2158 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 18:37:50.963452 kubelet[2158]: E0317 18:37:50.963433    2158 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 17 18:37:51.027738 kubelet[2158]: I0317 18:37:51.027723    2158 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 18:37:51.032263 kubelet[2158]: I0317 18:37:51.032240    2158 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 17 18:37:51.032354 kubelet[2158]: I0317 18:37:51.032292    2158 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 18:37:51.063866 kubelet[2158]: I0317 18:37:51.063799    2158 topology_manager.go:215] "Topology Admit Handler" podUID="1d491a388749dc66d23e0cfa0fee1b89" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 18:37:51.063866 kubelet[2158]: I0317 18:37:51.063860    2158 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 18:37:51.063981 kubelet[2158]: I0317 18:37:51.063900    2158 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 18:37:51.228519 kubelet[2158]: I0317 18:37:51.228489    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d491a388749dc66d23e0cfa0fee1b89-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d491a388749dc66d23e0cfa0fee1b89\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:37:51.228519 kubelet[2158]: I0317 18:37:51.228516    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:51.228698 kubelet[2158]: I0317 18:37:51.228528    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:51.228698 kubelet[2158]: I0317 18:37:51.228537    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:51.228698 kubelet[2158]: I0317 18:37:51.228549    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d491a388749dc66d23e0cfa0fee1b89-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1d491a388749dc66d23e0cfa0fee1b89\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:37:51.228698 kubelet[2158]: I0317 18:37:51.228559    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:51.228698 kubelet[2158]: I0317 18:37:51.228568    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 18:37:51.228795 kubelet[2158]: I0317 18:37:51.228576    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 18:37:51.228795 kubelet[2158]: I0317 18:37:51.228585    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d491a388749dc66d23e0cfa0fee1b89-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1d491a388749dc66d23e0cfa0fee1b89\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 18:37:51.510544 sudo[2169]: pam_unix(sudo:session): session closed for user root
Mar 17 18:37:51.909060 kubelet[2158]: I0317 18:37:51.909030    2158 apiserver.go:52] "Watching apiserver"
Mar 17 18:37:51.926942 kubelet[2158]: I0317 18:37:51.926916    2158 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:37:51.981108 kubelet[2158]: E0317 18:37:51.981078    2158 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 17 18:37:51.996346 kubelet[2158]: I0317 18:37:51.996307    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.99629333 podStartE2EDuration="996.29333ms" podCreationTimestamp="2025-03-17 18:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:37:51.989749652 +0000 UTC m=+1.142563454" watchObservedRunningTime="2025-03-17 18:37:51.99629333 +0000 UTC m=+1.149107130"
Mar 17 18:37:52.000070 kubelet[2158]: I0317 18:37:52.000046    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.000029953 podStartE2EDuration="1.000029953s" podCreationTimestamp="2025-03-17 18:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:37:51.996552749 +0000 UTC m=+1.149366547" watchObservedRunningTime="2025-03-17 18:37:52.000029953 +0000 UTC m=+1.152843754"
Mar 17 18:37:52.004336 kubelet[2158]: I0317 18:37:52.004301    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.004287396 podStartE2EDuration="1.004287396s" podCreationTimestamp="2025-03-17 18:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:37:52.000237055 +0000 UTC m=+1.153050858" watchObservedRunningTime="2025-03-17 18:37:52.004287396 +0000 UTC m=+1.157101197"
Mar 17 18:37:53.246399 sudo[1451]: pam_unix(sudo:session): session closed for user root
Mar 17 18:37:53.255019 sshd[1448]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:53.257523 systemd-logind[1244]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:37:53.258234 systemd[1]: sshd@4-139.178.70.110:22-139.178.68.195:39668.service: Deactivated successfully.
Mar 17 18:37:53.258877 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:37:53.258998 systemd[1]: session-7.scope: Consumed 2.551s CPU time.
Mar 17 18:37:53.259459 systemd-logind[1244]: Removed session 7.
Mar 17 18:38:04.261896 kubelet[2158]: I0317 18:38:04.261874    2158 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 18:38:04.262535 env[1274]: time="2025-03-17T18:38:04.262471857Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:38:04.262809 kubelet[2158]: I0317 18:38:04.262795    2158 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
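At 18:38:04 the kubelet receives its PodCIDR (192.168.0.0/24) and, as the first line of the trio says, pushes it to the runtime through the CRI runtime config. A small Go sketch of what that range implies for per-node pod addressing, purely as an illustration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // PodCIDR assigned to this node, as logged by kubelet_network.go above.
        _, podCIDR, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := podCIDR.Mask.Size()
        fmt.Printf("pod CIDR %s: %d host bits, ~%d addresses on this node\n",
            podCIDR, bits-ones, 1<<(bits-ones))
        // Any pod IP handed out by the CNI plugin here should fall inside it.
        fmt.Println("192.168.0.42 in range:", podCIDR.Contains(net.ParseIP("192.168.0.42")))
    }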
Mar 17 18:38:05.145538 kubelet[2158]: I0317 18:38:05.145513    2158 topology_manager.go:215] "Topology Admit Handler" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" podNamespace="kube-system" podName="cilium-5kblb"
Mar 17 18:38:05.145781 kubelet[2158]: I0317 18:38:05.145770    2158 topology_manager.go:215] "Topology Admit Handler" podUID="2c134c75-2108-4369-b0c8-6bf0dd1410b8" podNamespace="kube-system" podName="kube-proxy-2dqxl"
Mar 17 18:38:05.150177 systemd[1]: Created slice kubepods-besteffort-pod2c134c75_2108_4369_b0c8_6bf0dd1410b8.slice.
Mar 17 18:38:05.155573 kubelet[2158]: W0317 18:38:05.155516    2158 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 18:38:05.155706 kubelet[2158]: E0317 18:38:05.155696    2158 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 18:38:05.163229 systemd[1]: Created slice kubepods-burstable-pod0d87d5d5_4269_4f0a_90f8_9a245a822d8e.slice.
Mar 17 18:38:05.213240 kubelet[2158]: I0317 18:38:05.213212    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-cgroup\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213405 kubelet[2158]: I0317 18:38:05.213392    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-clustermesh-secrets\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213531 kubelet[2158]: I0317 18:38:05.213508    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-run\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213611 kubelet[2158]: I0317 18:38:05.213602    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cni-path\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213701 kubelet[2158]: I0317 18:38:05.213691    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-etc-cni-netd\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213803 kubelet[2158]: I0317 18:38:05.213780    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtn6\" (UniqueName: \"kubernetes.io/projected/2c134c75-2108-4369-b0c8-6bf0dd1410b8-kube-api-access-xjtn6\") pod \"kube-proxy-2dqxl\" (UID: \"2c134c75-2108-4369-b0c8-6bf0dd1410b8\") " pod="kube-system/kube-proxy-2dqxl"
Mar 17 18:38:05.213843 kubelet[2158]: I0317 18:38:05.213814    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-net\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213843 kubelet[2158]: I0317 18:38:05.213828    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c134c75-2108-4369-b0c8-6bf0dd1410b8-lib-modules\") pod \"kube-proxy-2dqxl\" (UID: \"2c134c75-2108-4369-b0c8-6bf0dd1410b8\") " pod="kube-system/kube-proxy-2dqxl"
Mar 17 18:38:05.213843 kubelet[2158]: I0317 18:38:05.213837    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-kernel\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213904 kubelet[2158]: I0317 18:38:05.213850    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hubble-tls\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213904 kubelet[2158]: I0317 18:38:05.213871    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c134c75-2108-4369-b0c8-6bf0dd1410b8-kube-proxy\") pod \"kube-proxy-2dqxl\" (UID: \"2c134c75-2108-4369-b0c8-6bf0dd1410b8\") " pod="kube-system/kube-proxy-2dqxl"
Mar 17 18:38:05.213904 kubelet[2158]: I0317 18:38:05.213881    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c134c75-2108-4369-b0c8-6bf0dd1410b8-xtables-lock\") pod \"kube-proxy-2dqxl\" (UID: \"2c134c75-2108-4369-b0c8-6bf0dd1410b8\") " pod="kube-system/kube-proxy-2dqxl"
Mar 17 18:38:05.213904 kubelet[2158]: I0317 18:38:05.213890    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-bpf-maps\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213904 kubelet[2158]: I0317 18:38:05.213898    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-config-path\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213993 kubelet[2158]: I0317 18:38:05.213909    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcpl\" (UniqueName: \"kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-kube-api-access-lpcpl\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213993 kubelet[2158]: I0317 18:38:05.213919    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-lib-modules\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213993 kubelet[2158]: I0317 18:38:05.213936    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hostproc\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.213993 kubelet[2158]: I0317 18:38:05.213945    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-xtables-lock\") pod \"cilium-5kblb\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") " pod="kube-system/cilium-5kblb"
Mar 17 18:38:05.216023 kubelet[2158]: I0317 18:38:05.215980    2158 topology_manager.go:215] "Topology Admit Handler" podUID="069f95e3-69e0-4620-95fd-4b18629af9c3" podNamespace="kube-system" podName="cilium-operator-599987898-4h6wb"
Mar 17 18:38:05.219137 systemd[1]: Created slice kubepods-besteffort-pod069f95e3_69e0_4620_95fd_4b18629af9c3.slice.
Mar 17 18:38:05.314217 kubelet[2158]: I0317 18:38:05.314183    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/069f95e3-69e0-4620-95fd-4b18629af9c3-cilium-config-path\") pod \"cilium-operator-599987898-4h6wb\" (UID: \"069f95e3-69e0-4620-95fd-4b18629af9c3\") " pod="kube-system/cilium-operator-599987898-4h6wb"
Mar 17 18:38:05.314492 kubelet[2158]: I0317 18:38:05.314244    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xthw\" (UniqueName: \"kubernetes.io/projected/069f95e3-69e0-4620-95fd-4b18629af9c3-kube-api-access-5xthw\") pod \"cilium-operator-599987898-4h6wb\" (UID: \"069f95e3-69e0-4620-95fd-4b18629af9c3\") " pod="kube-system/cilium-operator-599987898-4h6wb"
Mar 17 18:38:05.465983 env[1274]: time="2025-03-17T18:38:05.465642665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5kblb,Uid:0d87d5d5-4269-4f0a-90f8-9a245a822d8e,Namespace:kube-system,Attempt:0,}"
Mar 17 18:38:05.512716 env[1274]: time="2025-03-17T18:38:05.512677514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:38:05.512818 env[1274]: time="2025-03-17T18:38:05.512719368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:38:05.512818 env[1274]: time="2025-03-17T18:38:05.512736836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:38:05.512864 env[1274]: time="2025-03-17T18:38:05.512811133Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a pid=2242 runtime=io.containerd.runc.v2
Mar 17 18:38:05.520308 systemd[1]: Started cri-containerd-92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a.scope.
Mar 17 18:38:05.524469 env[1274]: time="2025-03-17T18:38:05.521633367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-4h6wb,Uid:069f95e3-69e0-4620-95fd-4b18629af9c3,Namespace:kube-system,Attempt:0,}"
Mar 17 18:38:05.541915 env[1274]: time="2025-03-17T18:38:05.541888944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5kblb,Uid:0d87d5d5-4269-4f0a-90f8-9a245a822d8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\""
Mar 17 18:38:05.543348 env[1274]: time="2025-03-17T18:38:05.543331533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
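containerd now pulls the digest-pinned Cilium image named in the PullImage line. The kubelet drives this through the CRI image service, not the code below; the following is only a hedged sketch of the equivalent pull with containerd's Go client against the k8s.io namespace used on this host:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path and namespace match a stock containerd + CRI setup.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        ref := "quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
        img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }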
Mar 17 18:38:05.554717 env[1274]: time="2025-03-17T18:38:05.554665747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:38:05.554846 env[1274]: time="2025-03-17T18:38:05.554702613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:38:05.554846 env[1274]: time="2025-03-17T18:38:05.554711621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:38:05.554943 env[1274]: time="2025-03-17T18:38:05.554859933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2 pid=2284 runtime=io.containerd.runc.v2
Mar 17 18:38:05.562237 systemd[1]: Started cri-containerd-3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2.scope.
Mar 17 18:38:05.595629 env[1274]: time="2025-03-17T18:38:05.595604378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-4h6wb,Uid:069f95e3-69e0-4620-95fd-4b18629af9c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2\""
Mar 17 18:38:06.358782 env[1274]: time="2025-03-17T18:38:06.358757873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dqxl,Uid:2c134c75-2108-4369-b0c8-6bf0dd1410b8,Namespace:kube-system,Attempt:0,}"
Mar 17 18:38:06.389934 env[1274]: time="2025-03-17T18:38:06.389896153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:38:06.390036 env[1274]: time="2025-03-17T18:38:06.389921269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:38:06.390036 env[1274]: time="2025-03-17T18:38:06.389931727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:38:06.390115 env[1274]: time="2025-03-17T18:38:06.390046470Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d396be75dab7917e98804db5152a2fccce692ed0c1818ec9890d4e51502e8778 pid=2326 runtime=io.containerd.runc.v2
Mar 17 18:38:06.398858 systemd[1]: Started cri-containerd-d396be75dab7917e98804db5152a2fccce692ed0c1818ec9890d4e51502e8778.scope.
Mar 17 18:38:06.419304 env[1274]: time="2025-03-17T18:38:06.419281475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2dqxl,Uid:2c134c75-2108-4369-b0c8-6bf0dd1410b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d396be75dab7917e98804db5152a2fccce692ed0c1818ec9890d4e51502e8778\""
Mar 17 18:38:06.421671 env[1274]: time="2025-03-17T18:38:06.421653973Z" level=info msg="CreateContainer within sandbox \"d396be75dab7917e98804db5152a2fccce692ed0c1818ec9890d4e51502e8778\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:38:06.432543 env[1274]: time="2025-03-17T18:38:06.432520088Z" level=info msg="CreateContainer within sandbox \"d396be75dab7917e98804db5152a2fccce692ed0c1818ec9890d4e51502e8778\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7d4a68ee45448402ea6a8656184ba3d1cf72dc20f42b48db504407e9f835e12\""
Mar 17 18:38:06.433868 env[1274]: time="2025-03-17T18:38:06.433852215Z" level=info msg="StartContainer for \"c7d4a68ee45448402ea6a8656184ba3d1cf72dc20f42b48db504407e9f835e12\""
Mar 17 18:38:06.450146 systemd[1]: Started cri-containerd-c7d4a68ee45448402ea6a8656184ba3d1cf72dc20f42b48db504407e9f835e12.scope.
Mar 17 18:38:06.470452 env[1274]: time="2025-03-17T18:38:06.470417961Z" level=info msg="StartContainer for \"c7d4a68ee45448402ea6a8656184ba3d1cf72dc20f42b48db504407e9f835e12\" returns successfully"
Mar 17 18:38:07.009504 kubelet[2158]: I0317 18:38:07.009338    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2dqxl" podStartSLOduration=2.009325364 podStartE2EDuration="2.009325364s" podCreationTimestamp="2025-03-17 18:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:38:07.008851798 +0000 UTC m=+16.161665605" watchObservedRunningTime="2025-03-17 18:38:07.009325364 +0000 UTC m=+16.162139174"
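The pod_startup_latency_tracker line reports kube-proxy-2dqxl running 2.009325364s after its creation timestamp; with no image pull recorded (firstStartedPulling is the zero time), the SLO duration equals the end-to-end duration. The figure can be checked directly from the two wall-clock stamps quoted in the line; this is a worked check, not kubelet code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matches Go's default time.Time string form used in the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-03-17 18:38:05 +0000 UTC")            // podCreationTimestamp
        running, _ := time.Parse(layout, "2025-03-17 18:38:07.009325364 +0000 UTC")  // watchObservedRunningTime
        // Prints 2.009325364s, matching podStartE2EDuration above.
        fmt.Println(running.Sub(created))
    }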
Mar 17 18:38:12.984096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919259885.mount: Deactivated successfully.
Mar 17 18:38:15.876957 env[1274]: time="2025-03-17T18:38:15.876873510Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:38:15.878170 env[1274]: time="2025-03-17T18:38:15.878153322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:38:15.879130 env[1274]: time="2025-03-17T18:38:15.879109299Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:38:15.879543 env[1274]: time="2025-03-17T18:38:15.879525995Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:38:15.881345 env[1274]: time="2025-03-17T18:38:15.881329984Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:38:15.882855 env[1274]: time="2025-03-17T18:38:15.882770689Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:38:15.892895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170151501.mount: Deactivated successfully.
Mar 17 18:38:15.896566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231056718.mount: Deactivated successfully.
Mar 17 18:38:15.909472 env[1274]: time="2025-03-17T18:38:15.909432989Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\""
Mar 17 18:38:15.910600 env[1274]: time="2025-03-17T18:38:15.909801052Z" level=info msg="StartContainer for \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\""
Mar 17 18:38:15.924703 systemd[1]: Started cri-containerd-a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031.scope.
Mar 17 18:38:15.943739 env[1274]: time="2025-03-17T18:38:15.943704797Z" level=info msg="StartContainer for \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\" returns successfully"
Mar 17 18:38:15.954408 systemd[1]: cri-containerd-a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031.scope: Deactivated successfully.
Mar 17 18:38:16.506207 env[1274]: time="2025-03-17T18:38:16.506151613Z" level=info msg="shim disconnected" id=a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031
Mar 17 18:38:16.506207 env[1274]: time="2025-03-17T18:38:16.506205496Z" level=warning msg="cleaning up after shim disconnected" id=a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031 namespace=k8s.io
Mar 17 18:38:16.512589 env[1274]: time="2025-03-17T18:38:16.506214325Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:16.512589 env[1274]: time="2025-03-17T18:38:16.510877880Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:16.891401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031-rootfs.mount: Deactivated successfully.
Mar 17 18:38:17.025113 env[1274]: time="2025-03-17T18:38:17.022850640Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:38:17.038001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484069411.mount: Deactivated successfully.
Mar 17 18:38:17.046935 env[1274]: time="2025-03-17T18:38:17.044148805Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\""
Mar 17 18:38:17.046935 env[1274]: time="2025-03-17T18:38:17.044500361Z" level=info msg="StartContainer for \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\""
Mar 17 18:38:17.055472 systemd[1]: Started cri-containerd-08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b.scope.
Mar 17 18:38:17.083807 env[1274]: time="2025-03-17T18:38:17.083769070Z" level=info msg="StartContainer for \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\" returns successfully"
Mar 17 18:38:17.089276 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:38:17.089461 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:38:17.089691 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:38:17.091228 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:38:17.095436 systemd[1]: cri-containerd-08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b.scope: Deactivated successfully.
Mar 17 18:38:17.131199 env[1274]: time="2025-03-17T18:38:17.131164316Z" level=info msg="shim disconnected" id=08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b
Mar 17 18:38:17.131340 env[1274]: time="2025-03-17T18:38:17.131202594Z" level=warning msg="cleaning up after shim disconnected" id=08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b namespace=k8s.io
Mar 17 18:38:17.131340 env[1274]: time="2025-03-17T18:38:17.131212151Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:17.136454 env[1274]: time="2025-03-17T18:38:17.136428408Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2625 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:17.138387 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:38:18.145535 env[1274]: time="2025-03-17T18:38:18.145507613Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:38:18.379660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262691454.mount: Deactivated successfully.
Mar 17 18:38:18.382986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236698405.mount: Deactivated successfully.
Mar 17 18:38:18.393922 env[1274]: time="2025-03-17T18:38:18.393884266Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\""
Mar 17 18:38:18.396658 env[1274]: time="2025-03-17T18:38:18.396296592Z" level=info msg="StartContainer for \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\""
Mar 17 18:38:18.400610 env[1274]: time="2025-03-17T18:38:18.400572549Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:38:18.401562 env[1274]: time="2025-03-17T18:38:18.401545103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:38:18.402622 env[1274]: time="2025-03-17T18:38:18.402607292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:38:18.402863 env[1274]: time="2025-03-17T18:38:18.402845740Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:38:18.404875 env[1274]: time="2025-03-17T18:38:18.404847382Z" level=info msg="CreateContainer within sandbox \"3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:38:18.413859 systemd[1]: Started cri-containerd-43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8.scope.
Mar 17 18:38:18.440786 env[1274]: time="2025-03-17T18:38:18.440752773Z" level=info msg="CreateContainer within sandbox \"3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\""
Mar 17 18:38:18.441446 env[1274]: time="2025-03-17T18:38:18.441430258Z" level=info msg="StartContainer for \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\""
Mar 17 18:38:18.443463 env[1274]: time="2025-03-17T18:38:18.443436354Z" level=info msg="StartContainer for \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\" returns successfully"
Mar 17 18:38:18.455721 systemd[1]: Started cri-containerd-a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c.scope.
Mar 17 18:38:18.484907 env[1274]: time="2025-03-17T18:38:18.484864977Z" level=info msg="StartContainer for \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\" returns successfully"
Mar 17 18:38:18.504932 systemd[1]: cri-containerd-43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8.scope: Deactivated successfully.
Mar 17 18:38:18.519921 env[1274]: time="2025-03-17T18:38:18.519887315Z" level=info msg="shim disconnected" id=43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8
Mar 17 18:38:18.519921 env[1274]: time="2025-03-17T18:38:18.519917568Z" level=warning msg="cleaning up after shim disconnected" id=43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8 namespace=k8s.io
Mar 17 18:38:18.520524 env[1274]: time="2025-03-17T18:38:18.519930752Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:18.526792 env[1274]: time="2025-03-17T18:38:18.526764684Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2722 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:19.113453 env[1274]: time="2025-03-17T18:38:19.113424390Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:38:19.123081 env[1274]: time="2025-03-17T18:38:19.123050136Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\""
Mar 17 18:38:19.123394 env[1274]: time="2025-03-17T18:38:19.123379497Z" level=info msg="StartContainer for \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\""
Mar 17 18:38:19.146404 systemd[1]: Started cri-containerd-860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8.scope.
Mar 17 18:38:19.198123 env[1274]: time="2025-03-17T18:38:19.198068898Z" level=info msg="StartContainer for \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\" returns successfully"
Mar 17 18:38:19.209611 systemd[1]: cri-containerd-860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8.scope: Deactivated successfully.
Mar 17 18:38:19.229693 env[1274]: time="2025-03-17T18:38:19.229666201Z" level=info msg="shim disconnected" id=860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8
Mar 17 18:38:19.229860 env[1274]: time="2025-03-17T18:38:19.229849337Z" level=warning msg="cleaning up after shim disconnected" id=860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8 namespace=k8s.io
Mar 17 18:38:19.229916 env[1274]: time="2025-03-17T18:38:19.229906994Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:19.235306 env[1274]: time="2025-03-17T18:38:19.235280081Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2784 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:19.891662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8-rootfs.mount: Deactivated successfully.
Mar 17 18:38:20.118361 env[1274]: time="2025-03-17T18:38:20.118334520Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:38:20.125878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309470396.mount: Deactivated successfully.
Mar 17 18:38:20.129269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375316052.mount: Deactivated successfully.
Mar 17 18:38:20.130951 env[1274]: time="2025-03-17T18:38:20.130924211Z" level=info msg="CreateContainer within sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\""
Mar 17 18:38:20.131391 env[1274]: time="2025-03-17T18:38:20.131376319Z" level=info msg="StartContainer for \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\""
Mar 17 18:38:20.141995 kubelet[2158]: I0317 18:38:20.141920    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-4h6wb" podStartSLOduration=2.333609231 podStartE2EDuration="15.140465663s" podCreationTimestamp="2025-03-17 18:38:05 +0000 UTC" firstStartedPulling="2025-03-17 18:38:05.596467048 +0000 UTC m=+14.749280847" lastFinishedPulling="2025-03-17 18:38:18.403323482 +0000 UTC m=+27.556137279" observedRunningTime="2025-03-17 18:38:19.213363586 +0000 UTC m=+28.366177398" watchObservedRunningTime="2025-03-17 18:38:20.140465663 +0000 UTC m=+29.293279468"
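For cilium-operator-599987898-4h6wb the tracker reports a much smaller SLO duration (2.333609231s) than end-to-end duration (15.140465663s); the difference lines up with the image pull window between firstStartedPulling and lastFinishedPulling quoted in the same line. A worked check of that subtraction using the monotonic offsets, under that reading of the fields:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (seconds since kubelet start) quoted in the log line.
        const (
            e2e            = 15.140465663 // podStartE2EDuration
            firstPullStart = 14.749280847 // firstStartedPulling m=+...
            lastPullFinish = 27.556137279 // lastFinishedPulling m=+...
        )
        pull := lastPullFinish - firstPullStart
        fmt.Printf("image pull window: %.9fs\n", pull)      // ~12.806856432s
        fmt.Printf("startup minus pull: %.9fs\n", e2e-pull) // ~2.333609231s, as logged
    }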
Mar 17 18:38:20.155697 systemd[1]: Started cri-containerd-64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b.scope.
Mar 17 18:38:20.180901 env[1274]: time="2025-03-17T18:38:20.180876169Z" level=info msg="StartContainer for \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\" returns successfully"
Mar 17 18:38:20.364474 kubelet[2158]: I0317 18:38:20.364410    2158 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 18:38:20.391869 kubelet[2158]: I0317 18:38:20.391848    2158 topology_manager.go:215] "Topology Admit Handler" podUID="36e7c497-97f7-452c-848e-21a7469722b4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4l7rl"
Mar 17 18:38:20.402033 kubelet[2158]: I0317 18:38:20.401970    2158 topology_manager.go:215] "Topology Admit Handler" podUID="849c9752-618d-4d5f-8760-014f37d74fed" podNamespace="kube-system" podName="coredns-7db6d8ff4d-prtgj"
Mar 17 18:38:20.413187 systemd[1]: Created slice kubepods-burstable-pod36e7c497_97f7_452c_848e_21a7469722b4.slice.
Mar 17 18:38:20.416266 systemd[1]: Created slice kubepods-burstable-pod849c9752_618d_4d5f_8760_014f37d74fed.slice.
Mar 17 18:38:20.506729 kubelet[2158]: I0317 18:38:20.506707    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79p7f\" (UniqueName: \"kubernetes.io/projected/849c9752-618d-4d5f-8760-014f37d74fed-kube-api-access-79p7f\") pod \"coredns-7db6d8ff4d-prtgj\" (UID: \"849c9752-618d-4d5f-8760-014f37d74fed\") " pod="kube-system/coredns-7db6d8ff4d-prtgj"
Mar 17 18:38:20.509261 kubelet[2158]: I0317 18:38:20.509244    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36e7c497-97f7-452c-848e-21a7469722b4-config-volume\") pod \"coredns-7db6d8ff4d-4l7rl\" (UID: \"36e7c497-97f7-452c-848e-21a7469722b4\") " pod="kube-system/coredns-7db6d8ff4d-4l7rl"
Mar 17 18:38:20.509313 kubelet[2158]: I0317 18:38:20.509279    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/849c9752-618d-4d5f-8760-014f37d74fed-config-volume\") pod \"coredns-7db6d8ff4d-prtgj\" (UID: \"849c9752-618d-4d5f-8760-014f37d74fed\") " pod="kube-system/coredns-7db6d8ff4d-prtgj"
Mar 17 18:38:20.509313 kubelet[2158]: I0317 18:38:20.509292    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2x6q\" (UniqueName: \"kubernetes.io/projected/36e7c497-97f7-452c-848e-21a7469722b4-kube-api-access-m2x6q\") pod \"coredns-7db6d8ff4d-4l7rl\" (UID: \"36e7c497-97f7-452c-848e-21a7469722b4\") " pod="kube-system/coredns-7db6d8ff4d-4l7rl"
Mar 17 18:38:20.640110 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Mar 17 18:38:20.718619 env[1274]: time="2025-03-17T18:38:20.718328282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-prtgj,Uid:849c9752-618d-4d5f-8760-014f37d74fed,Namespace:kube-system,Attempt:0,}"
Mar 17 18:38:20.720669 env[1274]: time="2025-03-17T18:38:20.720650033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4l7rl,Uid:36e7c497-97f7-452c-848e-21a7469722b4,Namespace:kube-system,Attempt:0,}"
Mar 17 18:38:20.906109 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
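The two Spectre V2 warnings report that unprivileged eBPF is currently enabled on this kernel while Cilium's BPF datapath is being set up; the state they refer to is exposed through the kernel.unprivileged_bpf_disabled sysctl. A small sketch that reads it, as an illustration only (0 means unprivileged BPF is allowed, a nonzero value means it is disabled):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Sysctl behind the "Unprivileged eBPF is enabled" warning above.
        b, err := os.ReadFile("/proc/sys/kernel/unprivileged_bpf_disabled")
        if err != nil {
            panic(err)
        }
        if strings.TrimSpace(string(b)) == "0" {
            fmt.Println("unprivileged eBPF enabled (the state the kernel warns about)")
        } else {
            fmt.Println("unprivileged eBPF disabled")
        }
    }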
Mar 17 18:38:22.531955 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:38:22.532020 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:38:22.531574 systemd-networkd[1058]: cilium_host: Link UP
Mar 17 18:38:22.531671 systemd-networkd[1058]: cilium_net: Link UP
Mar 17 18:38:22.531767 systemd-networkd[1058]: cilium_net: Gained carrier
Mar 17 18:38:22.531855 systemd-networkd[1058]: cilium_host: Gained carrier
Mar 17 18:38:22.642947 systemd-networkd[1058]: cilium_vxlan: Link UP
Mar 17 18:38:22.642951 systemd-networkd[1058]: cilium_vxlan: Gained carrier
Mar 17 18:38:23.227227 systemd-networkd[1058]: cilium_net: Gained IPv6LL
Mar 17 18:38:23.355269 systemd-networkd[1058]: cilium_host: Gained IPv6LL
Mar 17 18:38:23.599109 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:38:23.867204 systemd-networkd[1058]: cilium_vxlan: Gained IPv6LL
Mar 17 18:38:24.063265 systemd-networkd[1058]: lxc_health: Link UP
Mar 17 18:38:24.087646 systemd-networkd[1058]: lxc_health: Gained carrier
Mar 17 18:38:24.088207 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:38:24.285370 systemd-networkd[1058]: lxcc7bbc0c6edff: Link UP
Mar 17 18:38:24.295195 kernel: eth0: renamed from tmp88212
Mar 17 18:38:24.299486 systemd-networkd[1058]: lxcc7bbc0c6edff: Gained carrier
Mar 17 18:38:24.300102 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc7bbc0c6edff: link becomes ready
Mar 17 18:38:24.300477 systemd-networkd[1058]: lxc8985a10aa3d8: Link UP
Mar 17 18:38:24.306106 kernel: eth0: renamed from tmp31555
Mar 17 18:38:24.308479 systemd-networkd[1058]: lxc8985a10aa3d8: Gained carrier
Mar 17 18:38:24.309104 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8985a10aa3d8: link becomes ready
Mar 17 18:38:25.147223 systemd-networkd[1058]: lxc_health: Gained IPv6LL
Mar 17 18:38:25.339208 systemd-networkd[1058]: lxcc7bbc0c6edff: Gained IPv6LL
Mar 17 18:38:25.483005 kubelet[2158]: I0317 18:38:25.482776    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5kblb" podStartSLOduration=10.144968952 podStartE2EDuration="20.482761716s" podCreationTimestamp="2025-03-17 18:38:05 +0000 UTC" firstStartedPulling="2025-03-17 18:38:05.542845336 +0000 UTC m=+14.695659133" lastFinishedPulling="2025-03-17 18:38:15.880638099 +0000 UTC m=+25.033451897" observedRunningTime="2025-03-17 18:38:21.158871691 +0000 UTC m=+30.311685497" watchObservedRunningTime="2025-03-17 18:38:25.482761716 +0000 UTC m=+34.635575516"
Mar 17 18:38:26.043200 systemd-networkd[1058]: lxc8985a10aa3d8: Gained IPv6LL
Mar 17 18:38:26.944220 env[1274]: time="2025-03-17T18:38:26.944160499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:38:26.944549 env[1274]: time="2025-03-17T18:38:26.944532927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:38:26.944611 env[1274]: time="2025-03-17T18:38:26.944598100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:38:26.944758 env[1274]: time="2025-03-17T18:38:26.944740853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/882120daf1da5aa89ef0ce57b602ca7cd8ceb1e4f8f36c0a1c6c28a83818afc2 pid=3341 runtime=io.containerd.runc.v2
Mar 17 18:38:26.956011 env[1274]: time="2025-03-17T18:38:26.955970994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:38:26.956149 env[1274]: time="2025-03-17T18:38:26.956133956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:38:26.956219 env[1274]: time="2025-03-17T18:38:26.956206215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:38:26.956382 env[1274]: time="2025-03-17T18:38:26.956366737Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31555e9f7f53cba62eab642ca8164dde6ee2e79e17c1bd0a60abad6d4254db76 pid=3359 runtime=io.containerd.runc.v2
Mar 17 18:38:26.973973 systemd[1]: Started cri-containerd-31555e9f7f53cba62eab642ca8164dde6ee2e79e17c1bd0a60abad6d4254db76.scope.
Mar 17 18:38:26.978864 systemd[1]: run-containerd-runc-k8s.io-31555e9f7f53cba62eab642ca8164dde6ee2e79e17c1bd0a60abad6d4254db76-runc.AW4dJm.mount: Deactivated successfully.
Mar 17 18:38:26.986994 systemd[1]: Started cri-containerd-882120daf1da5aa89ef0ce57b602ca7cd8ceb1e4f8f36c0a1c6c28a83818afc2.scope.
Mar 17 18:38:27.006363 systemd-resolved[1207]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:38:27.009481 systemd-resolved[1207]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:38:27.031496 env[1274]: time="2025-03-17T18:38:27.031467963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-prtgj,Uid:849c9752-618d-4d5f-8760-014f37d74fed,Namespace:kube-system,Attempt:0,} returns sandbox id \"882120daf1da5aa89ef0ce57b602ca7cd8ceb1e4f8f36c0a1c6c28a83818afc2\""
Mar 17 18:38:27.033139 env[1274]: time="2025-03-17T18:38:27.033119837Z" level=info msg="CreateContainer within sandbox \"882120daf1da5aa89ef0ce57b602ca7cd8ceb1e4f8f36c0a1c6c28a83818afc2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:38:27.044134 env[1274]: time="2025-03-17T18:38:27.044078803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4l7rl,Uid:36e7c497-97f7-452c-848e-21a7469722b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"31555e9f7f53cba62eab642ca8164dde6ee2e79e17c1bd0a60abad6d4254db76\""
Mar 17 18:38:27.045732 env[1274]: time="2025-03-17T18:38:27.045707529Z" level=info msg="CreateContainer within sandbox \"31555e9f7f53cba62eab642ca8164dde6ee2e79e17c1bd0a60abad6d4254db76\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:38:27.316784 env[1274]: time="2025-03-17T18:38:27.316163473Z" level=info msg="CreateContainer within sandbox \"31555e9f7f53cba62eab642ca8164dde6ee2e79e17c1bd0a60abad6d4254db76\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"951af252239129406716257ded26cd35b99217211478bbc1008268203d979c98\""
Mar 17 18:38:27.317451 env[1274]: time="2025-03-17T18:38:27.317433959Z" level=info msg="StartContainer for \"951af252239129406716257ded26cd35b99217211478bbc1008268203d979c98\""
Mar 17 18:38:27.318252 env[1274]: time="2025-03-17T18:38:27.316293912Z" level=info msg="CreateContainer within sandbox \"882120daf1da5aa89ef0ce57b602ca7cd8ceb1e4f8f36c0a1c6c28a83818afc2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1129dc65cff7f79e9967a8de72db2d5a9420d382d7c83ce6ee3227b8b5948591\""
Mar 17 18:38:27.318824 env[1274]: time="2025-03-17T18:38:27.318804792Z" level=info msg="StartContainer for \"1129dc65cff7f79e9967a8de72db2d5a9420d382d7c83ce6ee3227b8b5948591\""
Mar 17 18:38:27.332207 systemd[1]: Started cri-containerd-951af252239129406716257ded26cd35b99217211478bbc1008268203d979c98.scope.
Mar 17 18:38:27.345684 systemd[1]: Started cri-containerd-1129dc65cff7f79e9967a8de72db2d5a9420d382d7c83ce6ee3227b8b5948591.scope.
Mar 17 18:38:27.359235 env[1274]: time="2025-03-17T18:38:27.359192244Z" level=info msg="StartContainer for \"951af252239129406716257ded26cd35b99217211478bbc1008268203d979c98\" returns successfully"
Mar 17 18:38:27.382027 env[1274]: time="2025-03-17T18:38:27.381997307Z" level=info msg="StartContainer for \"1129dc65cff7f79e9967a8de72db2d5a9420d382d7c83ce6ee3227b8b5948591\" returns successfully"
Mar 17 18:38:27.947922 systemd[1]: run-containerd-runc-k8s.io-882120daf1da5aa89ef0ce57b602ca7cd8ceb1e4f8f36c0a1c6c28a83818afc2-runc.eLOppc.mount: Deactivated successfully.
Mar 17 18:38:28.155899 kubelet[2158]: I0317 18:38:28.155852    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4l7rl" podStartSLOduration=23.155836842 podStartE2EDuration="23.155836842s" podCreationTimestamp="2025-03-17 18:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:38:28.148022173 +0000 UTC m=+37.300835981" watchObservedRunningTime="2025-03-17 18:38:28.155836842 +0000 UTC m=+37.308650642"
Mar 17 18:38:28.156271 kubelet[2158]: I0317 18:38:28.156247    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-prtgj" podStartSLOduration=23.156240428 podStartE2EDuration="23.156240428s" podCreationTimestamp="2025-03-17 18:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:38:28.155548249 +0000 UTC m=+37.308362055" watchObservedRunningTime="2025-03-17 18:38:28.156240428 +0000 UTC m=+37.309054229"
Mar 17 18:39:19.862980 systemd[1]: Started sshd@5-139.178.70.110:22-139.178.68.195:37172.service.
Mar 17 18:39:19.911576 sshd[3511]: Accepted publickey for core from 139.178.68.195 port 37172 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:19.912921 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:19.916640 systemd[1]: Started session-8.scope.
Mar 17 18:39:19.917141 systemd-logind[1244]: New session 8 of user core.
Mar 17 18:39:20.111658 sshd[3511]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:20.113387 systemd[1]: sshd@5-139.178.70.110:22-139.178.68.195:37172.service: Deactivated successfully.
Mar 17 18:39:20.113864 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:39:20.114548 systemd-logind[1244]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:39:20.115216 systemd-logind[1244]: Removed session 8.
Mar 17 18:39:25.115551 systemd[1]: Started sshd@6-139.178.70.110:22-139.178.68.195:37182.service.
Mar 17 18:39:25.151236 sshd[3523]: Accepted publickey for core from 139.178.68.195 port 37182 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:25.152009 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:25.154558 systemd-logind[1244]: New session 9 of user core.
Mar 17 18:39:25.155110 systemd[1]: Started session-9.scope.
Mar 17 18:39:25.258740 sshd[3523]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:25.260458 systemd-logind[1244]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:39:25.260556 systemd[1]: sshd@6-139.178.70.110:22-139.178.68.195:37182.service: Deactivated successfully.
Mar 17 18:39:25.260970 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:39:25.261454 systemd-logind[1244]: Removed session 9.
Mar 17 18:39:30.263747 systemd[1]: Started sshd@7-139.178.70.110:22-139.178.68.195:43096.service.
Mar 17 18:39:30.716988 sshd[3536]: Accepted publickey for core from 139.178.68.195 port 43096 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:30.719343 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:30.722735 systemd[1]: Started session-10.scope.
Mar 17 18:39:30.722917 systemd-logind[1244]: New session 10 of user core.
Mar 17 18:39:30.899014 sshd[3536]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:30.900666 systemd[1]: sshd@7-139.178.70.110:22-139.178.68.195:43096.service: Deactivated successfully.
Mar 17 18:39:30.901118 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:39:30.901636 systemd-logind[1244]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:39:30.902078 systemd-logind[1244]: Removed session 10.
Mar 17 18:39:35.904227 systemd[1]: Started sshd@8-139.178.70.110:22-139.178.68.195:53228.service.
Mar 17 18:39:36.005111 sshd[3548]: Accepted publickey for core from 139.178.68.195 port 53228 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:36.006409 sshd[3548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:36.010039 systemd[1]: Started session-11.scope.
Mar 17 18:39:36.010285 systemd-logind[1244]: New session 11 of user core.
Mar 17 18:39:36.145781 sshd[3548]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:36.148554 systemd[1]: Started sshd@9-139.178.70.110:22-139.178.68.195:53234.service.
Mar 17 18:39:36.152872 systemd-logind[1244]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:39:36.153901 systemd[1]: sshd@8-139.178.70.110:22-139.178.68.195:53228.service: Deactivated successfully.
Mar 17 18:39:36.154360 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:39:36.155591 systemd-logind[1244]: Removed session 11.
Mar 17 18:39:36.182033 sshd[3559]: Accepted publickey for core from 139.178.68.195 port 53234 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:36.182878 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:36.185992 systemd[1]: Started session-12.scope.
Mar 17 18:39:36.186901 systemd-logind[1244]: New session 12 of user core.
Mar 17 18:39:36.340606 systemd[1]: Started sshd@10-139.178.70.110:22-139.178.68.195:53242.service.
Mar 17 18:39:36.342770 sshd[3559]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:36.345236 systemd[1]: sshd@9-139.178.70.110:22-139.178.68.195:53234.service: Deactivated successfully.
Mar 17 18:39:36.345469 systemd-logind[1244]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:39:36.346523 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:39:36.347269 systemd-logind[1244]: Removed session 12.
Mar 17 18:39:36.378665 sshd[3569]: Accepted publickey for core from 139.178.68.195 port 53242 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:36.379527 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:36.382556 systemd[1]: Started session-13.scope.
Mar 17 18:39:36.382788 systemd-logind[1244]: New session 13 of user core.
Mar 17 18:39:36.561669 sshd[3569]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:36.563747 systemd[1]: sshd@10-139.178.70.110:22-139.178.68.195:53242.service: Deactivated successfully.
Mar 17 18:39:36.564173 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:39:36.564208 systemd-logind[1244]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:39:36.564951 systemd-logind[1244]: Removed session 13.
Mar 17 18:39:41.565619 systemd[1]: Started sshd@11-139.178.70.110:22-139.178.68.195:53250.service.
Mar 17 18:39:41.593337 sshd[3584]: Accepted publickey for core from 139.178.68.195 port 53250 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:41.594240 sshd[3584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:41.597849 systemd[1]: Started session-14.scope.
Mar 17 18:39:41.598835 systemd-logind[1244]: New session 14 of user core.
Mar 17 18:39:41.698374 sshd[3584]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:41.700056 systemd[1]: sshd@11-139.178.70.110:22-139.178.68.195:53250.service: Deactivated successfully.
Mar 17 18:39:41.700582 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:39:41.701147 systemd-logind[1244]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:39:41.701632 systemd-logind[1244]: Removed session 14.
Mar 17 18:39:46.703602 systemd[1]: Started sshd@12-139.178.70.110:22-139.178.68.195:47690.service.
Mar 17 18:39:46.735885 sshd[3597]: Accepted publickey for core from 139.178.68.195 port 47690 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:46.736804 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:46.740102 systemd[1]: Started session-15.scope.
Mar 17 18:39:46.740954 systemd-logind[1244]: New session 15 of user core.
Mar 17 18:39:46.833059 sshd[3597]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:46.836931 systemd[1]: Started sshd@13-139.178.70.110:22-139.178.68.195:47700.service.
Mar 17 18:39:46.839148 systemd-logind[1244]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:39:46.839946 systemd[1]: sshd@12-139.178.70.110:22-139.178.68.195:47690.service: Deactivated successfully.
Mar 17 18:39:46.840390 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:39:46.841304 systemd-logind[1244]: Removed session 15.
Mar 17 18:39:46.865912 sshd[3608]: Accepted publickey for core from 139.178.68.195 port 47700 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:46.867198 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:46.870517 systemd[1]: Started session-16.scope.
Mar 17 18:39:46.871128 systemd-logind[1244]: New session 16 of user core.
Mar 17 18:39:47.483148 systemd[1]: Started sshd@14-139.178.70.110:22-139.178.68.195:47702.service.
Mar 17 18:39:47.483761 sshd[3608]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:47.485453 systemd-logind[1244]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:39:47.486339 systemd[1]: sshd@13-139.178.70.110:22-139.178.68.195:47700.service: Deactivated successfully.
Mar 17 18:39:47.486849 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:39:47.487795 systemd-logind[1244]: Removed session 16.
Mar 17 18:39:47.610890 sshd[3618]: Accepted publickey for core from 139.178.68.195 port 47702 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:47.611938 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:47.615765 systemd[1]: Started session-17.scope.
Mar 17 18:39:47.616057 systemd-logind[1244]: New session 17 of user core.
Mar 17 18:39:49.029993 sshd[3618]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:49.032100 systemd[1]: Started sshd@15-139.178.70.110:22-139.178.68.195:47716.service.
Mar 17 18:39:49.149210 systemd[1]: sshd@14-139.178.70.110:22-139.178.68.195:47702.service: Deactivated successfully.
Mar 17 18:39:49.149764 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:39:49.150532 systemd-logind[1244]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:39:49.151067 systemd-logind[1244]: Removed session 17.
Mar 17 18:39:49.223544 sshd[3645]: Accepted publickey for core from 139.178.68.195 port 47716 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:49.225524 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:49.229223 systemd[1]: Started session-18.scope.
Mar 17 18:39:49.229506 systemd-logind[1244]: New session 18 of user core.
Mar 17 18:39:49.718040 sshd[3645]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:49.721183 systemd[1]: Started sshd@16-139.178.70.110:22-139.178.68.195:47722.service.
Mar 17 18:39:49.723343 systemd[1]: sshd@15-139.178.70.110:22-139.178.68.195:47716.service: Deactivated successfully.
Mar 17 18:39:49.723865 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:39:49.724774 systemd-logind[1244]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:39:49.726201 systemd-logind[1244]: Removed session 18.
Mar 17 18:39:49.752511 sshd[3656]: Accepted publickey for core from 139.178.68.195 port 47722 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:49.754071 sshd[3656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:49.757960 systemd[1]: Started session-19.scope.
Mar 17 18:39:49.758557 systemd-logind[1244]: New session 19 of user core.
Mar 17 18:39:49.875381 sshd[3656]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:49.877277 systemd[1]: sshd@16-139.178.70.110:22-139.178.68.195:47722.service: Deactivated successfully.
Mar 17 18:39:49.877767 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:39:49.878621 systemd-logind[1244]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:39:49.879210 systemd-logind[1244]: Removed session 19.
Mar 17 18:39:54.878252 systemd[1]: Started sshd@17-139.178.70.110:22-139.178.68.195:47724.service.
Mar 17 18:39:54.921802 sshd[3672]: Accepted publickey for core from 139.178.68.195 port 47724 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:39:54.922616 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:39:54.926852 systemd[1]: Started session-20.scope.
Mar 17 18:39:54.927151 systemd-logind[1244]: New session 20 of user core.
Mar 17 18:39:55.021240 sshd[3672]: pam_unix(sshd:session): session closed for user core
Mar 17 18:39:55.022826 systemd[1]: sshd@17-139.178.70.110:22-139.178.68.195:47724.service: Deactivated successfully.
Mar 17 18:39:55.023355 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:39:55.023904 systemd-logind[1244]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:39:55.024357 systemd-logind[1244]: Removed session 20.
Mar 17 18:40:00.024731 systemd[1]: Started sshd@18-139.178.70.110:22-139.178.68.195:56428.service.
Mar 17 18:40:00.053644 sshd[3685]: Accepted publickey for core from 139.178.68.195 port 56428 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:40:00.054742 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:00.057674 systemd[1]: Started session-21.scope.
Mar 17 18:40:00.058125 systemd-logind[1244]: New session 21 of user core.
Mar 17 18:40:00.146292 sshd[3685]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:00.148136 systemd[1]: sshd@18-139.178.70.110:22-139.178.68.195:56428.service: Deactivated successfully.
Mar 17 18:40:00.148625 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:40:00.149349 systemd-logind[1244]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:40:00.149839 systemd-logind[1244]: Removed session 21.
Mar 17 18:40:05.150104 systemd[1]: Started sshd@19-139.178.70.110:22-139.178.68.195:56444.service.
Mar 17 18:40:05.177982 sshd[3698]: Accepted publickey for core from 139.178.68.195 port 56444 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:40:05.179011 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:05.182809 systemd[1]: Started session-22.scope.
Mar 17 18:40:05.183657 systemd-logind[1244]: New session 22 of user core.
Mar 17 18:40:05.273014 sshd[3698]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:05.275017 systemd[1]: sshd@19-139.178.70.110:22-139.178.68.195:56444.service: Deactivated successfully.
Mar 17 18:40:05.275503 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:40:05.276171 systemd-logind[1244]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:40:05.276702 systemd-logind[1244]: Removed session 22.
Mar 17 18:40:10.276942 systemd[1]: Started sshd@20-139.178.70.110:22-139.178.68.195:36510.service.
Mar 17 18:40:10.305358 sshd[3712]: Accepted publickey for core from 139.178.68.195 port 36510 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:40:10.306293 sshd[3712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:10.309591 systemd[1]: Started session-23.scope.
Mar 17 18:40:10.309894 systemd-logind[1244]: New session 23 of user core.
Mar 17 18:40:10.410872 sshd[3712]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:10.413622 systemd[1]: Started sshd@21-139.178.70.110:22-139.178.68.195:36512.service.
Mar 17 18:40:10.416360 systemd-logind[1244]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:40:10.416548 systemd[1]: sshd@20-139.178.70.110:22-139.178.68.195:36510.service: Deactivated successfully.
Mar 17 18:40:10.416966 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:40:10.417700 systemd-logind[1244]: Removed session 23.
Mar 17 18:40:10.444226 sshd[3723]: Accepted publickey for core from 139.178.68.195 port 36512 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:40:10.445066 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:10.447967 systemd[1]: Started session-24.scope.
Mar 17 18:40:10.448340 systemd-logind[1244]: New session 24 of user core.
Mar 17 18:40:12.192505 env[1274]: time="2025-03-17T18:40:12.192244436Z" level=info msg="StopContainer for \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\" with timeout 30 (s)"
Mar 17 18:40:12.196000 env[1274]: time="2025-03-17T18:40:12.192528496Z" level=info msg="Stop container \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\" with signal terminated"
Mar 17 18:40:12.213696 systemd[1]: run-containerd-runc-k8s.io-64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b-runc.V50IkH.mount: Deactivated successfully.
Mar 17 18:40:12.219811 systemd[1]: cri-containerd-a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c.scope: Deactivated successfully.
Mar 17 18:40:12.239494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c-rootfs.mount: Deactivated successfully.
Mar 17 18:40:12.244355 env[1274]: time="2025-03-17T18:40:12.244257952Z" level=info msg="shim disconnected" id=a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c
Mar 17 18:40:12.244494 env[1274]: time="2025-03-17T18:40:12.244356123Z" level=warning msg="cleaning up after shim disconnected" id=a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c namespace=k8s.io
Mar 17 18:40:12.244494 env[1274]: time="2025-03-17T18:40:12.244370008Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:12.249920 env[1274]: time="2025-03-17T18:40:12.249890102Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3766 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:12.252129 env[1274]: time="2025-03-17T18:40:12.252101580Z" level=info msg="StopContainer for \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\" returns successfully"
Mar 17 18:40:12.252722 env[1274]: time="2025-03-17T18:40:12.252705791Z" level=info msg="StopPodSandbox for \"3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2\""
Mar 17 18:40:12.252828 env[1274]: time="2025-03-17T18:40:12.252813448Z" level=info msg="Container to stop \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:12.254460 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2-shm.mount: Deactivated successfully.
Mar 17 18:40:12.263129 systemd[1]: cri-containerd-3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2.scope: Deactivated successfully.
Mar 17 18:40:12.266874 env[1274]: time="2025-03-17T18:40:12.266780466Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:40:12.274018 env[1274]: time="2025-03-17T18:40:12.273991408Z" level=info msg="StopContainer for \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\" with timeout 2 (s)"
Mar 17 18:40:12.274191 env[1274]: time="2025-03-17T18:40:12.274173258Z" level=info msg="Stop container \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\" with signal terminated"
Mar 17 18:40:12.313587 systemd-networkd[1058]: lxc_health: Link DOWN
Mar 17 18:40:12.313592 systemd-networkd[1058]: lxc_health: Lost carrier
Mar 17 18:40:12.356596 env[1274]: time="2025-03-17T18:40:12.356551050Z" level=info msg="shim disconnected" id=3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2
Mar 17 18:40:12.356752 env[1274]: time="2025-03-17T18:40:12.356739338Z" level=warning msg="cleaning up after shim disconnected" id=3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2 namespace=k8s.io
Mar 17 18:40:12.363627 env[1274]: time="2025-03-17T18:40:12.357012211Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:12.363627 env[1274]: time="2025-03-17T18:40:12.362313166Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3807 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:12.368391 env[1274]: time="2025-03-17T18:40:12.368362690Z" level=info msg="TearDown network for sandbox \"3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2\" successfully"
Mar 17 18:40:12.368391 env[1274]: time="2025-03-17T18:40:12.368386737Z" level=info msg="StopPodSandbox for \"3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2\" returns successfully"
Mar 17 18:40:12.369441 systemd[1]: cri-containerd-64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b.scope: Deactivated successfully.
Mar 17 18:40:12.369622 systemd[1]: cri-containerd-64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b.scope: Consumed 4.495s CPU time.
Mar 17 18:40:12.406155 env[1274]: time="2025-03-17T18:40:12.406119960Z" level=info msg="shim disconnected" id=64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b
Mar 17 18:40:12.406155 env[1274]: time="2025-03-17T18:40:12.406150961Z" level=warning msg="cleaning up after shim disconnected" id=64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b namespace=k8s.io
Mar 17 18:40:12.406155 env[1274]: time="2025-03-17T18:40:12.406160307Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:12.411291 env[1274]: time="2025-03-17T18:40:12.411255018Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3831 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:12.415219 env[1274]: time="2025-03-17T18:40:12.415185568Z" level=info msg="StopContainer for \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\" returns successfully"
Mar 17 18:40:12.415578 env[1274]: time="2025-03-17T18:40:12.415560626Z" level=info msg="StopPodSandbox for \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\""
Mar 17 18:40:12.418389 env[1274]: time="2025-03-17T18:40:12.415618389Z" level=info msg="Container to stop \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:12.418389 env[1274]: time="2025-03-17T18:40:12.415629514Z" level=info msg="Container to stop \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:12.418389 env[1274]: time="2025-03-17T18:40:12.415636319Z" level=info msg="Container to stop \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:12.418389 env[1274]: time="2025-03-17T18:40:12.415642054Z" level=info msg="Container to stop \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:12.418389 env[1274]: time="2025-03-17T18:40:12.415649251Z" level=info msg="Container to stop \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:12.419291 systemd[1]: cri-containerd-92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a.scope: Deactivated successfully.
Mar 17 18:40:12.459210 env[1274]: time="2025-03-17T18:40:12.458524216Z" level=info msg="shim disconnected" id=92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a
Mar 17 18:40:12.459210 env[1274]: time="2025-03-17T18:40:12.458565650Z" level=warning msg="cleaning up after shim disconnected" id=92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a namespace=k8s.io
Mar 17 18:40:12.459210 env[1274]: time="2025-03-17T18:40:12.458575621Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:12.465807 env[1274]: time="2025-03-17T18:40:12.465766655Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3862 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:12.468277 env[1274]: time="2025-03-17T18:40:12.468247464Z" level=info msg="TearDown network for sandbox \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" successfully"
Mar 17 18:40:12.468335 env[1274]: time="2025-03-17T18:40:12.468273963Z" level=info msg="StopPodSandbox for \"92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a\" returns successfully"
Mar 17 18:40:12.486373 kubelet[2158]: I0317 18:40:12.486334    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xthw\" (UniqueName: \"kubernetes.io/projected/069f95e3-69e0-4620-95fd-4b18629af9c3-kube-api-access-5xthw\") pod \"069f95e3-69e0-4620-95fd-4b18629af9c3\" (UID: \"069f95e3-69e0-4620-95fd-4b18629af9c3\") "
Mar 17 18:40:12.486688 kubelet[2158]: I0317 18:40:12.486677    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/069f95e3-69e0-4620-95fd-4b18629af9c3-cilium-config-path\") pod \"069f95e3-69e0-4620-95fd-4b18629af9c3\" (UID: \"069f95e3-69e0-4620-95fd-4b18629af9c3\") "
Mar 17 18:40:12.516622 kubelet[2158]: I0317 18:40:12.512693    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/069f95e3-69e0-4620-95fd-4b18629af9c3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "069f95e3-69e0-4620-95fd-4b18629af9c3" (UID: "069f95e3-69e0-4620-95fd-4b18629af9c3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:40:12.524909 kubelet[2158]: I0317 18:40:12.524869    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069f95e3-69e0-4620-95fd-4b18629af9c3-kube-api-access-5xthw" (OuterVolumeSpecName: "kube-api-access-5xthw") pod "069f95e3-69e0-4620-95fd-4b18629af9c3" (UID: "069f95e3-69e0-4620-95fd-4b18629af9c3"). InnerVolumeSpecName "kube-api-access-5xthw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:12.587820 kubelet[2158]: I0317 18:40:12.587795    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-net\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.588011 kubelet[2158]: I0317 18:40:12.588000    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-etc-cni-netd\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592081 kubelet[2158]: I0317 18:40:12.588071    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-kernel\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592081 kubelet[2158]: I0317 18:40:12.588102    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hubble-tls\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592081 kubelet[2158]: I0317 18:40:12.588115    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpcpl\" (UniqueName: \"kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-kube-api-access-lpcpl\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592081 kubelet[2158]: I0317 18:40:12.588127    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-clustermesh-secrets\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592081 kubelet[2158]: I0317 18:40:12.588135    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cni-path\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592081 kubelet[2158]: I0317 18:40:12.588143    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hostproc\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592245 kubelet[2158]: I0317 18:40:12.588150    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-cgroup\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592245 kubelet[2158]: I0317 18:40:12.588158    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-run\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592245 kubelet[2158]: I0317 18:40:12.588167    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-xtables-lock\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592245 kubelet[2158]: I0317 18:40:12.588174    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-bpf-maps\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592245 kubelet[2158]: I0317 18:40:12.588184    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-config-path\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.592245 kubelet[2158]: I0317 18:40:12.588191    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-lib-modules\") pod \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\" (UID: \"0d87d5d5-4269-4f0a-90f8-9a245a822d8e\") "
Mar 17 18:40:12.593409 kubelet[2158]: I0317 18:40:12.588227    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/069f95e3-69e0-4620-95fd-4b18629af9c3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.593409 kubelet[2158]: I0317 18:40:12.588234    2158 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5xthw\" (UniqueName: \"kubernetes.io/projected/069f95e3-69e0-4620-95fd-4b18629af9c3-kube-api-access-5xthw\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.593409 kubelet[2158]: I0317 18:40:12.590073    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593409 kubelet[2158]: I0317 18:40:12.590114    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cni-path" (OuterVolumeSpecName: "cni-path") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593409 kubelet[2158]: I0317 18:40:12.590120    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593506 kubelet[2158]: I0317 18:40:12.590127    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593506 kubelet[2158]: I0317 18:40:12.590133    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593506 kubelet[2158]: I0317 18:40:12.590150    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hostproc" (OuterVolumeSpecName: "hostproc") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593506 kubelet[2158]: I0317 18:40:12.590159    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593506 kubelet[2158]: I0317 18:40:12.590170    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593666 kubelet[2158]: I0317 18:40:12.590183    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.593666 kubelet[2158]: I0317 18:40:12.590192    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:12.595991 kubelet[2158]: I0317 18:40:12.595969    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:40:12.600244 kubelet[2158]: I0317 18:40:12.600218    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:12.601968 kubelet[2158]: I0317 18:40:12.601944    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-kube-api-access-lpcpl" (OuterVolumeSpecName: "kube-api-access-lpcpl") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "kube-api-access-lpcpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:12.606028 kubelet[2158]: I0317 18:40:12.605990    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0d87d5d5-4269-4f0a-90f8-9a245a822d8e" (UID: "0d87d5d5-4269-4f0a-90f8-9a245a822d8e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:40:12.689203 kubelet[2158]: I0317 18:40:12.689177    2158 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689337 kubelet[2158]: I0317 18:40:12.689329    2158 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689386 kubelet[2158]: I0317 18:40:12.689378    2158 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689432 kubelet[2158]: I0317 18:40:12.689425    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689478 kubelet[2158]: I0317 18:40:12.689471    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689523 kubelet[2158]: I0317 18:40:12.689516    2158 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689569 kubelet[2158]: I0317 18:40:12.689561    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689616 kubelet[2158]: I0317 18:40:12.689608    2158 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689660 kubelet[2158]: I0317 18:40:12.689653    2158 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689705 kubelet[2158]: I0317 18:40:12.689698    2158 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689751 kubelet[2158]: I0317 18:40:12.689743    2158 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689798 kubelet[2158]: I0317 18:40:12.689790    2158 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689843 kubelet[2158]: I0317 18:40:12.689836    2158 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.689890 kubelet[2158]: I0317 18:40:12.689882    2158 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lpcpl\" (UniqueName: \"kubernetes.io/projected/0d87d5d5-4269-4f0a-90f8-9a245a822d8e-kube-api-access-lpcpl\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:12.987501 systemd[1]: Removed slice kubepods-burstable-pod0d87d5d5_4269_4f0a_90f8_9a245a822d8e.slice.
Mar 17 18:40:12.987585 systemd[1]: kubepods-burstable-pod0d87d5d5_4269_4f0a_90f8_9a245a822d8e.slice: Consumed 4.564s CPU time.
Mar 17 18:40:12.989025 systemd[1]: Removed slice kubepods-besteffort-pod069f95e3_69e0_4620_95fd_4b18629af9c3.slice.
Mar 17 18:40:13.208540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b-rootfs.mount: Deactivated successfully.
Mar 17 18:40:13.208599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d6f3da3d43f52aa3830cdebe93827aa85f9546a4136bc52c789895697043be2-rootfs.mount: Deactivated successfully.
Mar 17 18:40:13.208637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a-rootfs.mount: Deactivated successfully.
Mar 17 18:40:13.208673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92e863a20a4dc5edad7d939348a07a571d6c9dfdaede567be6b29c811ddba22a-shm.mount: Deactivated successfully.
Mar 17 18:40:13.208710 systemd[1]: var-lib-kubelet-pods-069f95e3\x2d69e0\x2d4620\x2d95fd\x2d4b18629af9c3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xthw.mount: Deactivated successfully.
Mar 17 18:40:13.208750 systemd[1]: var-lib-kubelet-pods-0d87d5d5\x2d4269\x2d4f0a\x2d90f8\x2d9a245a822d8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlpcpl.mount: Deactivated successfully.
Mar 17 18:40:13.208784 systemd[1]: var-lib-kubelet-pods-0d87d5d5\x2d4269\x2d4f0a\x2d90f8\x2d9a245a822d8e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:40:13.208817 systemd[1]: var-lib-kubelet-pods-0d87d5d5\x2d4269\x2d4f0a\x2d90f8\x2d9a245a822d8e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:40:13.301788 kubelet[2158]: I0317 18:40:13.301709    2158 scope.go:117] "RemoveContainer" containerID="a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c"
Mar 17 18:40:13.307774 env[1274]: time="2025-03-17T18:40:13.307565115Z" level=info msg="RemoveContainer for \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\""
Mar 17 18:40:13.308827 env[1274]: time="2025-03-17T18:40:13.308789395Z" level=info msg="RemoveContainer for \"a82c064c81ec74435ccbcc1ea6c6a89a98c41d699ae3c0bdcc65fe559c04b68c\" returns successfully"
Mar 17 18:40:13.308980 kubelet[2158]: I0317 18:40:13.308960    2158 scope.go:117] "RemoveContainer" containerID="64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b"
Mar 17 18:40:13.309777 env[1274]: time="2025-03-17T18:40:13.309755249Z" level=info msg="RemoveContainer for \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\""
Mar 17 18:40:13.311931 env[1274]: time="2025-03-17T18:40:13.311904578Z" level=info msg="RemoveContainer for \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\" returns successfully"
Mar 17 18:40:13.314173 kubelet[2158]: I0317 18:40:13.314155    2158 scope.go:117] "RemoveContainer" containerID="860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8"
Mar 17 18:40:13.319928 env[1274]: time="2025-03-17T18:40:13.319731714Z" level=info msg="RemoveContainer for \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\""
Mar 17 18:40:13.321340 env[1274]: time="2025-03-17T18:40:13.321288644Z" level=info msg="RemoveContainer for \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\" returns successfully"
Mar 17 18:40:13.322101 kubelet[2158]: I0317 18:40:13.322078    2158 scope.go:117] "RemoveContainer" containerID="43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8"
Mar 17 18:40:13.323891 env[1274]: time="2025-03-17T18:40:13.323865209Z" level=info msg="RemoveContainer for \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\""
Mar 17 18:40:13.325281 env[1274]: time="2025-03-17T18:40:13.325254743Z" level=info msg="RemoveContainer for \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\" returns successfully"
Mar 17 18:40:13.326498 kubelet[2158]: I0317 18:40:13.326481    2158 scope.go:117] "RemoveContainer" containerID="08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b"
Mar 17 18:40:13.328556 env[1274]: time="2025-03-17T18:40:13.328524338Z" level=info msg="RemoveContainer for \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\""
Mar 17 18:40:13.330907 env[1274]: time="2025-03-17T18:40:13.330874905Z" level=info msg="RemoveContainer for \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\" returns successfully"
Mar 17 18:40:13.331057 kubelet[2158]: I0317 18:40:13.331045    2158 scope.go:117] "RemoveContainer" containerID="a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031"
Mar 17 18:40:13.332019 env[1274]: time="2025-03-17T18:40:13.331997047Z" level=info msg="RemoveContainer for \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\""
Mar 17 18:40:13.333165 env[1274]: time="2025-03-17T18:40:13.333141808Z" level=info msg="RemoveContainer for \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\" returns successfully"
Mar 17 18:40:13.333266 kubelet[2158]: I0317 18:40:13.333248    2158 scope.go:117] "RemoveContainer" containerID="64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b"
Mar 17 18:40:13.333449 env[1274]: time="2025-03-17T18:40:13.333392502Z" level=error msg="ContainerStatus for \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\": not found"
Mar 17 18:40:13.334415 kubelet[2158]: E0317 18:40:13.334391    2158 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\": not found" containerID="64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b"
Mar 17 18:40:13.334743 kubelet[2158]: I0317 18:40:13.334423    2158 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b"} err="failed to get container status \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\": rpc error: code = NotFound desc = an error occurred when try to find container \"64e0853eaf019fbb8604798285e15ebb510022dd9c630f3d7ec18f0f18736f5b\": not found"
Mar 17 18:40:13.334743 kubelet[2158]: I0317 18:40:13.334475    2158 scope.go:117] "RemoveContainer" containerID="860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8"
Mar 17 18:40:13.334823 env[1274]: time="2025-03-17T18:40:13.334603646Z" level=error msg="ContainerStatus for \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\": not found"
Mar 17 18:40:13.334865 kubelet[2158]: E0317 18:40:13.334696    2158 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\": not found" containerID="860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8"
Mar 17 18:40:13.334918 kubelet[2158]: I0317 18:40:13.334906    2158 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8"} err="failed to get container status \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\": rpc error: code = NotFound desc = an error occurred when try to find container \"860a3dcd81eebb5a4e08551cadd22c1fac50b7dc644425cf929809f6e1bf3ca8\": not found"
Mar 17 18:40:13.334979 kubelet[2158]: I0317 18:40:13.334970    2158 scope.go:117] "RemoveContainer" containerID="43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8"
Mar 17 18:40:13.335172 env[1274]: time="2025-03-17T18:40:13.335138765Z" level=error msg="ContainerStatus for \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\": not found"
Mar 17 18:40:13.335278 kubelet[2158]: E0317 18:40:13.335252    2158 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\": not found" containerID="43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8"
Mar 17 18:40:13.335330 kubelet[2158]: I0317 18:40:13.335318    2158 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8"} err="failed to get container status \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"43b2bdc5d6863a3bec552e9e2c94ba95e1f5c223791d4ac81a3f392f349fc8a8\": not found"
Mar 17 18:40:13.335424 kubelet[2158]: I0317 18:40:13.335407    2158 scope.go:117] "RemoveContainer" containerID="08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b"
Mar 17 18:40:13.335576 env[1274]: time="2025-03-17T18:40:13.335549349Z" level=error msg="ContainerStatus for \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\": not found"
Mar 17 18:40:13.335657 kubelet[2158]: E0317 18:40:13.335647    2158 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\": not found" containerID="08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b"
Mar 17 18:40:13.335734 kubelet[2158]: I0317 18:40:13.335719    2158 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b"} err="failed to get container status \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\": rpc error: code = NotFound desc = an error occurred when try to find container \"08827f432f1057da555e6a34cc1793b76ae2811efd45e4dd8db81dd8d75bd33b\": not found"
Mar 17 18:40:13.335796 kubelet[2158]: I0317 18:40:13.335787    2158 scope.go:117] "RemoveContainer" containerID="a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031"
Mar 17 18:40:13.335968 env[1274]: time="2025-03-17T18:40:13.335932562Z" level=error msg="ContainerStatus for \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\": not found"
Mar 17 18:40:13.336072 kubelet[2158]: E0317 18:40:13.336058    2158 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\": not found" containerID="a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031"
Mar 17 18:40:13.336158 kubelet[2158]: I0317 18:40:13.336143    2158 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031"} err="failed to get container status \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6218de5b6dee25d9568a03bd40505320f03762746b80ae564bef72487449031\": not found"
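The cleanup pass above repeatedly asks containerd for the status of container IDs that were already removed, and every request comes back NotFound; kubelet then logs the redundant DeleteContainer errors and moves on. A minimal sketch for tallying which IDs were reported as not found, assuming the journal excerpt was saved to a hypothetical node.log:

    # Minimal sketch: tally the container IDs that containerd reported as
    # NotFound during the RemoveContainer cleanup pass above.
    # Assumes the journal excerpt was saved to "node.log" (hypothetical path).
    import re
    from collections import Counter

    not_found = re.compile(r'failed to get container status \\"([0-9a-f]{64})\\"')

    counts = Counter()
    with open("node.log") as fh:
        for line in fh:
            m = not_found.search(line)
            if m:
                counts[m.group(1)] += 1

    for container_id, n in counts.most_common():
        print(f"{container_id}  not-found reports: {n}")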
Mar 17 18:40:14.063981 sshd[3723]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:14.067178 systemd[1]: Started sshd@22-139.178.70.110:22-139.178.68.195:36514.service.
Mar 17 18:40:14.068194 systemd[1]: sshd@21-139.178.70.110:22-139.178.68.195:36512.service: Deactivated successfully.
Mar 17 18:40:14.068836 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:40:14.069659 systemd-logind[1244]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:40:14.070742 systemd-logind[1244]: Removed session 24.
Mar 17 18:40:14.294253 sshd[3879]: Accepted publickey for core from 139.178.68.195 port 36514 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:40:14.295250 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:14.307977 systemd-logind[1244]: New session 25 of user core.
Mar 17 18:40:14.308502 systemd[1]: Started session-25.scope.
Mar 17 18:40:14.799012 sshd[3879]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:14.802532 systemd[1]: Started sshd@23-139.178.70.110:22-139.178.68.195:36516.service.
Mar 17 18:40:14.806510 systemd[1]: sshd@22-139.178.70.110:22-139.178.68.195:36514.service: Deactivated successfully.
Mar 17 18:40:14.807204 systemd[1]: session-25.scope: Deactivated successfully.
Mar 17 18:40:14.807663 systemd-logind[1244]: Session 25 logged out. Waiting for processes to exit.
Mar 17 18:40:14.808377 systemd-logind[1244]: Removed session 25.
Mar 17 18:40:14.827666 kubelet[2158]: I0317 18:40:14.827598    2158 topology_manager.go:215] "Topology Admit Handler" podUID="8baf2659-08e8-4676-ad94-39a768f7ab3c" podNamespace="kube-system" podName="cilium-gkvk9"
Mar 17 18:40:14.833874 kubelet[2158]: E0317 18:40:14.833579    2158 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" containerName="mount-cgroup"
Mar 17 18:40:14.833874 kubelet[2158]: E0317 18:40:14.833612    2158 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" containerName="apply-sysctl-overwrites"
Mar 17 18:40:14.833874 kubelet[2158]: E0317 18:40:14.833619    2158 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="069f95e3-69e0-4620-95fd-4b18629af9c3" containerName="cilium-operator"
Mar 17 18:40:14.833874 kubelet[2158]: E0317 18:40:14.833622    2158 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" containerName="cilium-agent"
Mar 17 18:40:14.833874 kubelet[2158]: E0317 18:40:14.833628    2158 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" containerName="mount-bpf-fs"
Mar 17 18:40:14.833874 kubelet[2158]: E0317 18:40:14.833632    2158 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" containerName="clean-cilium-state"
Mar 17 18:40:14.833874 kubelet[2158]: I0317 18:40:14.833692    2158 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" containerName="cilium-agent"
Mar 17 18:40:14.833874 kubelet[2158]: I0317 18:40:14.833700    2158 memory_manager.go:354] "RemoveStaleState removing state" podUID="069f95e3-69e0-4620-95fd-4b18629af9c3" containerName="cilium-operator"
Mar 17 18:40:14.840965 sshd[3889]: Accepted publickey for core from 139.178.68.195 port 36516 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:40:14.841830 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:14.845797 systemd-logind[1244]: New session 26 of user core.
Mar 17 18:40:14.846365 systemd[1]: Started session-26.scope.
Mar 17 18:40:14.890389 systemd[1]: Created slice kubepods-burstable-pod8baf2659_08e8_4676_ad94_39a768f7ab3c.slice.
Mar 17 18:40:14.917397 kubelet[2158]: I0317 18:40:14.917375    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cni-path\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917503 kubelet[2158]: I0317 18:40:14.917491    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-clustermesh-secrets\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917566 kubelet[2158]: I0317 18:40:14.917556    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-kernel\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917634 kubelet[2158]: I0317 18:40:14.917625    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6wd6\" (UniqueName: \"kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-kube-api-access-c6wd6\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917689 kubelet[2158]: I0317 18:40:14.917680    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-lib-modules\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917742 kubelet[2158]: I0317 18:40:14.917733    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-hostproc\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917797 kubelet[2158]: I0317 18:40:14.917788    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-config-path\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917858 kubelet[2158]: I0317 18:40:14.917849    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-bpf-maps\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.917914 kubelet[2158]: I0317 18:40:14.917905    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-run\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.918013 kubelet[2158]: I0317 18:40:14.917961    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-cgroup\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.918079 kubelet[2158]: I0317 18:40:14.918067    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-etc-cni-netd\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.918192 kubelet[2158]: I0317 18:40:14.918181    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-xtables-lock\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.918245 kubelet[2158]: I0317 18:40:14.918236    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-net\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.918305 kubelet[2158]: I0317 18:40:14.918293    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-hubble-tls\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.918360 kubelet[2158]: I0317 18:40:14.918350    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-ipsec-secrets\") pod \"cilium-gkvk9\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") " pod="kube-system/cilium-gkvk9"
Mar 17 18:40:14.966510 kubelet[2158]: I0317 18:40:14.966475    2158 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069f95e3-69e0-4620-95fd-4b18629af9c3" path="/var/lib/kubelet/pods/069f95e3-69e0-4620-95fd-4b18629af9c3/volumes"
Mar 17 18:40:14.968064 kubelet[2158]: I0317 18:40:14.968049    2158 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d87d5d5-4269-4f0a-90f8-9a245a822d8e" path="/var/lib/kubelet/pods/0d87d5d5-4269-4f0a-90f8-9a245a822d8e/volumes"
Mar 17 18:40:15.094068 sshd[3889]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:15.096023 systemd[1]: Started sshd@24-139.178.70.110:22-139.178.68.195:36528.service.
Mar 17 18:40:15.096415 systemd[1]: sshd@23-139.178.70.110:22-139.178.68.195:36516.service: Deactivated successfully.
Mar 17 18:40:15.098526 systemd[1]: session-26.scope: Deactivated successfully.
Mar 17 18:40:15.099300 systemd-logind[1244]: Session 26 logged out. Waiting for processes to exit.
Mar 17 18:40:15.100113 systemd-logind[1244]: Removed session 26.
Mar 17 18:40:15.108219 env[1274]: time="2025-03-17T18:40:15.107852626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gkvk9,Uid:8baf2659-08e8-4676-ad94-39a768f7ab3c,Namespace:kube-system,Attempt:0,}"
Mar 17 18:40:15.132781 sshd[3905]: Accepted publickey for core from 139.178.68.195 port 36528 ssh2: RSA SHA256:4oZ1KYBDSs5lS/zKBefF9vskKlH/NySTYiZrtgd5CeA
Mar 17 18:40:15.133997 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:40:15.134681 env[1274]: time="2025-03-17T18:40:15.134632729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:40:15.134796 env[1274]: time="2025-03-17T18:40:15.134772967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:40:15.134867 env[1274]: time="2025-03-17T18:40:15.134852491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:40:15.135044 env[1274]: time="2025-03-17T18:40:15.135026534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2 pid=3916 runtime=io.containerd.runc.v2
Mar 17 18:40:15.143383 systemd[1]: Started session-27.scope.
Mar 17 18:40:15.144478 systemd-logind[1244]: New session 27 of user core.
Mar 17 18:40:15.157149 systemd[1]: Started cri-containerd-ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2.scope.
Mar 17 18:40:15.180256 env[1274]: time="2025-03-17T18:40:15.180227022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gkvk9,Uid:8baf2659-08e8-4676-ad94-39a768f7ab3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2\""
Mar 17 18:40:15.183118 env[1274]: time="2025-03-17T18:40:15.183093991Z" level=info msg="CreateContainer within sandbox \"ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:40:15.229226 env[1274]: time="2025-03-17T18:40:15.228849029Z" level=info msg="CreateContainer within sandbox \"ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\""
Mar 17 18:40:15.229523 env[1274]: time="2025-03-17T18:40:15.229504164Z" level=info msg="StartContainer for \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\""
Mar 17 18:40:15.256434 systemd[1]: Started cri-containerd-7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4.scope.
Mar 17 18:40:15.266802 systemd[1]: cri-containerd-7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4.scope: Deactivated successfully.
Mar 17 18:40:15.266975 systemd[1]: Stopped cri-containerd-7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4.scope.
Mar 17 18:40:15.322311 env[1274]: time="2025-03-17T18:40:15.322280123Z" level=info msg="shim disconnected" id=7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4
Mar 17 18:40:15.322496 env[1274]: time="2025-03-17T18:40:15.322483803Z" level=warning msg="cleaning up after shim disconnected" id=7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4 namespace=k8s.io
Mar 17 18:40:15.322565 env[1274]: time="2025-03-17T18:40:15.322554887Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:15.327280 env[1274]: time="2025-03-17T18:40:15.327258064Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3984 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:40:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Mar 17 18:40:15.327528 env[1274]: time="2025-03-17T18:40:15.327457250Z" level=error msg="copy shim log" error="read /proc/self/fd/27: file already closed"
Mar 17 18:40:15.328759 env[1274]: time="2025-03-17T18:40:15.327676311Z" level=error msg="Failed to pipe stdout of container \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\"" error="reading from a closed fifo"
Mar 17 18:40:15.328827 env[1274]: time="2025-03-17T18:40:15.328629516Z" level=error msg="Failed to pipe stderr of container \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\"" error="reading from a closed fifo"
Mar 17 18:40:15.332765 env[1274]: time="2025-03-17T18:40:15.332740941Z" level=error msg="StartContainer for \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Mar 17 18:40:15.332984 kubelet[2158]: E0317 18:40:15.332921    2158 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4"
Mar 17 18:40:15.363029 kubelet[2158]: E0317 18:40:15.360004    2158 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Mar 17 18:40:15.363029 kubelet[2158]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Mar 17 18:40:15.363029 kubelet[2158]: rm /hostbin/cilium-mount
Mar 17 18:40:15.363179 kubelet[2158]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6wd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-gkvk9_kube-system(8baf2659-08e8-4676-ad94-39a768f7ab3c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Mar 17 18:40:15.371225 kubelet[2158]: E0317 18:40:15.371191    2158 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-gkvk9" podUID="8baf2659-08e8-4676-ad94-39a768f7ab3c"
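The StartContainer failure above is runc reporting that it could not write /proc/self/attr/keycreate, the SELinux key-creation label; this is commonly associated with a container spec that requests SELinux options (the mount-cgroup init container above carries SELinuxOptions Type:spc_t) which the host kernel or policy rejects, and kubelet surfaces it as a RunContainerError for the pod. A minimal sketch for listing StartContainer failures and their reasons, again assuming the hypothetical node.log:

    # Minimal sketch: list StartContainer failures (such as the keycreate error
    # above) together with the affected container ID.
    # Assumes the journal excerpt was saved to "node.log" (hypothetical path).
    import re

    fail_re = re.compile(r'StartContainer for \\"([0-9a-f]{64})\\" failed" error="([^"]*)"')

    with open("node.log") as fh:
        for line in fh:
            m = fail_re.search(line)
            if m:
                container_id, reason = m.groups()
                print(container_id[:12], "->", reason)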
Mar 17 18:40:16.071240 kubelet[2158]: E0317 18:40:16.071203    2158 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:40:16.308602 env[1274]: time="2025-03-17T18:40:16.307237340Z" level=info msg="StopPodSandbox for \"ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2\""
Mar 17 18:40:16.308602 env[1274]: time="2025-03-17T18:40:16.307281684Z" level=info msg="Container to stop \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:40:16.308492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2-shm.mount: Deactivated successfully.
Mar 17 18:40:16.313803 systemd[1]: cri-containerd-ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2.scope: Deactivated successfully.
Mar 17 18:40:16.330425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2-rootfs.mount: Deactivated successfully.
Mar 17 18:40:16.334386 env[1274]: time="2025-03-17T18:40:16.334350132Z" level=info msg="shim disconnected" id=ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2
Mar 17 18:40:16.334708 env[1274]: time="2025-03-17T18:40:16.334694257Z" level=warning msg="cleaning up after shim disconnected" id=ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2 namespace=k8s.io
Mar 17 18:40:16.334773 env[1274]: time="2025-03-17T18:40:16.334762767Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:16.340053 env[1274]: time="2025-03-17T18:40:16.340026619Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4015 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:16.340364 env[1274]: time="2025-03-17T18:40:16.340343753Z" level=info msg="TearDown network for sandbox \"ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2\" successfully"
Mar 17 18:40:16.340426 env[1274]: time="2025-03-17T18:40:16.340414274Z" level=info msg="StopPodSandbox for \"ba8818e628917eaad05925f2296ce3e259ce1ee043777f8e59f9feebfe2439f2\" returns successfully"
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429309    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-clustermesh-secrets\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429341    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-hostproc\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429355    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-run\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429367    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-kernel\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429380    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6wd6\" (UniqueName: \"kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-kube-api-access-c6wd6\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429389    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-bpf-maps\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429398    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-net\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429407    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-ipsec-secrets\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429415    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cni-path\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429425    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-config-path\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429433    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-lib-modules\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429441    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-etc-cni-netd\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429449    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-hubble-tls\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429460    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-cgroup\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429468    2158 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-xtables-lock\") pod \"8baf2659-08e8-4676-ad94-39a768f7ab3c\" (UID: \"8baf2659-08e8-4676-ad94-39a768f7ab3c\") "
Mar 17 18:40:16.430104 kubelet[2158]: I0317 18:40:16.429529    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.430597 kubelet[2158]: I0317 18:40:16.429940    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cni-path" (OuterVolumeSpecName: "cni-path") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.430597 kubelet[2158]: I0317 18:40:16.430109    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-hostproc" (OuterVolumeSpecName: "hostproc") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.430597 kubelet[2158]: I0317 18:40:16.430132    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.430597 kubelet[2158]: I0317 18:40:16.430143    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.430597 kubelet[2158]: I0317 18:40:16.430359    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.430597 kubelet[2158]: I0317 18:40:16.430375    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.433045 systemd[1]: var-lib-kubelet-pods-8baf2659\x2d08e8\x2d4676\x2dad94\x2d39a768f7ab3c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:40:16.434645 systemd[1]: var-lib-kubelet-pods-8baf2659\x2d08e8\x2d4676\x2dad94\x2d39a768f7ab3c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:40:16.435502 kubelet[2158]: I0317 18:40:16.435483    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.435591 kubelet[2158]: I0317 18:40:16.435581    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.435655 kubelet[2158]: I0317 18:40:16.435639    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:40:16.435724 kubelet[2158]: I0317 18:40:16.435709    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:40:16.435771 kubelet[2158]: I0317 18:40:16.435745    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:40:16.437129 kubelet[2158]: I0317 18:40:16.437116    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:40:16.438673 kubelet[2158]: I0317 18:40:16.438651    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-kube-api-access-c6wd6" (OuterVolumeSpecName: "kube-api-access-c6wd6") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "kube-api-access-c6wd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:16.438752 kubelet[2158]: I0317 18:40:16.438735    2158 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8baf2659-08e8-4676-ad94-39a768f7ab3c" (UID: "8baf2659-08e8-4676-ad94-39a768f7ab3c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:40:16.529957 kubelet[2158]: I0317 18:40:16.529933    2158 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530100 kubelet[2158]: I0317 18:40:16.530079    2158 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530156 kubelet[2158]: I0317 18:40:16.530148    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530274 kubelet[2158]: I0317 18:40:16.530266    2158 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530331 kubelet[2158]: I0317 18:40:16.530323    2158 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c6wd6\" (UniqueName: \"kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-kube-api-access-c6wd6\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530381 kubelet[2158]: I0317 18:40:16.530373    2158 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530432 kubelet[2158]: I0317 18:40:16.530424    2158 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530487 kubelet[2158]: I0317 18:40:16.530478    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530537 kubelet[2158]: I0317 18:40:16.530529    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530588 kubelet[2158]: I0317 18:40:16.530581    2158 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530634 kubelet[2158]: I0317 18:40:16.530626    2158 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8baf2659-08e8-4676-ad94-39a768f7ab3c-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530679 kubelet[2158]: I0317 18:40:16.530672    2158 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530729 kubelet[2158]: I0317 18:40:16.530721    2158 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530774 kubelet[2158]: I0317 18:40:16.530767    2158 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 18:40:16.530825 kubelet[2158]: I0317 18:40:16.530818    2158 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8baf2659-08e8-4676-ad94-39a768f7ab3c-xtables-lock\") on node \"localhost\" DevicePath \"\""
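The block above shows kubelet unmounting and detaching every volume of pod 8baf2659-08e8-4676-ad94-39a768f7ab3c (host paths, secrets, the projected service-account token and hubble-tls) before the orphaned pod directory is cleaned up later in the log. A minimal sketch that groups the TearDown entries by pod UID, assuming the hypothetical node.log:

    # Minimal sketch: group "UnmountVolume.TearDown succeeded" entries by pod
    # UID and list the volume names, to confirm a torn-down pod's volumes were
    # all unmounted. Assumes "node.log" (hypothetical path).
    import re
    from collections import defaultdict

    teardown_re = re.compile(
        r'UnmountVolume\.TearDown succeeded for volume "[^"]+/([0-9a-f-]{36})-([a-z0-9-]+)"'
    )

    volumes = defaultdict(set)
    with open("node.log") as fh:
        for line in fh:
            m = teardown_re.search(line)
            if m:
                pod_uid, volume = m.groups()
                volumes[pod_uid].add(volume)

    for pod_uid, names in volumes.items():
        print(pod_uid, "->", ", ".join(sorted(names)))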
Mar 17 18:40:16.967308 systemd[1]: Removed slice kubepods-burstable-pod8baf2659_08e8_4676_ad94_39a768f7ab3c.slice.
Mar 17 18:40:17.023185 systemd[1]: var-lib-kubelet-pods-8baf2659\x2d08e8\x2d4676\x2dad94\x2d39a768f7ab3c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc6wd6.mount: Deactivated successfully.
Mar 17 18:40:17.023251 systemd[1]: var-lib-kubelet-pods-8baf2659\x2d08e8\x2d4676\x2dad94\x2d39a768f7ab3c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:40:17.310878 kubelet[2158]: I0317 18:40:17.310807    2158 scope.go:117] "RemoveContainer" containerID="7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4"
Mar 17 18:40:17.312169 env[1274]: time="2025-03-17T18:40:17.311762728Z" level=info msg="RemoveContainer for \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\""
Mar 17 18:40:17.314068 env[1274]: time="2025-03-17T18:40:17.313966748Z" level=info msg="RemoveContainer for \"7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4\" returns successfully"
Mar 17 18:40:17.359554 kubelet[2158]: I0317 18:40:17.359529    2158 topology_manager.go:215] "Topology Admit Handler" podUID="28c91178-279d-43f6-b7b7-d70c73cb3842" podNamespace="kube-system" podName="cilium-mfvkz"
Mar 17 18:40:17.359701 kubelet[2158]: E0317 18:40:17.359691    2158 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8baf2659-08e8-4676-ad94-39a768f7ab3c" containerName="mount-cgroup"
Mar 17 18:40:17.359767 kubelet[2158]: I0317 18:40:17.359758    2158 memory_manager.go:354] "RemoveStaleState removing state" podUID="8baf2659-08e8-4676-ad94-39a768f7ab3c" containerName="mount-cgroup"
Mar 17 18:40:17.363284 systemd[1]: Created slice kubepods-burstable-pod28c91178_279d_43f6_b7b7_d70c73cb3842.slice.
Mar 17 18:40:17.373030 kubelet[2158]: W0317 18:40:17.373008    2158 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 18:40:17.386868 kubelet[2158]: E0317 18:40:17.386836    2158 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object
Mar 17 18:40:17.435243 kubelet[2158]: I0317 18:40:17.435216    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-cni-path\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435379 kubelet[2158]: I0317 18:40:17.435369    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-etc-cni-netd\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435438 kubelet[2158]: I0317 18:40:17.435428    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28c91178-279d-43f6-b7b7-d70c73cb3842-clustermesh-secrets\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435496 kubelet[2158]: I0317 18:40:17.435486    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-host-proc-sys-net\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435551 kubelet[2158]: I0317 18:40:17.435541    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-cilium-cgroup\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435611 kubelet[2158]: I0317 18:40:17.435600    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vl5l\" (UniqueName: \"kubernetes.io/projected/28c91178-279d-43f6-b7b7-d70c73cb3842-kube-api-access-9vl5l\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435665 kubelet[2158]: I0317 18:40:17.435657    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-bpf-maps\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435719 kubelet[2158]: I0317 18:40:17.435711    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-hostproc\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435772 kubelet[2158]: I0317 18:40:17.435762    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/28c91178-279d-43f6-b7b7-d70c73cb3842-cilium-ipsec-secrets\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435827 kubelet[2158]: I0317 18:40:17.435819    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-cilium-run\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435886 kubelet[2158]: I0317 18:40:17.435877    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28c91178-279d-43f6-b7b7-d70c73cb3842-cilium-config-path\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.435948 kubelet[2158]: I0317 18:40:17.435938    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-host-proc-sys-kernel\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.436008 kubelet[2158]: I0317 18:40:17.435999    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28c91178-279d-43f6-b7b7-d70c73cb3842-hubble-tls\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.436068 kubelet[2158]: I0317 18:40:17.436059    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-xtables-lock\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:17.436149 kubelet[2158]: I0317 18:40:17.436139    2158 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28c91178-279d-43f6-b7b7-d70c73cb3842-lib-modules\") pod \"cilium-mfvkz\" (UID: \"28c91178-279d-43f6-b7b7-d70c73cb3842\") " pod="kube-system/cilium-mfvkz"
Mar 17 18:40:18.478846 kubelet[2158]: W0317 18:40:18.478785    2158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8baf2659_08e8_4676_ad94_39a768f7ab3c.slice/cri-containerd-7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4.scope WatchSource:0}: container "7e6b8b1a61b27bcfaba68865c6da886e7ca2e5620bc99e0b5905ae07058ba9e4" in namespace "k8s.io": not found
Mar 17 18:40:18.554987 kubelet[2158]: E0317 18:40:18.554951    2158 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Mar 17 18:40:18.555102 kubelet[2158]: E0317 18:40:18.555041    2158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28c91178-279d-43f6-b7b7-d70c73cb3842-clustermesh-secrets podName:28c91178-279d-43f6-b7b7-d70c73cb3842 nodeName:}" failed. No retries permitted until 2025-03-17 18:40:19.055017487 +0000 UTC m=+148.207831285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/28c91178-279d-43f6-b7b7-d70c73cb3842-clustermesh-secrets") pod "cilium-mfvkz" (UID: "28c91178-279d-43f6-b7b7-d70c73cb3842") : failed to sync secret cache: timed out waiting for the condition
Mar 17 18:40:18.965574 kubelet[2158]: I0317 18:40:18.965549    2158 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8baf2659-08e8-4676-ad94-39a768f7ab3c" path="/var/lib/kubelet/pods/8baf2659-08e8-4676-ad94-39a768f7ab3c/volumes"
Mar 17 18:40:19.165622 env[1274]: time="2025-03-17T18:40:19.165584816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfvkz,Uid:28c91178-279d-43f6-b7b7-d70c73cb3842,Namespace:kube-system,Attempt:0,}"
Mar 17 18:40:19.196844 env[1274]: time="2025-03-17T18:40:19.196791056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:40:19.196844 env[1274]: time="2025-03-17T18:40:19.196828159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:40:19.196998 env[1274]: time="2025-03-17T18:40:19.196964448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:40:19.197152 env[1274]: time="2025-03-17T18:40:19.197127216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113 pid=4042 runtime=io.containerd.runc.v2
Mar 17 18:40:19.211598 systemd[1]: Started cri-containerd-e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113.scope.
Mar 17 18:40:19.225577 env[1274]: time="2025-03-17T18:40:19.225504853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfvkz,Uid:28c91178-279d-43f6-b7b7-d70c73cb3842,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\""
Mar 17 18:40:19.228007 env[1274]: time="2025-03-17T18:40:19.227989381Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:40:19.267433 env[1274]: time="2025-03-17T18:40:19.267404940Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"37ada1401ff25b0437f350374f11c3abb6072438eec8acf62b7831730f45b26f\""
Mar 17 18:40:19.267846 env[1274]: time="2025-03-17T18:40:19.267833296Z" level=info msg="StartContainer for \"37ada1401ff25b0437f350374f11c3abb6072438eec8acf62b7831730f45b26f\""
Mar 17 18:40:19.277375 systemd[1]: Started cri-containerd-37ada1401ff25b0437f350374f11c3abb6072438eec8acf62b7831730f45b26f.scope.
Mar 17 18:40:19.296082 env[1274]: time="2025-03-17T18:40:19.296049929Z" level=info msg="StartContainer for \"37ada1401ff25b0437f350374f11c3abb6072438eec8acf62b7831730f45b26f\" returns successfully"
Mar 17 18:40:19.316613 systemd[1]: cri-containerd-37ada1401ff25b0437f350374f11c3abb6072438eec8acf62b7831730f45b26f.scope: Deactivated successfully.
Mar 17 18:40:19.353188 env[1274]: time="2025-03-17T18:40:19.353150899Z" level=info msg="shim disconnected" id=37ada1401ff25b0437f350374f11c3abb6072438eec8acf62b7831730f45b26f
Mar 17 18:40:19.353362 env[1274]: time="2025-03-17T18:40:19.353349618Z" level=warning msg="cleaning up after shim disconnected" id=37ada1401ff25b0437f350374f11c3abb6072438eec8acf62b7831730f45b26f namespace=k8s.io
Mar 17 18:40:19.353417 env[1274]: time="2025-03-17T18:40:19.353402488Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:19.359618 env[1274]: time="2025-03-17T18:40:19.359591413Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4127 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:20.058701 systemd[1]: run-containerd-runc-k8s.io-e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113-runc.Ykvx3V.mount: Deactivated successfully.
Mar 17 18:40:20.322167 env[1274]: time="2025-03-17T18:40:20.321985054Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:40:20.370993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3800300792.mount: Deactivated successfully.
Mar 17 18:40:20.376306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606248455.mount: Deactivated successfully.
Mar 17 18:40:20.393145 env[1274]: time="2025-03-17T18:40:20.393113499Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ceeb0cd11e8e6966d669f5f692b5fe73e957b70c9dad55da4f67790f941c682b\""
Mar 17 18:40:20.394609 env[1274]: time="2025-03-17T18:40:20.394524338Z" level=info msg="StartContainer for \"ceeb0cd11e8e6966d669f5f692b5fe73e957b70c9dad55da4f67790f941c682b\""
Mar 17 18:40:20.404770 systemd[1]: Started cri-containerd-ceeb0cd11e8e6966d669f5f692b5fe73e957b70c9dad55da4f67790f941c682b.scope.
Mar 17 18:40:20.427580 env[1274]: time="2025-03-17T18:40:20.427555657Z" level=info msg="StartContainer for \"ceeb0cd11e8e6966d669f5f692b5fe73e957b70c9dad55da4f67790f941c682b\" returns successfully"
Mar 17 18:40:20.451254 systemd[1]: cri-containerd-ceeb0cd11e8e6966d669f5f692b5fe73e957b70c9dad55da4f67790f941c682b.scope: Deactivated successfully.
Mar 17 18:40:20.464311 env[1274]: time="2025-03-17T18:40:20.464281422Z" level=info msg="shim disconnected" id=ceeb0cd11e8e6966d669f5f692b5fe73e957b70c9dad55da4f67790f941c682b
Mar 17 18:40:20.464483 env[1274]: time="2025-03-17T18:40:20.464471033Z" level=warning msg="cleaning up after shim disconnected" id=ceeb0cd11e8e6966d669f5f692b5fe73e957b70c9dad55da4f67790f941c682b namespace=k8s.io
Mar 17 18:40:20.464544 env[1274]: time="2025-03-17T18:40:20.464531988Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:20.469268 env[1274]: time="2025-03-17T18:40:20.469250020Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4187 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:21.072693 kubelet[2158]: E0317 18:40:21.072665    2158 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
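The kubelet keeps repeating this warning because no CNI network configuration has been installed yet; the Cilium agent only writes its config once it is running, so the runtime network stays not-ready throughout the init-container phase. A quick way to observe the same condition from the node is to look for config files in the CNI configuration directory; the sketch below assumes the conventional default path `/etc/cni/net.d`, which is configurable and therefore an assumption here.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default CRI/CNI config directory; configurable, so treat as an assumption.
	confDir := "/etc/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		return
	}

	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("CNI config present: %s\n", e.Name())
			found = true
		}
	}
	if !found {
		// Matches the condition the kubelet is reporting above.
		fmt.Println("no CNI network config found; node will stay NotReady")
	}
}
```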
Mar 17 18:40:21.321468 env[1274]: time="2025-03-17T18:40:21.321435582Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:40:21.330419 env[1274]: time="2025-03-17T18:40:21.330360638Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209\""
Mar 17 18:40:21.331051 env[1274]: time="2025-03-17T18:40:21.331038265Z" level=info msg="StartContainer for \"b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209\""
Mar 17 18:40:21.350773 systemd[1]: Started cri-containerd-b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209.scope.
Mar 17 18:40:21.387485 env[1274]: time="2025-03-17T18:40:21.387460332Z" level=info msg="StartContainer for \"b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209\" returns successfully"
Mar 17 18:40:21.430380 systemd[1]: cri-containerd-b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209.scope: Deactivated successfully.
Mar 17 18:40:21.447102 env[1274]: time="2025-03-17T18:40:21.447066189Z" level=info msg="shim disconnected" id=b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209
Mar 17 18:40:21.447277 env[1274]: time="2025-03-17T18:40:21.447266166Z" level=warning msg="cleaning up after shim disconnected" id=b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209 namespace=k8s.io
Mar 17 18:40:21.447327 env[1274]: time="2025-03-17T18:40:21.447316870Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:21.451930 env[1274]: time="2025-03-17T18:40:21.451909694Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4249 runtime=io.containerd.runc.v2\n"
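`mount-bpf-fs` is the init step that makes sure the BPF pseudo-filesystem is mounted at `/sys/fs/bpf`, so that maps and programs pinned by the agent survive restarts. The sketch below shows a rough equivalent of that mount using `golang.org/x/sys/unix`; it illustrates the operation, not Cilium's actual implementation, and the already-mounted check is a simplification.

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const target = "/sys/fs/bpf"

	// Treat the path as already mounted if its device ID differs from the parent's.
	var st, parent unix.Stat_t
	if err := unix.Stat(target, &st); err == nil {
		if err := unix.Stat("/sys/fs", &parent); err == nil && st.Dev != parent.Dev {
			log.Printf("%s already mounted", target)
			return
		}
	}

	if err := os.MkdirAll(target, 0o755); err != nil {
		log.Fatal(err)
	}
	// Mount the BPF pseudo-filesystem so pinned maps/programs outlive the agent process.
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
	log.Printf("mounted bpffs at %s", target)
}
```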
Mar 17 18:40:22.059102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b00cd60ddab923321abce9a067afe13c56541a2fd7109ea4d72dd28cc6756209-rootfs.mount: Deactivated successfully.
Mar 17 18:40:22.331195 env[1274]: time="2025-03-17T18:40:22.327236643Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:40:22.345919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568405568.mount: Deactivated successfully.
Mar 17 18:40:22.352676 env[1274]: time="2025-03-17T18:40:22.352061302Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3463845c3e520d9fd4cdd7c24e0bffc5fb7862ce5f95529d878e892d41631de9\""
Mar 17 18:40:22.352916 env[1274]: time="2025-03-17T18:40:22.352896847Z" level=info msg="StartContainer for \"3463845c3e520d9fd4cdd7c24e0bffc5fb7862ce5f95529d878e892d41631de9\""
Mar 17 18:40:22.367374 systemd[1]: Started cri-containerd-3463845c3e520d9fd4cdd7c24e0bffc5fb7862ce5f95529d878e892d41631de9.scope.
Mar 17 18:40:22.386608 env[1274]: time="2025-03-17T18:40:22.386581723Z" level=info msg="StartContainer for \"3463845c3e520d9fd4cdd7c24e0bffc5fb7862ce5f95529d878e892d41631de9\" returns successfully"
Mar 17 18:40:22.388111 systemd[1]: cri-containerd-3463845c3e520d9fd4cdd7c24e0bffc5fb7862ce5f95529d878e892d41631de9.scope: Deactivated successfully.
Mar 17 18:40:22.400733 env[1274]: time="2025-03-17T18:40:22.400698766Z" level=info msg="shim disconnected" id=3463845c3e520d9fd4cdd7c24e0bffc5fb7862ce5f95529d878e892d41631de9
Mar 17 18:40:22.400733 env[1274]: time="2025-03-17T18:40:22.400732074Z" level=warning msg="cleaning up after shim disconnected" id=3463845c3e520d9fd4cdd7c24e0bffc5fb7862ce5f95529d878e892d41631de9 namespace=k8s.io
Mar 17 18:40:22.400902 env[1274]: time="2025-03-17T18:40:22.400738410Z" level=info msg="cleaning up dead shim"
Mar 17 18:40:22.406226 env[1274]: time="2025-03-17T18:40:22.406194017Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:40:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4305 runtime=io.containerd.runc.v2\n"
Mar 17 18:40:23.327147 env[1274]: time="2025-03-17T18:40:23.327122210Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:40:23.336756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount940688291.mount: Deactivated successfully.
Mar 17 18:40:23.340743 env[1274]: time="2025-03-17T18:40:23.340719276Z" level=info msg="CreateContainer within sandbox \"e5c1bdf285d0c2c7c00e70cd07a5e4bbeeb1b4375323d84a9272004a7fd15113\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de7d957d18f6abc3a67e5aa66cfa019f2386c02ab8eef415769a0221a99699ec\""
Mar 17 18:40:23.341877 env[1274]: time="2025-03-17T18:40:23.341337914Z" level=info msg="StartContainer for \"de7d957d18f6abc3a67e5aa66cfa019f2386c02ab8eef415769a0221a99699ec\""
Mar 17 18:40:23.354675 systemd[1]: Started cri-containerd-de7d957d18f6abc3a67e5aa66cfa019f2386c02ab8eef415769a0221a99699ec.scope.
Mar 17 18:40:23.388017 env[1274]: time="2025-03-17T18:40:23.387987155Z" level=info msg="StartContainer for \"de7d957d18f6abc3a67e5aa66cfa019f2386c02ab8eef415769a0221a99699ec\" returns successfully"
Mar 17 18:40:23.733274 kubelet[2158]: I0317 18:40:23.732670    2158 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:40:23Z","lastTransitionTime":"2025-03-17T18:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
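Here the kubelet flips the node's `Ready` condition to `False` with reason `KubeletNotReady`, mirroring the CNI warning above; the condition flips back once the freshly started `cilium-agent` initializes the network plugin. A hedged client-go sketch for inspecting that condition from outside the node is shown below; the kubeconfig path is an assumption, while the node name `localhost` comes from this log.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig on disk; this path is an assumption.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The node in this log registers itself as "localhost".
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	for _, cond := range node.Status.Conditions {
		if cond.Type == "Ready" {
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```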
Mar 17 18:40:24.404117 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:40:25.683213 systemd[1]: run-containerd-runc-k8s.io-de7d957d18f6abc3a67e5aa66cfa019f2386c02ab8eef415769a0221a99699ec-runc.uGurDy.mount: Deactivated successfully.
Mar 17 18:40:26.879444 systemd-networkd[1058]: lxc_health: Link UP
Mar 17 18:40:26.885113 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:40:26.885355 systemd-networkd[1058]: lxc_health: Gained carrier
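`lxc_health` is the veth interface the Cilium agent creates for its node-to-node health checks; the kernel and systemd-networkd report it coming up and gaining carrier here, and it acquires an IPv6 link-local address a couple of seconds later. The standard-library sketch below checks the same interface state from userspace.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Interface created by the Cilium agent for health checking.
	iface, err := net.InterfaceByName("lxc_health")
	if err != nil {
		fmt.Println("lxc_health not present:", err)
		return
	}

	up := iface.Flags&net.FlagUp != 0
	fmt.Printf("lxc_health: index=%d up=%v\n", iface.Index, up)

	// Once SLAAC completes ("Gained IPv6LL" below), a link-local address appears here.
	if addrs, err := iface.Addrs(); err == nil {
		for _, a := range addrs {
			fmt.Println("  addr:", a)
		}
	}
}
```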
Mar 17 18:40:27.176711 kubelet[2158]: I0317 18:40:27.176621    2158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mfvkz" podStartSLOduration=10.17660874 podStartE2EDuration="10.17660874s" podCreationTimestamp="2025-03-17 18:40:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:40:24.341644796 +0000 UTC m=+153.494458601" watchObservedRunningTime="2025-03-17 18:40:27.17660874 +0000 UTC m=+156.329422547"
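The startup-latency record above is internally consistent: podStartSLOduration is measured from podCreationTimestamp (18:40:17) to watchObservedRunningTime (18:40:27.17660874), i.e. 27.17660874 − 17 = 10.17660874 s. podStartE2EDuration matches it, consistent with no image pull having been needed (the firstStartedPulling/lastFinishedPulling timestamps are zero-valued), so there is no pull time to separate out.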
Mar 17 18:40:27.937761 systemd[1]: run-containerd-runc-k8s.io-de7d957d18f6abc3a67e5aa66cfa019f2386c02ab8eef415769a0221a99699ec-runc.Mtv9nU.mount: Deactivated successfully.
Mar 17 18:40:28.859235 systemd-networkd[1058]: lxc_health: Gained IPv6LL
Mar 17 18:40:30.050538 systemd[1]: run-containerd-runc-k8s.io-de7d957d18f6abc3a67e5aa66cfa019f2386c02ab8eef415769a0221a99699ec-runc.x21e1Y.mount: Deactivated successfully.
Mar 17 18:40:32.222037 sshd[3905]: pam_unix(sshd:session): session closed for user core
Mar 17 18:40:32.239719 systemd[1]: sshd@24-139.178.70.110:22-139.178.68.195:36528.service: Deactivated successfully.
Mar 17 18:40:32.240277 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:40:32.240733 systemd-logind[1244]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:40:32.241559 systemd-logind[1244]: Removed session 27.