Dec 13 02:48:11.646972 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:48:11.646986 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:48:11.646993 kernel: Disabled fast string operations
Dec 13 02:48:11.646997 kernel: BIOS-provided physical RAM map:
Dec 13 02:48:11.647001 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Dec 13 02:48:11.647004 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Dec 13 02:48:11.647010 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Dec 13 02:48:11.647014 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Dec 13 02:48:11.647019 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Dec 13 02:48:11.647023 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Dec 13 02:48:11.647027 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Dec 13 02:48:11.647031 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Dec 13 02:48:11.647035 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Dec 13 02:48:11.647039 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 02:48:11.647045 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Dec 13 02:48:11.647049 kernel: NX (Execute Disable) protection: active
Dec 13 02:48:11.647054 kernel: SMBIOS 2.7 present.
Dec 13 02:48:11.647058 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Dec 13 02:48:11.647063 kernel: vmware: hypercall mode: 0x00
Dec 13 02:48:11.647067 kernel: Hypervisor detected: VMware
Dec 13 02:48:11.647072 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Dec 13 02:48:11.647077 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Dec 13 02:48:11.647081 kernel: vmware: using clock offset of 5027456884 ns
Dec 13 02:48:11.647085 kernel: tsc: Detected 3408.000 MHz processor
Dec 13 02:48:11.647090 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:48:11.647095 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:48:11.647099 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Dec 13 02:48:11.647104 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 02:48:11.647108 kernel: total RAM covered: 3072M
Dec 13 02:48:11.647114 kernel: Found optimal setting for mtrr clean up
Dec 13 02:48:11.647119 kernel:  gran_size: 64K         chunk_size: 64K         num_reg: 2          lose cover RAM: 0G
Dec 13 02:48:11.647123 kernel: Using GB pages for direct mapping
Dec 13 02:48:11.647128 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:48:11.647132 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Dec 13 02:48:11.647137 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL  440BX    06040000 VMW  01324272)
Dec 13 02:48:11.647141 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL  440BX    06040000 PTL  000F4240)
Dec 13 02:48:11.647146 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD  Custom   06040000 MSFT 03000001)
Dec 13 02:48:11.647150 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 02:48:11.647154 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Dec 13 02:48:11.647160 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD  $SBFTBL$ 06040000  LTP 00000001)
Dec 13 02:48:11.647167 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD  ? APIC   06040000  LTP 00000000)
Dec 13 02:48:11.647171 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD  $PCITBL$ 06040000  LTP 00000001)
Dec 13 02:48:11.647176 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG  06040000 VMW  00000001)
Dec 13 02:48:11.647181 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW  00000001)
Dec 13 02:48:11.647187 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW  00000001)
Dec 13 02:48:11.647192 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Dec 13 02:48:11.647197 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Dec 13 02:48:11.647201 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 02:48:11.647206 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Dec 13 02:48:11.647211 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Dec 13 02:48:11.647216 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Dec 13 02:48:11.647221 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Dec 13 02:48:11.647226 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Dec 13 02:48:11.647232 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Dec 13 02:48:11.647237 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Dec 13 02:48:11.647241 kernel: system APIC only can use physical flat
Dec 13 02:48:11.647246 kernel: Setting APIC routing to physical flat.
Dec 13 02:48:11.647251 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:48:11.647256 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 02:48:11.647260 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 02:48:11.647265 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 02:48:11.647270 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 02:48:11.647276 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 02:48:11.647281 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 02:48:11.647285 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 02:48:11.647290 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Dec 13 02:48:11.647295 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Dec 13 02:48:11.647300 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Dec 13 02:48:11.647305 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Dec 13 02:48:11.647309 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Dec 13 02:48:11.647314 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Dec 13 02:48:11.647319 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Dec 13 02:48:11.647324 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Dec 13 02:48:11.647329 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Dec 13 02:48:11.647334 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Dec 13 02:48:11.647339 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Dec 13 02:48:11.647343 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Dec 13 02:48:11.647348 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Dec 13 02:48:11.647353 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Dec 13 02:48:11.647358 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Dec 13 02:48:11.647363 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Dec 13 02:48:11.647367 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Dec 13 02:48:11.647373 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Dec 13 02:48:11.647378 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Dec 13 02:48:11.647382 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Dec 13 02:48:11.647387 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Dec 13 02:48:11.647392 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Dec 13 02:48:11.647397 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Dec 13 02:48:11.647402 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Dec 13 02:48:11.647406 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Dec 13 02:48:11.647411 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Dec 13 02:48:11.647416 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Dec 13 02:48:11.647421 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Dec 13 02:48:11.647426 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Dec 13 02:48:11.647431 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Dec 13 02:48:11.647436 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Dec 13 02:48:11.647440 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Dec 13 02:48:11.647445 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Dec 13 02:48:11.647450 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Dec 13 02:48:11.647454 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Dec 13 02:48:11.647459 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Dec 13 02:48:11.647464 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Dec 13 02:48:11.647470 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Dec 13 02:48:11.647474 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Dec 13 02:48:11.647479 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Dec 13 02:48:11.647484 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Dec 13 02:48:11.647488 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Dec 13 02:48:11.647493 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Dec 13 02:48:11.647498 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Dec 13 02:48:11.647502 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Dec 13 02:48:11.647507 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Dec 13 02:48:11.647512 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Dec 13 02:48:11.647518 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Dec 13 02:48:11.647522 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Dec 13 02:48:11.647527 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Dec 13 02:48:11.647532 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Dec 13 02:48:11.647536 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Dec 13 02:48:11.647541 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Dec 13 02:48:11.647550 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Dec 13 02:48:11.647556 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Dec 13 02:48:11.647561 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Dec 13 02:48:11.647566 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Dec 13 02:48:11.647571 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Dec 13 02:48:11.647577 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Dec 13 02:48:11.647582 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Dec 13 02:48:11.647587 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Dec 13 02:48:11.647592 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Dec 13 02:48:11.647597 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Dec 13 02:48:11.647602 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Dec 13 02:48:11.647607 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Dec 13 02:48:11.647613 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Dec 13 02:48:11.647618 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Dec 13 02:48:11.647623 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Dec 13 02:48:11.647628 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Dec 13 02:48:11.647633 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Dec 13 02:48:11.647638 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Dec 13 02:48:11.647644 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Dec 13 02:48:11.647649 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Dec 13 02:48:11.647654 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Dec 13 02:48:11.647659 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Dec 13 02:48:11.647665 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Dec 13 02:48:11.647670 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Dec 13 02:48:11.647675 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Dec 13 02:48:11.647681 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Dec 13 02:48:11.647686 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Dec 13 02:48:11.647691 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Dec 13 02:48:11.647696 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Dec 13 02:48:11.647703 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Dec 13 02:48:11.647711 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Dec 13 02:48:11.647720 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Dec 13 02:48:11.647727 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Dec 13 02:48:11.647734 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Dec 13 02:48:11.647741 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Dec 13 02:48:11.647748 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Dec 13 02:48:11.647756 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Dec 13 02:48:11.647763 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Dec 13 02:48:11.647768 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Dec 13 02:48:11.647773 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Dec 13 02:48:11.647779 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Dec 13 02:48:11.647785 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Dec 13 02:48:11.647790 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Dec 13 02:48:11.647795 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Dec 13 02:48:11.647800 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Dec 13 02:48:11.647805 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Dec 13 02:48:11.647814 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Dec 13 02:48:11.647819 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Dec 13 02:48:11.647824 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Dec 13 02:48:11.647829 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Dec 13 02:48:11.647834 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Dec 13 02:48:11.647840 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Dec 13 02:48:11.647846 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Dec 13 02:48:11.647851 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Dec 13 02:48:11.647856 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Dec 13 02:48:11.647861 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Dec 13 02:48:11.647866 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Dec 13 02:48:11.647871 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Dec 13 02:48:11.647888 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Dec 13 02:48:11.647898 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Dec 13 02:48:11.647906 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Dec 13 02:48:11.647913 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Dec 13 02:48:11.647918 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Dec 13 02:48:11.647923 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Dec 13 02:48:11.647928 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Dec 13 02:48:11.647934 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Dec 13 02:48:11.647939 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Dec 13 02:48:11.647944 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 02:48:11.647949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 02:48:11.647954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Dec 13 02:48:11.647960 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Dec 13 02:48:11.647966 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Dec 13 02:48:11.647971 kernel: Zone ranges:
Dec 13 02:48:11.647977 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:48:11.647982 kernel:   DMA32    [mem 0x0000000001000000-0x000000007fffffff]
Dec 13 02:48:11.647987 kernel:   Normal   empty
Dec 13 02:48:11.647992 kernel: Movable zone start for each node
Dec 13 02:48:11.647997 kernel: Early memory node ranges
Dec 13 02:48:11.648002 kernel:   node   0: [mem 0x0000000000001000-0x000000000009dfff]
Dec 13 02:48:11.648008 kernel:   node   0: [mem 0x0000000000100000-0x000000007fedffff]
Dec 13 02:48:11.648014 kernel:   node   0: [mem 0x000000007ff00000-0x000000007fffffff]
Dec 13 02:48:11.648019 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Dec 13 02:48:11.648024 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:48:11.648030 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Dec 13 02:48:11.648035 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Dec 13 02:48:11.648040 kernel: ACPI: PM-Timer IO Port: 0x1008
Dec 13 02:48:11.648045 kernel: system APIC only can use physical flat
Dec 13 02:48:11.648050 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Dec 13 02:48:11.648056 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 02:48:11.648062 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 02:48:11.648067 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 02:48:11.648072 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 02:48:11.648077 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 02:48:11.648082 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 02:48:11.648087 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 02:48:11.648092 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 02:48:11.648098 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 02:48:11.648103 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 02:48:11.648108 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 02:48:11.648114 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 02:48:11.648119 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 02:48:11.648124 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 02:48:11.648129 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 02:48:11.648135 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 02:48:11.648142 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Dec 13 02:48:11.648150 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Dec 13 02:48:11.648157 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Dec 13 02:48:11.648164 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Dec 13 02:48:11.648173 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Dec 13 02:48:11.648181 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Dec 13 02:48:11.648186 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Dec 13 02:48:11.648191 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Dec 13 02:48:11.648196 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Dec 13 02:48:11.648201 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Dec 13 02:48:11.648207 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Dec 13 02:48:11.648212 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Dec 13 02:48:11.648217 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Dec 13 02:48:11.648222 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Dec 13 02:48:11.648228 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Dec 13 02:48:11.648234 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Dec 13 02:48:11.648239 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Dec 13 02:48:11.648244 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Dec 13 02:48:11.648249 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Dec 13 02:48:11.648254 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Dec 13 02:48:11.648259 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Dec 13 02:48:11.648264 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Dec 13 02:48:11.648269 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Dec 13 02:48:11.648275 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Dec 13 02:48:11.648280 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Dec 13 02:48:11.648286 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Dec 13 02:48:11.648291 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Dec 13 02:48:11.648296 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Dec 13 02:48:11.648301 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Dec 13 02:48:11.648306 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Dec 13 02:48:11.648311 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Dec 13 02:48:11.648316 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Dec 13 02:48:11.648321 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Dec 13 02:48:11.648327 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Dec 13 02:48:11.648332 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Dec 13 02:48:11.648337 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Dec 13 02:48:11.648343 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Dec 13 02:48:11.648348 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Dec 13 02:48:11.648353 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Dec 13 02:48:11.648358 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Dec 13 02:48:11.648363 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Dec 13 02:48:11.648368 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Dec 13 02:48:11.648374 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Dec 13 02:48:11.648379 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Dec 13 02:48:11.648384 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Dec 13 02:48:11.648389 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Dec 13 02:48:11.648395 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Dec 13 02:48:11.648400 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Dec 13 02:48:11.648405 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Dec 13 02:48:11.648410 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Dec 13 02:48:11.648415 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Dec 13 02:48:11.648420 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Dec 13 02:48:11.648426 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Dec 13 02:48:11.648431 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Dec 13 02:48:11.648437 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Dec 13 02:48:11.648442 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Dec 13 02:48:11.648447 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Dec 13 02:48:11.648452 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Dec 13 02:48:11.648457 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Dec 13 02:48:11.648462 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Dec 13 02:48:11.648467 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Dec 13 02:48:11.648473 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Dec 13 02:48:11.648479 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Dec 13 02:48:11.648484 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Dec 13 02:48:11.648489 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Dec 13 02:48:11.648494 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Dec 13 02:48:11.648499 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Dec 13 02:48:11.648504 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Dec 13 02:48:11.648509 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Dec 13 02:48:11.648514 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Dec 13 02:48:11.648521 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Dec 13 02:48:11.648526 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Dec 13 02:48:11.648531 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Dec 13 02:48:11.648536 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Dec 13 02:48:11.648541 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Dec 13 02:48:11.648546 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Dec 13 02:48:11.648551 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Dec 13 02:48:11.648556 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Dec 13 02:48:11.648561 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Dec 13 02:48:11.648566 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Dec 13 02:48:11.648572 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Dec 13 02:48:11.648577 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Dec 13 02:48:11.648582 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Dec 13 02:48:11.648587 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Dec 13 02:48:11.648592 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Dec 13 02:48:11.648598 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Dec 13 02:48:11.648603 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Dec 13 02:48:11.648608 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Dec 13 02:48:11.648613 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Dec 13 02:48:11.648619 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Dec 13 02:48:11.648624 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Dec 13 02:48:11.648629 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Dec 13 02:48:11.648635 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Dec 13 02:48:11.648643 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Dec 13 02:48:11.648651 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Dec 13 02:48:11.648659 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Dec 13 02:48:11.648665 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Dec 13 02:48:11.648670 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Dec 13 02:48:11.648675 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Dec 13 02:48:11.648682 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Dec 13 02:48:11.648687 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Dec 13 02:48:11.648692 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Dec 13 02:48:11.648697 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Dec 13 02:48:11.648702 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Dec 13 02:48:11.648708 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Dec 13 02:48:11.648713 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Dec 13 02:48:11.648718 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Dec 13 02:48:11.648723 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Dec 13 02:48:11.648729 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Dec 13 02:48:11.648735 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Dec 13 02:48:11.648740 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Dec 13 02:48:11.648745 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Dec 13 02:48:11.648750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Dec 13 02:48:11.648756 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:48:11.648761 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Dec 13 02:48:11.648766 kernel: TSC deadline timer available
Dec 13 02:48:11.648771 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Dec 13 02:48:11.648777 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Dec 13 02:48:11.648783 kernel: Booting paravirtualized kernel on VMware hypervisor
Dec 13 02:48:11.648788 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:48:11.648794 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Dec 13 02:48:11.648802 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 02:48:11.648810 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 02:48:11.648818 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 
Dec 13 02:48:11.648825 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 
Dec 13 02:48:11.648834 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 
Dec 13 02:48:11.648842 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 
Dec 13 02:48:11.648847 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 
Dec 13 02:48:11.648852 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 
Dec 13 02:48:11.648857 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 
Dec 13 02:48:11.648869 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 
Dec 13 02:48:11.648876 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 
Dec 13 02:48:11.650442 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 
Dec 13 02:48:11.650450 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 
Dec 13 02:48:11.650456 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 
Dec 13 02:48:11.650464 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 
Dec 13 02:48:11.650469 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 
Dec 13 02:48:11.650475 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 
Dec 13 02:48:11.650480 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 
Dec 13 02:48:11.650486 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515808
Dec 13 02:48:11.650491 kernel: Policy zone: DMA32
Dec 13 02:48:11.650498 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:48:11.650504 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:48:11.650510 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Dec 13 02:48:11.650516 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Dec 13 02:48:11.650522 kernel: printk: log_buf_len min size: 262144 bytes
Dec 13 02:48:11.650527 kernel: printk: log_buf_len: 1048576 bytes
Dec 13 02:48:11.650533 kernel: printk: early log buf free: 239728(91%)
Dec 13 02:48:11.650538 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:48:11.650545 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:48:11.650551 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:48:11.650556 kernel: Memory: 1940392K/2096628K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 155976K reserved, 0K cma-reserved)
Dec 13 02:48:11.650563 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Dec 13 02:48:11.650569 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:48:11.650574 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:48:11.650581 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:48:11.650587 kernel: rcu:         RCU event tracing is enabled.
Dec 13 02:48:11.650593 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Dec 13 02:48:11.650599 kernel:         Rude variant of Tasks RCU enabled.
Dec 13 02:48:11.650605 kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 02:48:11.650610 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:48:11.650616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Dec 13 02:48:11.650622 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Dec 13 02:48:11.650627 kernel: random: crng init done
Dec 13 02:48:11.650633 kernel: Console: colour VGA+ 80x25
Dec 13 02:48:11.650638 kernel: printk: console [tty0] enabled
Dec 13 02:48:11.650644 kernel: printk: console [ttyS0] enabled
Dec 13 02:48:11.650651 kernel: ACPI: Core revision 20210730
Dec 13 02:48:11.650657 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Dec 13 02:48:11.650663 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:48:11.650668 kernel: x2apic enabled
Dec 13 02:48:11.650674 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:48:11.650680 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 02:48:11.650686 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Dec 13 02:48:11.650691 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Dec 13 02:48:11.650697 kernel: Disabled fast string operations
Dec 13 02:48:11.650703 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:48:11.650709 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:48:11.650715 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:48:11.650720 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 02:48:11.650726 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 02:48:11.650732 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 02:48:11.650737 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 02:48:11.650743 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:48:11.650749 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 02:48:11.650755 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 02:48:11.650760 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 02:48:11.650766 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 02:48:11.650772 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:48:11.650777 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 02:48:11.650783 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:48:11.650789 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:48:11.650795 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:48:11.650802 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:48:11.650807 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 02:48:11.650816 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 02:48:11.650822 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:48:11.650827 kernel: pid_max: default: 131072 minimum: 1024
Dec 13 02:48:11.650833 kernel: LSM: Security Framework initializing
Dec 13 02:48:11.650838 kernel: SELinux:  Initializing.
Dec 13 02:48:11.650844 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:48:11.650850 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:48:11.650857 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 02:48:11.650862 kernel: Performance Events: Skylake events, core PMU driver.
Dec 13 02:48:11.650868 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Dec 13 02:48:11.650874 kernel: core: CPUID marked event: 'instructions' unavailable
Dec 13 02:48:11.650898 kernel: core: CPUID marked event: 'bus cycles' unavailable
Dec 13 02:48:11.650907 kernel: core: CPUID marked event: 'cache references' unavailable
Dec 13 02:48:11.650912 kernel: core: CPUID marked event: 'cache misses' unavailable
Dec 13 02:48:11.650918 kernel: core: CPUID marked event: 'branch instructions' unavailable
Dec 13 02:48:11.650923 kernel: core: CPUID marked event: 'branch misses' unavailable
Dec 13 02:48:11.650931 kernel: ... version:                1
Dec 13 02:48:11.650937 kernel: ... bit width:              48
Dec 13 02:48:11.650942 kernel: ... generic registers:      4
Dec 13 02:48:11.650948 kernel: ... value mask:             0000ffffffffffff
Dec 13 02:48:11.650953 kernel: ... max period:             000000007fffffff
Dec 13 02:48:11.650960 kernel: ... fixed-purpose events:   0
Dec 13 02:48:11.650965 kernel: ... event mask:             000000000000000f
Dec 13 02:48:11.650971 kernel: signal: max sigframe size: 1776
Dec 13 02:48:11.650977 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:48:11.650983 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:48:11.650989 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:48:11.650994 kernel: x86: Booting SMP configuration:
Dec 13 02:48:11.651000 kernel: .... node  #0, CPUs:          #1
Dec 13 02:48:11.651005 kernel: Disabled fast string operations
Dec 13 02:48:11.651011 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Dec 13 02:48:11.651016 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 02:48:11.651022 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:48:11.651028 kernel: smpboot: Max logical packages: 128
Dec 13 02:48:11.651033 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Dec 13 02:48:11.651040 kernel: devtmpfs: initialized
Dec 13 02:48:11.651046 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:48:11.651051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Dec 13 02:48:11.651057 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:48:11.651063 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Dec 13 02:48:11.651068 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:48:11.651074 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:48:11.651079 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:48:11.651086 kernel: audit: type=2000 audit(1734058090.059:1): state=initialized audit_enabled=0 res=1
Dec 13 02:48:11.651092 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:48:11.651097 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:48:11.651103 kernel: cpuidle: using governor menu
Dec 13 02:48:11.651109 kernel: Simple Boot Flag at 0x36 set to 0x80
Dec 13 02:48:11.651114 kernel: ACPI: bus type PCI registered
Dec 13 02:48:11.651120 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:48:11.651125 kernel: dca service started, version 1.12.1
Dec 13 02:48:11.651131 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Dec 13 02:48:11.651136 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820
Dec 13 02:48:11.651143 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:48:11.651149 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:48:11.651154 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:48:11.651160 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:48:11.651165 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:48:11.651171 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:48:11.651176 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:48:11.651182 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:48:11.651188 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:48:11.651194 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:48:11.651200 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:48:11.651205 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 02:48:11.651211 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Dec 13 02:48:11.651217 kernel: ACPI: Interpreter enabled
Dec 13 02:48:11.651222 kernel: ACPI: PM: (supports S0 S1 S5)
Dec 13 02:48:11.651228 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:48:11.651233 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:48:11.651239 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Dec 13 02:48:11.651245 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Dec 13 02:48:11.651321 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:48:11.651370 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Dec 13 02:48:11.651416 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Dec 13 02:48:11.651424 kernel: PCI host bridge to bus 0000:00
Dec 13 02:48:11.651470 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:48:11.651513 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000cffff window]
Dec 13 02:48:11.651553 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window]
Dec 13 02:48:11.651592 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window]
Dec 13 02:48:11.651635 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window]
Dec 13 02:48:11.651674 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 02:48:11.651714 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 02:48:11.651753 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xfeff window]
Dec 13 02:48:11.651794 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Dec 13 02:48:11.651862 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Dec 13 02:48:11.651924 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Dec 13 02:48:11.651978 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Dec 13 02:48:11.652028 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Dec 13 02:48:11.652075 kernel: pci 0000:00:07.1: reg 0x20: [io  0x1060-0x106f]
Dec 13 02:48:11.652124 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Dec 13 02:48:11.652171 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Dec 13 02:48:11.652217 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Dec 13 02:48:11.652263 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Dec 13 02:48:11.652312 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Dec 13 02:48:11.652358 kernel: pci 0000:00:07.3: quirk: [io  0x1000-0x103f] claimed by PIIX4 ACPI
Dec 13 02:48:11.652404 kernel: pci 0000:00:07.3: quirk: [io  0x1040-0x104f] claimed by PIIX4 SMB
Dec 13 02:48:11.652456 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Dec 13 02:48:11.652503 kernel: pci 0000:00:07.7: reg 0x10: [io  0x1080-0x10bf]
Dec 13 02:48:11.652549 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Dec 13 02:48:11.652600 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Dec 13 02:48:11.652646 kernel: pci 0000:00:0f.0: reg 0x10: [io  0x1070-0x107f]
Dec 13 02:48:11.652692 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Dec 13 02:48:11.652737 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Dec 13 02:48:11.652785 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Dec 13 02:48:11.652831 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:48:11.652887 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Dec 13 02:48:11.652941 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.652988 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653037 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653086 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653138 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653184 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653235 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653282 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653332 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653381 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653431 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653477 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653526 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653573 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653623 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653670 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653722 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653769 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653819 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653865 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.653922 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.653970 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654020 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654067 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654116 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654163 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654211 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654260 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654309 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654355 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654404 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654452 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654503 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654552 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654601 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654647 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654696 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654742 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654793 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654849 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.654917 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.654966 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.655014 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.655060 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.655109 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.655156 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.655208 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.655256 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.655307 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.659747 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.659808 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.659858 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.659920 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.659968 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.660021 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.660068 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.660119 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.660166 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.660217 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.660264 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.660312 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.660358 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.660407 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Dec 13 02:48:11.660455 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.660503 kernel: pci_bus 0000:01: extended config space not accessible
Dec 13 02:48:11.660553 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 02:48:11.660602 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 02:48:11.660610 kernel: acpiphp: Slot [32] registered
Dec 13 02:48:11.660616 kernel: acpiphp: Slot [33] registered
Dec 13 02:48:11.660622 kernel: acpiphp: Slot [34] registered
Dec 13 02:48:11.660628 kernel: acpiphp: Slot [35] registered
Dec 13 02:48:11.660633 kernel: acpiphp: Slot [36] registered
Dec 13 02:48:11.660639 kernel: acpiphp: Slot [37] registered
Dec 13 02:48:11.660646 kernel: acpiphp: Slot [38] registered
Dec 13 02:48:11.660651 kernel: acpiphp: Slot [39] registered
Dec 13 02:48:11.660657 kernel: acpiphp: Slot [40] registered
Dec 13 02:48:11.660662 kernel: acpiphp: Slot [41] registered
Dec 13 02:48:11.660668 kernel: acpiphp: Slot [42] registered
Dec 13 02:48:11.660673 kernel: acpiphp: Slot [43] registered
Dec 13 02:48:11.660679 kernel: acpiphp: Slot [44] registered
Dec 13 02:48:11.660685 kernel: acpiphp: Slot [45] registered
Dec 13 02:48:11.660690 kernel: acpiphp: Slot [46] registered
Dec 13 02:48:11.660696 kernel: acpiphp: Slot [47] registered
Dec 13 02:48:11.660702 kernel: acpiphp: Slot [48] registered
Dec 13 02:48:11.660708 kernel: acpiphp: Slot [49] registered
Dec 13 02:48:11.660714 kernel: acpiphp: Slot [50] registered
Dec 13 02:48:11.660719 kernel: acpiphp: Slot [51] registered
Dec 13 02:48:11.660724 kernel: acpiphp: Slot [52] registered
Dec 13 02:48:11.660730 kernel: acpiphp: Slot [53] registered
Dec 13 02:48:11.660735 kernel: acpiphp: Slot [54] registered
Dec 13 02:48:11.660741 kernel: acpiphp: Slot [55] registered
Dec 13 02:48:11.660746 kernel: acpiphp: Slot [56] registered
Dec 13 02:48:11.660753 kernel: acpiphp: Slot [57] registered
Dec 13 02:48:11.660759 kernel: acpiphp: Slot [58] registered
Dec 13 02:48:11.660764 kernel: acpiphp: Slot [59] registered
Dec 13 02:48:11.660769 kernel: acpiphp: Slot [60] registered
Dec 13 02:48:11.660775 kernel: acpiphp: Slot [61] registered
Dec 13 02:48:11.660780 kernel: acpiphp: Slot [62] registered
Dec 13 02:48:11.660786 kernel: acpiphp: Slot [63] registered
Dec 13 02:48:11.660837 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Dec 13 02:48:11.660893 kernel: pci 0000:00:11.0:   bridge window [io  0x2000-0x3fff]
Dec 13 02:48:11.660946 kernel: pci 0000:00:11.0:   bridge window [mem 0xfd600000-0xfdffffff]
Dec 13 02:48:11.660992 kernel: pci 0000:00:11.0:   bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 02:48:11.661037 kernel: pci 0000:00:11.0:   bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Dec 13 02:48:11.661082 kernel: pci 0000:00:11.0:   bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode)
Dec 13 02:48:11.661127 kernel: pci 0000:00:11.0:   bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode)
Dec 13 02:48:11.661172 kernel: pci 0000:00:11.0:   bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode)
Dec 13 02:48:11.661217 kernel: pci 0000:00:11.0:   bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode)
Dec 13 02:48:11.661264 kernel: pci 0000:00:11.0:   bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Dec 13 02:48:11.661310 kernel: pci 0000:00:11.0:   bridge window [io  0x0000-0x0cf7 window] (subtractive decode)
Dec 13 02:48:11.661356 kernel: pci 0000:00:11.0:   bridge window [io  0x0d00-0xfeff window] (subtractive decode)
Dec 13 02:48:11.661408 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Dec 13 02:48:11.661456 kernel: pci 0000:03:00.0: reg 0x10: [io  0x4000-0x4007]
Dec 13 02:48:11.661504 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Dec 13 02:48:11.661552 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 02:48:11.661601 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 02:48:11.661649 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Dec 13 02:48:11.661695 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Dec 13 02:48:11.661741 kernel: pci 0000:00:15.0:   bridge window [io  0x4000-0x4fff]
Dec 13 02:48:11.661787 kernel: pci 0000:00:15.0:   bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 02:48:11.661845 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 02:48:11.661904 kernel: pci 0000:00:15.1:   bridge window [io  0x8000-0x8fff]
Dec 13 02:48:11.661953 kernel: pci 0000:00:15.1:   bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 02:48:11.662002 kernel: pci 0000:00:15.1:   bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 02:48:11.662048 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 02:48:11.662095 kernel: pci 0000:00:15.2:   bridge window [io  0xc000-0xcfff]
Dec 13 02:48:11.662141 kernel: pci 0000:00:15.2:   bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 02:48:11.662187 kernel: pci 0000:00:15.2:   bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 02:48:11.662233 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 02:48:11.662279 kernel: pci 0000:00:15.3:   bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 02:48:11.662327 kernel: pci 0000:00:15.3:   bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 02:48:11.662374 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 02:48:11.662420 kernel: pci 0000:00:15.4:   bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 02:48:11.662465 kernel: pci 0000:00:15.4:   bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 02:48:11.662511 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 02:48:11.662559 kernel: pci 0000:00:15.5:   bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 02:48:11.662604 kernel: pci 0000:00:15.5:   bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 02:48:11.662650 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 02:48:11.662696 kernel: pci 0000:00:15.6:   bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 02:48:11.662741 kernel: pci 0000:00:15.6:   bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 02:48:11.662787 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 02:48:11.662831 kernel: pci 0000:00:15.7:   bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 02:48:11.665946 kernel: pci 0000:00:15.7:   bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 02:48:11.666017 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Dec 13 02:48:11.666070 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Dec 13 02:48:11.666118 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Dec 13 02:48:11.666167 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Dec 13 02:48:11.668959 kernel: pci 0000:0b:00.0: reg 0x1c: [io  0x5000-0x500f]
Dec 13 02:48:11.669014 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Dec 13 02:48:11.669065 kernel: pci 0000:0b:00.0: supports D1 D2
Dec 13 02:48:11.669117 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 02:48:11.669165 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device.  You can enable it with 'pcie_aspm=force'
Dec 13 02:48:11.669213 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 02:48:11.669260 kernel: pci 0000:00:16.0:   bridge window [io  0x5000-0x5fff]
Dec 13 02:48:11.669306 kernel: pci 0000:00:16.0:   bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 02:48:11.669354 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 02:48:11.669400 kernel: pci 0000:00:16.1:   bridge window [io  0x9000-0x9fff]
Dec 13 02:48:11.669446 kernel: pci 0000:00:16.1:   bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 02:48:11.669504 kernel: pci 0000:00:16.1:   bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 02:48:11.669551 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 02:48:11.669597 kernel: pci 0000:00:16.2:   bridge window [io  0xd000-0xdfff]
Dec 13 02:48:11.669643 kernel: pci 0000:00:16.2:   bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 02:48:11.669689 kernel: pci 0000:00:16.2:   bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 02:48:11.669739 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 02:48:11.669786 kernel: pci 0000:00:16.3:   bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 02:48:11.669835 kernel: pci 0000:00:16.3:   bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 02:48:11.669888 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 02:48:11.669936 kernel: pci 0000:00:16.4:   bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 02:48:11.669982 kernel: pci 0000:00:16.4:   bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 02:48:11.670029 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 02:48:11.670075 kernel: pci 0000:00:16.5:   bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 02:48:11.670121 kernel: pci 0000:00:16.5:   bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 02:48:11.670168 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 02:48:11.670213 kernel: pci 0000:00:16.6:   bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 02:48:11.670261 kernel: pci 0000:00:16.6:   bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 02:48:11.670308 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 02:48:11.670353 kernel: pci 0000:00:16.7:   bridge window [mem 0xfb800000-0xfb8fffff]
Dec 13 02:48:11.670398 kernel: pci 0000:00:16.7:   bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 02:48:11.670445 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Dec 13 02:48:11.670490 kernel: pci 0000:00:17.0:   bridge window [io  0x6000-0x6fff]
Dec 13 02:48:11.670535 kernel: pci 0000:00:17.0:   bridge window [mem 0xfd300000-0xfd3fffff]
Dec 13 02:48:11.670580 kernel: pci 0000:00:17.0:   bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 02:48:11.670629 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Dec 13 02:48:11.670674 kernel: pci 0000:00:17.1:   bridge window [io  0xa000-0xafff]
Dec 13 02:48:11.670720 kernel: pci 0000:00:17.1:   bridge window [mem 0xfcf00000-0xfcffffff]
Dec 13 02:48:11.670765 kernel: pci 0000:00:17.1:   bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 02:48:11.670815 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Dec 13 02:48:11.670863 kernel: pci 0000:00:17.2:   bridge window [io  0xe000-0xefff]
Dec 13 02:48:11.670917 kernel: pci 0000:00:17.2:   bridge window [mem 0xfcb00000-0xfcbfffff]
Dec 13 02:48:11.670966 kernel: pci 0000:00:17.2:   bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 02:48:11.671014 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Dec 13 02:48:11.671059 kernel: pci 0000:00:17.3:   bridge window [mem 0xfc700000-0xfc7fffff]
Dec 13 02:48:11.671104 kernel: pci 0000:00:17.3:   bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 02:48:11.671151 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Dec 13 02:48:11.671196 kernel: pci 0000:00:17.4:   bridge window [mem 0xfc300000-0xfc3fffff]
Dec 13 02:48:11.671241 kernel: pci 0000:00:17.4:   bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 02:48:11.671288 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Dec 13 02:48:11.671335 kernel: pci 0000:00:17.5:   bridge window [mem 0xfbf00000-0xfbffffff]
Dec 13 02:48:11.671381 kernel: pci 0000:00:17.5:   bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 02:48:11.671427 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Dec 13 02:48:11.671472 kernel: pci 0000:00:17.6:   bridge window [mem 0xfbb00000-0xfbbfffff]
Dec 13 02:48:11.671518 kernel: pci 0000:00:17.6:   bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 02:48:11.671563 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Dec 13 02:48:11.671609 kernel: pci 0000:00:17.7:   bridge window [mem 0xfb700000-0xfb7fffff]
Dec 13 02:48:11.671654 kernel: pci 0000:00:17.7:   bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 02:48:11.671703 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Dec 13 02:48:11.671749 kernel: pci 0000:00:18.0:   bridge window [io  0x7000-0x7fff]
Dec 13 02:48:11.671795 kernel: pci 0000:00:18.0:   bridge window [mem 0xfd200000-0xfd2fffff]
Dec 13 02:48:11.671846 kernel: pci 0000:00:18.0:   bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 02:48:11.675819 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Dec 13 02:48:11.675876 kernel: pci 0000:00:18.1:   bridge window [io  0xb000-0xbfff]
Dec 13 02:48:11.675932 kernel: pci 0000:00:18.1:   bridge window [mem 0xfce00000-0xfcefffff]
Dec 13 02:48:11.675979 kernel: pci 0000:00:18.1:   bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 02:48:11.676031 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Dec 13 02:48:11.676079 kernel: pci 0000:00:18.2:   bridge window [mem 0xfca00000-0xfcafffff]
Dec 13 02:48:11.676125 kernel: pci 0000:00:18.2:   bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 02:48:11.676172 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Dec 13 02:48:11.676217 kernel: pci 0000:00:18.3:   bridge window [mem 0xfc600000-0xfc6fffff]
Dec 13 02:48:11.676262 kernel: pci 0000:00:18.3:   bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 02:48:11.676308 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Dec 13 02:48:11.676356 kernel: pci 0000:00:18.4:   bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 02:48:11.676401 kernel: pci 0000:00:18.4:   bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 02:48:11.676448 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 02:48:11.676492 kernel: pci 0000:00:18.5:   bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 02:48:11.676537 kernel: pci 0000:00:18.5:   bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 02:48:11.676583 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 02:48:11.676627 kernel: pci 0000:00:18.6:   bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 02:48:11.676672 kernel: pci 0000:00:18.6:   bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 02:48:11.676721 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 02:48:11.676765 kernel: pci 0000:00:18.7:   bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 02:48:11.676811 kernel: pci 0000:00:18.7:   bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 02:48:11.676819 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Dec 13 02:48:11.676825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Dec 13 02:48:11.676831 kernel: ACPI: PCI: Interrupt link LNKB disabled
Dec 13 02:48:11.676837 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:48:11.676843 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Dec 13 02:48:11.676848 kernel: iommu: Default domain type: Translated 
Dec 13 02:48:11.676855 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 13 02:48:11.676909 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Dec 13 02:48:11.676955 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:48:11.677001 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Dec 13 02:48:11.677009 kernel: vgaarb: loaded
Dec 13 02:48:11.677015 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:48:11.677021 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 02:48:11.677026 kernel: PTP clock support registered
Dec 13 02:48:11.677032 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:48:11.677040 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:48:11.677046 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Dec 13 02:48:11.677052 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Dec 13 02:48:11.677057 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Dec 13 02:48:11.677063 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Dec 13 02:48:11.677069 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 02:48:11.677074 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:48:11.677080 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:48:11.677086 kernel: pnp: PnP ACPI init
Dec 13 02:48:11.677135 kernel: system 00:00: [io  0x1000-0x103f] has been reserved
Dec 13 02:48:11.677179 kernel: system 00:00: [io  0x1040-0x104f] has been reserved
Dec 13 02:48:11.677221 kernel: system 00:00: [io  0x0cf0-0x0cf1] has been reserved
Dec 13 02:48:11.677264 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Dec 13 02:48:11.677311 kernel: pnp 00:06: [dma 2]
Dec 13 02:48:11.677355 kernel: system 00:07: [io  0xfce0-0xfcff] has been reserved
Dec 13 02:48:11.677400 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Dec 13 02:48:11.677442 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Dec 13 02:48:11.677450 kernel: pnp: PnP ACPI: found 8 devices
Dec 13 02:48:11.677456 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:48:11.677462 kernel: NET: Registered PF_INET protocol family
Dec 13 02:48:11.677467 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:48:11.677473 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:48:11.677479 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:48:11.677486 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:48:11.677492 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:48:11.677498 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:48:11.677504 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:48:11.677510 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:48:11.677516 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:48:11.677522 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:48:11.677569 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Dec 13 02:48:11.677633 kernel: pci 0000:00:15.3: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 02:48:11.677685 kernel: pci 0000:00:15.4: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 02:48:11.677732 kernel: pci 0000:00:15.5: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 02:48:11.677780 kernel: pci 0000:00:15.6: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 02:48:11.677832 kernel: pci 0000:00:15.7: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Dec 13 02:48:11.677885 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Dec 13 02:48:11.677936 kernel: pci 0000:00:16.3: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Dec 13 02:48:11.677985 kernel: pci 0000:00:16.4: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Dec 13 02:48:11.678032 kernel: pci 0000:00:16.5: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Dec 13 02:48:11.678078 kernel: pci 0000:00:16.6: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Dec 13 02:48:11.678124 kernel: pci 0000:00:16.7: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Dec 13 02:48:11.678171 kernel: pci 0000:00:17.3: bridge window [io  0x1000-0x0fff] to [bus 16] add_size 1000
Dec 13 02:48:11.678220 kernel: pci 0000:00:17.4: bridge window [io  0x1000-0x0fff] to [bus 17] add_size 1000
Dec 13 02:48:11.678266 kernel: pci 0000:00:17.5: bridge window [io  0x1000-0x0fff] to [bus 18] add_size 1000
Dec 13 02:48:11.678312 kernel: pci 0000:00:17.6: bridge window [io  0x1000-0x0fff] to [bus 19] add_size 1000
Dec 13 02:48:11.678358 kernel: pci 0000:00:17.7: bridge window [io  0x1000-0x0fff] to [bus 1a] add_size 1000
Dec 13 02:48:11.678406 kernel: pci 0000:00:18.2: bridge window [io  0x1000-0x0fff] to [bus 1d] add_size 1000
Dec 13 02:48:11.678451 kernel: pci 0000:00:18.3: bridge window [io  0x1000-0x0fff] to [bus 1e] add_size 1000
Dec 13 02:48:11.678499 kernel: pci 0000:00:18.4: bridge window [io  0x1000-0x0fff] to [bus 1f] add_size 1000
Dec 13 02:48:11.678546 kernel: pci 0000:00:18.5: bridge window [io  0x1000-0x0fff] to [bus 20] add_size 1000
Dec 13 02:48:11.678593 kernel: pci 0000:00:18.6: bridge window [io  0x1000-0x0fff] to [bus 21] add_size 1000
Dec 13 02:48:11.678640 kernel: pci 0000:00:18.7: bridge window [io  0x1000-0x0fff] to [bus 22] add_size 1000
Dec 13 02:48:11.678687 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 02:48:11.678734 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 02:48:11.678782 kernel: pci 0000:00:15.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.678828 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.678874 kernel: pci 0000:00:15.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.678936 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.678985 kernel: pci 0000:00:15.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.679032 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.679078 kernel: pci 0000:00:15.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.679124 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.679173 kernel: pci 0000:00:15.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.679220 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.679265 kernel: pci 0000:00:16.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.679311 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.679357 kernel: pci 0000:00:16.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.679403 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.679449 kernel: pci 0000:00:16.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.681980 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682040 kernel: pci 0000:00:16.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682090 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682138 kernel: pci 0000:00:16.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682186 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682232 kernel: pci 0000:00:17.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682279 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682326 kernel: pci 0000:00:17.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682373 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682422 kernel: pci 0000:00:17.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682467 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682513 kernel: pci 0000:00:17.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682559 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682606 kernel: pci 0000:00:17.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682651 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682697 kernel: pci 0000:00:18.2: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682742 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682789 kernel: pci 0000:00:18.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682836 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682890 kernel: pci 0000:00:18.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.682937 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.682983 kernel: pci 0000:00:18.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683029 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683075 kernel: pci 0000:00:18.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683121 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683167 kernel: pci 0000:00:18.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683215 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683261 kernel: pci 0000:00:18.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683306 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683351 kernel: pci 0000:00:18.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683397 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683443 kernel: pci 0000:00:18.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683489 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683534 kernel: pci 0000:00:18.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683581 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683628 kernel: pci 0000:00:18.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683674 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683720 kernel: pci 0000:00:18.2: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683766 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683812 kernel: pci 0000:00:17.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683858 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.683915 kernel: pci 0000:00:17.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.683963 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684010 kernel: pci 0000:00:17.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684058 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684103 kernel: pci 0000:00:17.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684149 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684195 kernel: pci 0000:00:17.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684241 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684286 kernel: pci 0000:00:16.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684332 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684378 kernel: pci 0000:00:16.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684424 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684470 kernel: pci 0000:00:16.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684518 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684565 kernel: pci 0000:00:16.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684611 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684657 kernel: pci 0000:00:16.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684705 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684751 kernel: pci 0000:00:15.7: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684797 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684847 kernel: pci 0000:00:15.6: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684900 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.684950 kernel: pci 0000:00:15.5: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.684997 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.685043 kernel: pci 0000:00:15.4: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.685089 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.685136 kernel: pci 0000:00:15.3: BAR 13: no space for [io  size 0x1000]
Dec 13 02:48:11.685183 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io  size 0x1000]
Dec 13 02:48:11.685230 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 02:48:11.685277 kernel: pci 0000:00:11.0: PCI bridge to [bus 02]
Dec 13 02:48:11.685324 kernel: pci 0000:00:11.0:   bridge window [io  0x2000-0x3fff]
Dec 13 02:48:11.685372 kernel: pci 0000:00:11.0:   bridge window [mem 0xfd600000-0xfdffffff]
Dec 13 02:48:11.685419 kernel: pci 0000:00:11.0:   bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 02:48:11.685469 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref]
Dec 13 02:48:11.685517 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Dec 13 02:48:11.685564 kernel: pci 0000:00:15.0:   bridge window [io  0x4000-0x4fff]
Dec 13 02:48:11.685610 kernel: pci 0000:00:15.0:   bridge window [mem 0xfd500000-0xfd5fffff]
Dec 13 02:48:11.685656 kernel: pci 0000:00:15.0:   bridge window [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 02:48:11.685704 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Dec 13 02:48:11.685750 kernel: pci 0000:00:15.1:   bridge window [io  0x8000-0x8fff]
Dec 13 02:48:11.685798 kernel: pci 0000:00:15.1:   bridge window [mem 0xfd100000-0xfd1fffff]
Dec 13 02:48:11.685844 kernel: pci 0000:00:15.1:   bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 02:48:11.685897 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Dec 13 02:48:11.685944 kernel: pci 0000:00:15.2:   bridge window [io  0xc000-0xcfff]
Dec 13 02:48:11.685989 kernel: pci 0000:00:15.2:   bridge window [mem 0xfcd00000-0xfcdfffff]
Dec 13 02:48:11.686035 kernel: pci 0000:00:15.2:   bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 02:48:11.686081 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Dec 13 02:48:11.686127 kernel: pci 0000:00:15.3:   bridge window [mem 0xfc900000-0xfc9fffff]
Dec 13 02:48:11.686173 kernel: pci 0000:00:15.3:   bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 02:48:11.686223 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Dec 13 02:48:11.686270 kernel: pci 0000:00:15.4:   bridge window [mem 0xfc500000-0xfc5fffff]
Dec 13 02:48:11.686316 kernel: pci 0000:00:15.4:   bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 02:48:11.686361 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Dec 13 02:48:11.686408 kernel: pci 0000:00:15.5:   bridge window [mem 0xfc100000-0xfc1fffff]
Dec 13 02:48:11.686453 kernel: pci 0000:00:15.5:   bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 02:48:11.686502 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Dec 13 02:48:11.686548 kernel: pci 0000:00:15.6:   bridge window [mem 0xfbd00000-0xfbdfffff]
Dec 13 02:48:11.686594 kernel: pci 0000:00:15.6:   bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 02:48:11.686641 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Dec 13 02:48:11.686687 kernel: pci 0000:00:15.7:   bridge window [mem 0xfb900000-0xfb9fffff]
Dec 13 02:48:11.686732 kernel: pci 0000:00:15.7:   bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 02:48:11.686781 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref]
Dec 13 02:48:11.686828 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Dec 13 02:48:11.686875 kernel: pci 0000:00:16.0:   bridge window [io  0x5000-0x5fff]
Dec 13 02:48:11.686936 kernel: pci 0000:00:16.0:   bridge window [mem 0xfd400000-0xfd4fffff]
Dec 13 02:48:11.686984 kernel: pci 0000:00:16.0:   bridge window [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 02:48:11.687031 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Dec 13 02:48:11.687077 kernel: pci 0000:00:16.1:   bridge window [io  0x9000-0x9fff]
Dec 13 02:48:11.687124 kernel: pci 0000:00:16.1:   bridge window [mem 0xfd000000-0xfd0fffff]
Dec 13 02:48:11.687170 kernel: pci 0000:00:16.1:   bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 02:48:11.687218 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Dec 13 02:48:11.687265 kernel: pci 0000:00:16.2:   bridge window [io  0xd000-0xdfff]
Dec 13 02:48:11.687310 kernel: pci 0000:00:16.2:   bridge window [mem 0xfcc00000-0xfccfffff]
Dec 13 02:48:11.687357 kernel: pci 0000:00:16.2:   bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 02:48:11.687405 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Dec 13 02:48:11.687450 kernel: pci 0000:00:16.3:   bridge window [mem 0xfc800000-0xfc8fffff]
Dec 13 02:48:11.687496 kernel: pci 0000:00:16.3:   bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 02:48:11.687541 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Dec 13 02:48:11.687587 kernel: pci 0000:00:16.4:   bridge window [mem 0xfc400000-0xfc4fffff]
Dec 13 02:48:11.687633 kernel: pci 0000:00:16.4:   bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 02:48:11.687680 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Dec 13 02:48:11.687725 kernel: pci 0000:00:16.5:   bridge window [mem 0xfc000000-0xfc0fffff]
Dec 13 02:48:11.687772 kernel: pci 0000:00:16.5:   bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 02:48:11.687824 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Dec 13 02:48:11.687870 kernel: pci 0000:00:16.6:   bridge window [mem 0xfbc00000-0xfbcfffff]
Dec 13 02:48:11.697412 kernel: pci 0000:00:16.6:   bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 02:48:11.697472 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Dec 13 02:48:11.697522 kernel: pci 0000:00:16.7:   bridge window [mem 0xfb800000-0xfb8fffff]
Dec 13 02:48:11.697569 kernel: pci 0000:00:16.7:   bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 02:48:11.697618 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Dec 13 02:48:11.697664 kernel: pci 0000:00:17.0:   bridge window [io  0x6000-0x6fff]
Dec 13 02:48:11.697711 kernel: pci 0000:00:17.0:   bridge window [mem 0xfd300000-0xfd3fffff]
Dec 13 02:48:11.697758 kernel: pci 0000:00:17.0:   bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 02:48:11.697812 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Dec 13 02:48:11.697860 kernel: pci 0000:00:17.1:   bridge window [io  0xa000-0xafff]
Dec 13 02:48:11.697921 kernel: pci 0000:00:17.1:   bridge window [mem 0xfcf00000-0xfcffffff]
Dec 13 02:48:11.697968 kernel: pci 0000:00:17.1:   bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 02:48:11.698016 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Dec 13 02:48:11.698062 kernel: pci 0000:00:17.2:   bridge window [io  0xe000-0xefff]
Dec 13 02:48:11.698108 kernel: pci 0000:00:17.2:   bridge window [mem 0xfcb00000-0xfcbfffff]
Dec 13 02:48:11.698153 kernel: pci 0000:00:17.2:   bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 02:48:11.698200 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Dec 13 02:48:11.698248 kernel: pci 0000:00:17.3:   bridge window [mem 0xfc700000-0xfc7fffff]
Dec 13 02:48:11.698296 kernel: pci 0000:00:17.3:   bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 02:48:11.698341 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Dec 13 02:48:11.698388 kernel: pci 0000:00:17.4:   bridge window [mem 0xfc300000-0xfc3fffff]
Dec 13 02:48:11.698435 kernel: pci 0000:00:17.4:   bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 02:48:11.698481 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Dec 13 02:48:11.698526 kernel: pci 0000:00:17.5:   bridge window [mem 0xfbf00000-0xfbffffff]
Dec 13 02:48:11.698573 kernel: pci 0000:00:17.5:   bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 02:48:11.698619 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Dec 13 02:48:11.698667 kernel: pci 0000:00:17.6:   bridge window [mem 0xfbb00000-0xfbbfffff]
Dec 13 02:48:11.698712 kernel: pci 0000:00:17.6:   bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 02:48:11.698757 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Dec 13 02:48:11.698803 kernel: pci 0000:00:17.7:   bridge window [mem 0xfb700000-0xfb7fffff]
Dec 13 02:48:11.698849 kernel: pci 0000:00:17.7:   bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 02:48:11.698903 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Dec 13 02:48:11.698950 kernel: pci 0000:00:18.0:   bridge window [io  0x7000-0x7fff]
Dec 13 02:48:11.698996 kernel: pci 0000:00:18.0:   bridge window [mem 0xfd200000-0xfd2fffff]
Dec 13 02:48:11.699041 kernel: pci 0000:00:18.0:   bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 02:48:11.699089 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Dec 13 02:48:11.699136 kernel: pci 0000:00:18.1:   bridge window [io  0xb000-0xbfff]
Dec 13 02:48:11.699182 kernel: pci 0000:00:18.1:   bridge window [mem 0xfce00000-0xfcefffff]
Dec 13 02:48:11.699228 kernel: pci 0000:00:18.1:   bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 02:48:11.699274 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Dec 13 02:48:11.699320 kernel: pci 0000:00:18.2:   bridge window [mem 0xfca00000-0xfcafffff]
Dec 13 02:48:11.699366 kernel: pci 0000:00:18.2:   bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 02:48:11.699412 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Dec 13 02:48:11.699458 kernel: pci 0000:00:18.3:   bridge window [mem 0xfc600000-0xfc6fffff]
Dec 13 02:48:11.699502 kernel: pci 0000:00:18.3:   bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 02:48:11.699551 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Dec 13 02:48:11.699598 kernel: pci 0000:00:18.4:   bridge window [mem 0xfc200000-0xfc2fffff]
Dec 13 02:48:11.699644 kernel: pci 0000:00:18.4:   bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 02:48:11.699691 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Dec 13 02:48:11.699738 kernel: pci 0000:00:18.5:   bridge window [mem 0xfbe00000-0xfbefffff]
Dec 13 02:48:11.699783 kernel: pci 0000:00:18.5:   bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 02:48:11.699830 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Dec 13 02:48:11.699875 kernel: pci 0000:00:18.6:   bridge window [mem 0xfba00000-0xfbafffff]
Dec 13 02:48:11.699940 kernel: pci 0000:00:18.6:   bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 02:48:11.699988 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Dec 13 02:48:11.700038 kernel: pci 0000:00:18.7:   bridge window [mem 0xfb600000-0xfb6fffff]
Dec 13 02:48:11.700084 kernel: pci 0000:00:18.7:   bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 02:48:11.700131 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window]
Dec 13 02:48:11.700172 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window]
Dec 13 02:48:11.700212 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window]
Dec 13 02:48:11.700254 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window]
Dec 13 02:48:11.700294 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window]
Dec 13 02:48:11.700337 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window]
Dec 13 02:48:11.700378 kernel: pci_bus 0000:00: resource 10 [io  0x0000-0x0cf7 window]
Dec 13 02:48:11.700418 kernel: pci_bus 0000:00: resource 11 [io  0x0d00-0xfeff window]
Dec 13 02:48:11.700464 kernel: pci_bus 0000:02: resource 0 [io  0x2000-0x3fff]
Dec 13 02:48:11.700507 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff]
Dec 13 02:48:11.700550 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref]
Dec 13 02:48:11.700592 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window]
Dec 13 02:48:11.700635 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window]
Dec 13 02:48:11.700679 kernel: pci_bus 0000:02: resource 6 [mem 0x000d0000-0x000d3fff window]
Dec 13 02:48:11.700722 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window]
Dec 13 02:48:11.700764 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window]
Dec 13 02:48:11.700806 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window]
Dec 13 02:48:11.700853 kernel: pci_bus 0000:02: resource 10 [io  0x0000-0x0cf7 window]
Dec 13 02:48:11.700923 kernel: pci_bus 0000:02: resource 11 [io  0x0d00-0xfeff window]
Dec 13 02:48:11.700974 kernel: pci_bus 0000:03: resource 0 [io  0x4000-0x4fff]
Dec 13 02:48:11.701019 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff]
Dec 13 02:48:11.701060 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref]
Dec 13 02:48:11.701106 kernel: pci_bus 0000:04: resource 0 [io  0x8000-0x8fff]
Dec 13 02:48:11.701148 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff]
Dec 13 02:48:11.701189 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref]
Dec 13 02:48:11.701236 kernel: pci_bus 0000:05: resource 0 [io  0xc000-0xcfff]
Dec 13 02:48:11.701278 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff]
Dec 13 02:48:11.701322 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref]
Dec 13 02:48:11.701368 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff]
Dec 13 02:48:11.701411 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref]
Dec 13 02:48:11.701459 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff]
Dec 13 02:48:11.701509 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref]
Dec 13 02:48:11.701559 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff]
Dec 13 02:48:11.701604 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref]
Dec 13 02:48:11.701651 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff]
Dec 13 02:48:11.701695 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref]
Dec 13 02:48:11.701744 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff]
Dec 13 02:48:11.701788 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref]
Dec 13 02:48:11.701839 kernel: pci_bus 0000:0b: resource 0 [io  0x5000-0x5fff]
Dec 13 02:48:11.701899 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff]
Dec 13 02:48:11.701944 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref]
Dec 13 02:48:11.701992 kernel: pci_bus 0000:0c: resource 0 [io  0x9000-0x9fff]
Dec 13 02:48:11.702036 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff]
Dec 13 02:48:11.702080 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref]
Dec 13 02:48:11.702126 kernel: pci_bus 0000:0d: resource 0 [io  0xd000-0xdfff]
Dec 13 02:48:11.702170 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff]
Dec 13 02:48:11.702215 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref]
Dec 13 02:48:11.702261 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff]
Dec 13 02:48:11.702304 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref]
Dec 13 02:48:11.702352 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff]
Dec 13 02:48:11.702395 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref]
Dec 13 02:48:11.702444 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff]
Dec 13 02:48:11.702490 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref]
Dec 13 02:48:11.702536 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff]
Dec 13 02:48:11.702579 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref]
Dec 13 02:48:11.702624 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff]
Dec 13 02:48:11.702667 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref]
Dec 13 02:48:11.702714 kernel: pci_bus 0000:13: resource 0 [io  0x6000-0x6fff]
Dec 13 02:48:11.702760 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff]
Dec 13 02:48:11.702802 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref]
Dec 13 02:48:11.702848 kernel: pci_bus 0000:14: resource 0 [io  0xa000-0xafff]
Dec 13 02:48:11.704920 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff]
Dec 13 02:48:11.704973 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref]
Dec 13 02:48:11.705022 kernel: pci_bus 0000:15: resource 0 [io  0xe000-0xefff]
Dec 13 02:48:11.705067 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff]
Dec 13 02:48:11.705112 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref]
Dec 13 02:48:11.705162 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff]
Dec 13 02:48:11.705206 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref]
Dec 13 02:48:11.705252 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff]
Dec 13 02:48:11.705296 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref]
Dec 13 02:48:11.705342 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff]
Dec 13 02:48:11.705387 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref]
Dec 13 02:48:11.705436 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff]
Dec 13 02:48:11.705480 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref]
Dec 13 02:48:11.705526 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff]
Dec 13 02:48:11.705569 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref]
Dec 13 02:48:11.705616 kernel: pci_bus 0000:1b: resource 0 [io  0x7000-0x7fff]
Dec 13 02:48:11.705661 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff]
Dec 13 02:48:11.705704 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref]
Dec 13 02:48:11.705752 kernel: pci_bus 0000:1c: resource 0 [io  0xb000-0xbfff]
Dec 13 02:48:11.705796 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff]
Dec 13 02:48:11.705838 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref]
Dec 13 02:48:11.706906 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff]
Dec 13 02:48:11.706962 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref]
Dec 13 02:48:11.707014 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff]
Dec 13 02:48:11.707059 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref]
Dec 13 02:48:11.707104 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff]
Dec 13 02:48:11.707148 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref]
Dec 13 02:48:11.707194 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff]
Dec 13 02:48:11.707237 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref]
Dec 13 02:48:11.707285 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff]
Dec 13 02:48:11.707329 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref]
Dec 13 02:48:11.707375 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff]
Dec 13 02:48:11.707419 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref]
Dec 13 02:48:11.707470 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:48:11.707479 kernel: PCI: CLS 32 bytes, default 64
Dec 13 02:48:11.707488 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:48:11.707494 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Dec 13 02:48:11.707500 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:48:11.707506 kernel: Initialise system trusted keyrings
Dec 13 02:48:11.707512 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:48:11.707519 kernel: Key type asymmetric registered
Dec 13 02:48:11.707525 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:48:11.707530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:48:11.707536 kernel: io scheduler mq-deadline registered
Dec 13 02:48:11.707543 kernel: io scheduler kyber registered
Dec 13 02:48:11.707550 kernel: io scheduler bfq registered
Dec 13 02:48:11.707599 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24
Dec 13 02:48:11.707647 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.707696 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Dec 13 02:48:11.707743 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.707791 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Dec 13 02:48:11.707842 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.708920 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Dec 13 02:48:11.708979 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709031 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Dec 13 02:48:11.709080 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709128 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Dec 13 02:48:11.709177 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709227 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Dec 13 02:48:11.709275 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709323 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Dec 13 02:48:11.709371 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709417 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Dec 13 02:48:11.709466 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709512 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Dec 13 02:48:11.709558 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709604 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Dec 13 02:48:11.709651 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709698 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Dec 13 02:48:11.709745 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.709794 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Dec 13 02:48:11.709840 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.710910 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Dec 13 02:48:11.710970 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711022 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Dec 13 02:48:11.711074 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711121 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Dec 13 02:48:11.711168 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711216 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Dec 13 02:48:11.711262 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711309 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Dec 13 02:48:11.711356 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711403 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Dec 13 02:48:11.711448 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711494 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Dec 13 02:48:11.711540 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711586 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Dec 13 02:48:11.711635 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711682 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Dec 13 02:48:11.711728 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711774 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Dec 13 02:48:11.711827 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.711875 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Dec 13 02:48:11.712960 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713023 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Dec 13 02:48:11.713074 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713122 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Dec 13 02:48:11.713186 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713235 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Dec 13 02:48:11.713284 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713331 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Dec 13 02:48:11.713377 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713423 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Dec 13 02:48:11.713469 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713519 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Dec 13 02:48:11.713565 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713612 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Dec 13 02:48:11.713659 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713706 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Dec 13 02:48:11.713752 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Dec 13 02:48:11.713763 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:48:11.713769 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:48:11.713776 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:48:11.713782 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Dec 13 02:48:11.713789 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:48:11.713795 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:48:11.713845 kernel: rtc_cmos 00:01: registered as rtc0
Dec 13 02:48:11.714918 kernel: rtc_cmos 00:01: setting system clock to 2024-12-13T02:48:11 UTC (1734058091)
Dec 13 02:48:11.714969 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Dec 13 02:48:11.714978 kernel: fail to initialize ptp_kvm
Dec 13 02:48:11.714985 kernel: intel_pstate: CPU model not supported
Dec 13 02:48:11.714991 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:48:11.714997 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:48:11.715003 kernel: Segment Routing with IPv6
Dec 13 02:48:11.715011 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:48:11.715017 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:48:11.715025 kernel: Key type dns_resolver registered
Dec 13 02:48:11.715031 kernel: IPI shorthand broadcast: enabled
Dec 13 02:48:11.715037 kernel: sched_clock: Marking stable (838396261, 223083651)->(1128565306, -67085394)
Dec 13 02:48:11.715043 kernel: registered taskstats version 1
Dec 13 02:48:11.715049 kernel: Loading compiled-in X.509 certificates
Dec 13 02:48:11.715055 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:48:11.715061 kernel: Key type .fscrypt registered
Dec 13 02:48:11.715067 kernel: Key type fscrypt-provisioning registered
Dec 13 02:48:11.715073 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:48:11.715080 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:48:11.715086 kernel: ima: No architecture policies found
Dec 13 02:48:11.715092 kernel: clk: Disabling unused clocks
Dec 13 02:48:11.715099 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:48:11.715105 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:48:11.715111 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:48:11.715118 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:48:11.715124 kernel: Run /init as init process
Dec 13 02:48:11.715130 kernel:   with arguments:
Dec 13 02:48:11.715137 kernel:     /init
Dec 13 02:48:11.715143 kernel:   with environment:
Dec 13 02:48:11.715149 kernel:     HOME=/
Dec 13 02:48:11.715154 kernel:     TERM=linux
Dec 13 02:48:11.715160 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:48:11.715168 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:48:11.715175 systemd[1]: Detected virtualization vmware.
Dec 13 02:48:11.715182 systemd[1]: Detected architecture x86-64.
Dec 13 02:48:11.715189 systemd[1]: Running in initrd.
Dec 13 02:48:11.715195 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:48:11.715202 systemd[1]: Hostname set to <localhost>.
Dec 13 02:48:11.715208 systemd[1]: Initializing machine ID from random generator.
Dec 13 02:48:11.715214 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:48:11.715220 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:48:11.715226 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:48:11.715232 systemd[1]: Reached target paths.target.
Dec 13 02:48:11.715239 systemd[1]: Reached target slices.target.
Dec 13 02:48:11.715245 systemd[1]: Reached target swap.target.
Dec 13 02:48:11.715252 systemd[1]: Reached target timers.target.
Dec 13 02:48:11.715259 systemd[1]: Listening on iscsid.socket.
Dec 13 02:48:11.715265 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:48:11.715271 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:48:11.715277 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:48:11.715284 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:48:11.715291 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:48:11.715297 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:48:11.715303 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:48:11.715309 systemd[1]: Reached target sockets.target.
Dec 13 02:48:11.715316 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:48:11.715322 systemd[1]: Finished network-cleanup.service.
Dec 13 02:48:11.715328 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:48:11.715334 systemd[1]: Starting systemd-journald.service...
Dec 13 02:48:11.715342 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:48:11.715348 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:48:11.715354 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:48:11.715361 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:48:11.715367 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:48:11.715373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:48:11.715380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:48:11.715386 kernel: audit: type=1130 audit(1734058091.655:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.715392 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:48:11.715401 kernel: audit: type=1130 audit(1734058091.659:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.715407 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:48:11.715413 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:48:11.715419 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:48:11.715425 kernel: audit: type=1130 audit(1734058091.677:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.715431 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:48:11.715438 systemd[1]: Started systemd-resolved.service.
Dec 13 02:48:11.715444 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:48:11.715452 kernel: Bridge firewalling registered
Dec 13 02:48:11.715458 kernel: audit: type=1130 audit(1734058091.692:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.715467 systemd-journald[216]: Journal started
Dec 13 02:48:11.715499 systemd-journald[216]: Runtime Journal (/run/log/journal/689b970c052a46b0944c45c9d5256979) is 4.8M, max 38.8M, 34.0M free.
Dec 13 02:48:11.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.644797 systemd-modules-load[217]: Inserted module 'overlay'
Dec 13 02:48:11.718772 systemd[1]: Started systemd-journald.service.
Dec 13 02:48:11.688780 systemd-resolved[218]: Positive Trust Anchors:
Dec 13 02:48:11.723204 kernel: audit: type=1130 audit(1734058091.716:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.723216 kernel: SCSI subsystem initialized
Dec 13 02:48:11.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.688785 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:48:11.688805 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:48:11.692631 systemd-resolved[218]: Defaulting to hostname 'linux'.
Dec 13 02:48:11.702957 systemd-modules-load[217]: Inserted module 'br_netfilter'
Dec 13 02:48:11.725323 dracut-cmdline[232]: dracut-dracut-053
Dec 13 02:48:11.725323 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:48:11.733289 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:48:11.733307 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:48:11.734353 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:48:11.736258 systemd-modules-load[217]: Inserted module 'dm_multipath'
Dec 13 02:48:11.736674 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:48:11.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.739335 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:48:11.740222 kernel: audit: type=1130 audit(1734058091.734:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.740240 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:48:11.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.746065 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:48:11.748892 kernel: audit: type=1130 audit(1734058091.744:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.755890 kernel: iscsi: registered transport (tcp)
Dec 13 02:48:11.771291 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:48:11.771334 kernel: QLogic iSCSI HBA Driver
Dec 13 02:48:11.787794 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:48:11.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.790907 kernel: audit: type=1130 audit(1734058091.786:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:11.788429 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:48:11.826910 kernel: raid6: avx2x4   gen() 46173 MB/s
Dec 13 02:48:11.842899 kernel: raid6: avx2x4   xor() 20100 MB/s
Dec 13 02:48:11.859894 kernel: raid6: avx2x2   gen() 51089 MB/s
Dec 13 02:48:11.876902 kernel: raid6: avx2x2   xor() 30581 MB/s
Dec 13 02:48:11.893896 kernel: raid6: avx2x1   gen() 42684 MB/s
Dec 13 02:48:11.910894 kernel: raid6: avx2x1   xor() 27041 MB/s
Dec 13 02:48:11.927925 kernel: raid6: sse2x4   gen() 20929 MB/s
Dec 13 02:48:11.944913 kernel: raid6: sse2x4   xor() 11841 MB/s
Dec 13 02:48:11.961897 kernel: raid6: sse2x2   gen() 21462 MB/s
Dec 13 02:48:11.978896 kernel: raid6: sse2x2   xor() 13258 MB/s
Dec 13 02:48:11.995896 kernel: raid6: sse2x1   gen() 18147 MB/s
Dec 13 02:48:12.013080 kernel: raid6: sse2x1   xor()  8874 MB/s
Dec 13 02:48:12.013126 kernel: raid6: using algorithm avx2x2 gen() 51089 MB/s
Dec 13 02:48:12.013149 kernel: raid6: .... xor() 30581 MB/s, rmw enabled
Dec 13 02:48:12.014286 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 02:48:12.022903 kernel: xor: automatically using best checksumming function   avx       
Dec 13 02:48:12.083900 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:48:12.088904 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:48:12.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:12.089568 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:48:12.087000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:48:12.087000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:48:12.092903 kernel: audit: type=1130 audit(1734058092.087:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:12.100052 systemd-udevd[415]: Using default interface naming scheme 'v252'.
Dec 13 02:48:12.102722 systemd[1]: Started systemd-udevd.service.
Dec 13 02:48:12.103258 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:48:12.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:12.110785 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Dec 13 02:48:12.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:12.126681 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:48:12.127215 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:48:12.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:12.186281 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:48:12.241991 kernel: libata version 3.00 loaded.
Dec 13 02:48:12.242024 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Dec 13 02:48:12.243928 kernel: ata_piix 0000:00:07.1: version 2.13
Dec 13 02:48:12.251592 kernel: vmw_pvscsi: using 64bit dma
Dec 13 02:48:12.251603 kernel: vmw_pvscsi: max_id: 16
Dec 13 02:48:12.251610 kernel: vmw_pvscsi: setting ring_pages to 8
Dec 13 02:48:12.251617 kernel: scsi host0: ata_piix
Dec 13 02:48:12.251680 kernel: scsi host1: ata_piix
Dec 13 02:48:12.251735 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Dec 13 02:48:12.251743 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Dec 13 02:48:12.253145 kernel: vmw_pvscsi: enabling reqCallThreshold
Dec 13 02:48:12.253160 kernel: vmw_pvscsi: driver-based request coalescing enabled
Dec 13 02:48:12.253168 kernel: vmw_pvscsi: using MSI-X
Dec 13 02:48:12.254478 kernel: scsi host2: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Dec 13 02:48:12.255220 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #2
Dec 13 02:48:12.257746 kernel: scsi 2:0:0:0: Direct-Access     VMware   Virtual disk     2.0  PQ: 0 ANSI: 6
Dec 13 02:48:12.264311 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI
Dec 13 02:48:12.264333 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Dec 13 02:48:12.265774 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Dec 13 02:48:12.280904 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:48:12.416898 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Dec 13 02:48:12.422931 kernel: scsi 1:0:0:0: CD-ROM            NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Dec 13 02:48:12.427895 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Dec 13 02:48:12.430947 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:48:12.430966 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:48:12.443895 kernel: sd 2:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Dec 13 02:48:12.484924 kernel: sd 2:0:0:0: [sda] Write Protect is off
Dec 13 02:48:12.485005 kernel: sd 2:0:0:0: [sda] Mode Sense: 31 00 00 00
Dec 13 02:48:12.485065 kernel: sd 2:0:0:0: [sda] Cache data unavailable
Dec 13 02:48:12.485122 kernel: sd 2:0:0:0: [sda] Assuming drive cache: write through
Dec 13 02:48:12.485179 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Dec 13 02:48:12.485245 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 02:48:12.485253 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Dec 13 02:48:12.485310 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:48:12.485318 kernel: sd 2:0:0:0: [sda] Attached SCSI disk
Dec 13 02:48:12.560217 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:48:12.564054 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:48:12.564932 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (466)
Dec 13 02:48:12.568338 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:48:12.568452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:48:12.569205 systemd[1]: Starting disk-uuid.service...
Dec 13 02:48:12.572745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:48:12.591894 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:48:12.596896 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:48:13.605369 disk-uuid[549]: The operation has completed successfully.
Dec 13 02:48:13.605890 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:48:13.639542 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:48:13.639601 systemd[1]: Finished disk-uuid.service.
Dec 13 02:48:13.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:13.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:13.640196 systemd[1]: Starting verity-setup.service...
Dec 13 02:48:13.650895 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:48:13.702260 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:48:13.702715 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:48:13.703390 systemd[1]: Finished verity-setup.service.
Dec 13 02:48:13.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:13.755519 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:48:13.755894 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:48:13.756131 systemd[1]: Starting afterburn-network-kargs.service...
Dec 13 02:48:13.756622 systemd[1]: Starting ignition-setup.service...
Dec 13 02:48:13.773361 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:48:13.773383 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:48:13.773391 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:48:13.779893 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:48:13.787354 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:48:13.792914 systemd[1]: Finished ignition-setup.service.
Dec 13 02:48:13.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:13.793496 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:48:13.957536 systemd[1]: Finished afterburn-network-kargs.service.
Dec 13 02:48:13.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:13.958339 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:48:14.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.009000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:48:14.010830 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:48:14.011686 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:48:14.026210 systemd-networkd[733]: lo: Link UP
Dec 13 02:48:14.026218 systemd-networkd[733]: lo: Gained carrier
Dec 13 02:48:14.026678 systemd-networkd[733]: Enumeration completed
Dec 13 02:48:14.027015 systemd-networkd[733]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Dec 13 02:48:14.031521 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 02:48:14.031667 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 02:48:14.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.027671 systemd[1]: Started systemd-networkd.service.
Dec 13 02:48:14.027835 systemd[1]: Reached target network.target.
Dec 13 02:48:14.028409 systemd[1]: Starting iscsiuio.service...
Dec 13 02:48:14.030123 systemd-networkd[733]: ens192: Link UP
Dec 13 02:48:14.030125 systemd-networkd[733]: ens192: Gained carrier
Dec 13 02:48:14.033067 systemd[1]: Started iscsiuio.service.
Dec 13 02:48:14.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.033662 systemd[1]: Starting iscsid.service...
Dec 13 02:48:14.036149 iscsid[738]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:48:14.036149 iscsid[738]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 02:48:14.036149 iscsid[738]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:48:14.036149 iscsid[738]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:48:14.036149 iscsid[738]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:48:14.036149 iscsid[738]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:48:14.037401 systemd[1]: Started iscsid.service.
Dec 13 02:48:14.038357 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:48:14.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.046049 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:48:14.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.046432 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:48:14.047023 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:48:14.047243 systemd[1]: Reached target remote-fs.target.
Dec 13 02:48:14.048341 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:48:14.052051 ignition[605]: Ignition 2.14.0
Dec 13 02:48:14.052059 ignition[605]: Stage: fetch-offline
Dec 13 02:48:14.052091 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:48:14.052105 ignition[605]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Dec 13 02:48:14.053533 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:48:14.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.055158 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 02:48:14.055378 ignition[605]: parsed url from cmdline: ""
Dec 13 02:48:14.055418 ignition[605]: no config URL provided
Dec 13 02:48:14.055532 ignition[605]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:48:14.055671 ignition[605]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:48:14.060905 ignition[605]: config successfully fetched
Dec 13 02:48:14.060984 ignition[605]: parsing config with SHA512: 3b531fa326c5207d45a47567d9967e14c1fcfcaff1d0d6f498fa456e3c6cb60b31d42e69ec266e8656b968d411c047994b148af9276b632a6a8fbc68932bad34
Dec 13 02:48:14.064783 unknown[605]: fetched base config from "system"
Dec 13 02:48:14.064792 unknown[605]: fetched user config from "vmware"
Dec 13 02:48:14.065059 ignition[605]: fetch-offline: fetch-offline passed
Dec 13 02:48:14.065106 ignition[605]: Ignition finished successfully
Dec 13 02:48:14.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.065951 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:48:14.066104 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 02:48:14.066561 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:48:14.072000 ignition[753]: Ignition 2.14.0
Dec 13 02:48:14.072008 ignition[753]: Stage: kargs
Dec 13 02:48:14.072079 ignition[753]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:48:14.072090 ignition[753]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Dec 13 02:48:14.073367 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 02:48:14.074577 ignition[753]: kargs: kargs passed
Dec 13 02:48:14.074609 ignition[753]: Ignition finished successfully
Dec 13 02:48:14.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.075562 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:48:14.076170 systemd[1]: Starting ignition-disks.service...
Dec 13 02:48:14.081024 ignition[759]: Ignition 2.14.0
Dec 13 02:48:14.081422 ignition[759]: Stage: disks
Dec 13 02:48:14.081608 ignition[759]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:48:14.081761 ignition[759]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Dec 13 02:48:14.083126 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 02:48:14.084652 ignition[759]: disks: disks passed
Dec 13 02:48:14.084796 ignition[759]: Ignition finished successfully
Dec 13 02:48:14.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.085442 systemd[1]: Finished ignition-disks.service.
Dec 13 02:48:14.085598 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:48:14.085690 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:48:14.085773 systemd[1]: Reached target local-fs.target.
Dec 13 02:48:14.085852 systemd[1]: Reached target sysinit.target.
Dec 13 02:48:14.085939 systemd[1]: Reached target basic.target.
Dec 13 02:48:14.086488 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:48:14.097477 systemd-fsck[767]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 02:48:14.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.098819 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:48:14.099358 systemd[1]: Mounting sysroot.mount...
Dec 13 02:48:14.107067 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:48:14.106848 systemd[1]: Mounted sysroot.mount.
Dec 13 02:48:14.106960 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:48:14.107924 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:48:14.108250 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 02:48:14.108268 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:48:14.108282 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:48:14.109492 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:48:14.110092 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:48:14.113155 initrd-setup-root[777]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:48:14.116716 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:48:14.118507 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:48:14.120616 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:48:14.149747 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:48:14.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.150338 systemd[1]: Starting ignition-mount.service...
Dec 13 02:48:14.150785 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:48:14.154622 bash[818]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 02:48:14.159597 ignition[819]: INFO     : Ignition 2.14.0
Dec 13 02:48:14.159807 ignition[819]: INFO     : Stage: mount
Dec 13 02:48:14.159993 ignition[819]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:48:14.160146 ignition[819]: DEBUG    : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Dec 13 02:48:14.161646 ignition[819]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 02:48:14.162827 ignition[819]: INFO     : mount: mount passed
Dec 13 02:48:14.162964 ignition[819]: INFO     : Ignition finished successfully
Dec 13 02:48:14.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.163432 systemd[1]: Finished ignition-mount.service.
Dec 13 02:48:14.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:14.169118 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:48:14.717743 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:48:14.727864 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (828)
Dec 13 02:48:14.727898 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:48:14.727910 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:48:14.728762 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:48:14.732899 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:48:14.734951 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:48:14.735805 systemd[1]: Starting ignition-files.service...
Dec 13 02:48:14.747336 ignition[848]: INFO     : Ignition 2.14.0
Dec 13 02:48:14.747336 ignition[848]: INFO     : Stage: files
Dec 13 02:48:14.747729 ignition[848]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:48:14.747729 ignition[848]: DEBUG    : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Dec 13 02:48:14.749010 ignition[848]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 02:48:14.751453 ignition[848]: DEBUG    : files: compiled without relabeling support, skipping
Dec 13 02:48:14.751860 ignition[848]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Dec 13 02:48:14.751860 ignition[848]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:48:14.754617 ignition[848]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:48:14.754853 ignition[848]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Dec 13 02:48:14.755778 unknown[848]: wrote ssh authorized keys file for user: core
Dec 13 02:48:14.756091 ignition[848]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:48:14.756770 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Dec 13 02:48:14.756770 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:48:14.757267 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:48:14.757267 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:48:14.757267 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:48:14.757267 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:48:14.758168 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Dec 13 02:48:14.758168 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:48:14.763593 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): op(7): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1652039759"
Dec 13 02:48:14.763593 ignition[848]: CRITICAL : files: createFilesystemsFiles: createFiles: op(6): op(7): [failed]   mounting "/dev/disk/by-label/OEM" at "/mnt/oem1652039759": device or resource busy
Dec 13 02:48:14.763593 ignition[848]: ERROR    : files: createFilesystemsFiles: createFiles: op(6): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1652039759", trying btrfs: device or resource busy
Dec 13 02:48:14.763593 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): op(8): [started]  mounting "/dev/disk/by-label/OEM" at "/mnt/oem1652039759"
Dec 13 02:48:14.765547 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (848)
Dec 13 02:48:14.765560 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1652039759"
Dec 13 02:48:14.766017 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): op(9): [started]  unmounting "/mnt/oem1652039759"
Dec 13 02:48:14.766181 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): op(9): [finished] unmounting "/mnt/oem1652039759"
Dec 13 02:48:14.766181 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Dec 13 02:48:14.766181 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:48:14.766181 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 02:48:14.770823 systemd[1]: mnt-oem1652039759.mount: Deactivated successfully.
Dec 13 02:48:15.917103 systemd-networkd[733]: ens192: Gained IPv6LL
Dec 13 02:48:20.321794 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 02:48:20.979799 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 02:48:20.980195 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: op(c): [started]  processing unit "vmtoolsd.service"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: op(c): [finished] processing unit "vmtoolsd.service"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 02:48:20.980374 ignition[848]: INFO     : files: op(f): [started]  setting preset to enabled for "vmtoolsd.service"
Dec 13 02:48:20.981705 ignition[848]: INFO     : files: op(f): [finished] setting preset to enabled for "vmtoolsd.service"
Dec 13 02:48:20.981705 ignition[848]: INFO     : files: op(10): [started]  setting preset to disabled for "coreos-metadata.service"
Dec 13 02:48:20.981705 ignition[848]: INFO     : files: op(10): op(11): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 02:48:21.025794 ignition[848]: INFO     : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 02:48:21.026060 ignition[848]: INFO     : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 02:48:21.026060 ignition[848]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:48:21.026060 ignition[848]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:48:21.026060 ignition[848]: INFO     : files: files passed
Dec 13 02:48:21.026060 ignition[848]: INFO     : Ignition finished successfully
Dec 13 02:48:21.030900 kernel: kauditd_printk_skb: 24 callbacks suppressed
Dec 13 02:48:21.030924 kernel: audit: type=1130 audit(1734058101.026:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.028152 systemd[1]: Finished ignition-files.service.
Dec 13 02:48:21.028832 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:48:21.028947 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:48:21.029299 systemd[1]: Starting ignition-quench.service...
Dec 13 02:48:21.034638 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:48:21.039811 kernel: audit: type=1130 audit(1734058101.032:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.040151 kernel: audit: type=1131 audit(1734058101.032:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.034682 systemd[1]: Finished ignition-quench.service.
Dec 13 02:48:21.040629 initrd-setup-root-after-ignition[874]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:48:21.040935 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:48:21.043620 kernel: audit: type=1130 audit(1734058101.039:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.041108 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:48:21.044129 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:48:21.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.052321 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:48:21.057407 kernel: audit: type=1130 audit(1734058101.050:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.057423 kernel: audit: type=1131 audit(1734058101.050:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.052373 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:48:21.052546 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:48:21.057308 systemd[1]: Reached target initrd.target.
Dec 13 02:48:21.057478 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:48:21.057964 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:48:21.064614 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:48:21.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.065151 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:48:21.067937 kernel: audit: type=1130 audit(1734058101.062:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.072078 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:48:21.072126 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:48:21.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.072661 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:48:21.077183 kernel: audit: type=1130 audit(1734058101.070:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.077196 kernel: audit: type=1131 audit(1734058101.070:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.077253 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:48:21.077472 systemd[1]: Stopped target timers.target.
Dec 13 02:48:21.077723 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:48:21.077876 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:48:21.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.078194 systemd[1]: Stopped target initrd.target.
Dec 13 02:48:21.080629 systemd[1]: Stopped target basic.target.
Dec 13 02:48:21.080825 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:48:21.080951 kernel: audit: type=1131 audit(1734058101.076:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.081055 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:48:21.081260 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:48:21.081487 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:48:21.081683 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:48:21.081905 systemd[1]: Stopped target sysinit.target.
Dec 13 02:48:21.082100 systemd[1]: Stopped target local-fs.target.
Dec 13 02:48:21.082293 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:48:21.082491 systemd[1]: Stopped target swap.target.
Dec 13 02:48:21.082680 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:48:21.082830 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:48:21.083049 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:48:21.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.083154 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:48:21.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.083180 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:48:21.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.083333 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:48:21.083357 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:48:21.083494 systemd[1]: Stopped target paths.target.
Dec 13 02:48:21.083630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:48:21.085903 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:48:21.086010 systemd[1]: Stopped target slices.target.
Dec 13 02:48:21.086170 systemd[1]: Stopped target sockets.target.
Dec 13 02:48:21.086334 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:48:21.086349 systemd[1]: Closed iscsid.socket.
Dec 13 02:48:21.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.086483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:48:21.086504 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:48:21.086655 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:48:21.086675 systemd[1]: Stopped ignition-files.service.
Dec 13 02:48:21.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.087184 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:48:21.087375 systemd[1]: Stopping iscsiuio.service...
Dec 13 02:48:21.087811 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:48:21.087920 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:48:21.087949 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:48:21.088074 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:48:21.088093 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:48:21.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.091071 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:48:21.091152 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:48:21.091541 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:48:21.091560 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:48:21.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.094682 ignition[887]: INFO     : Ignition 2.14.0
Dec 13 02:48:21.094682 ignition[887]: INFO     : Stage: umount
Dec 13 02:48:21.094682 ignition[887]: INFO     : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:48:21.094682 ignition[887]: DEBUG    : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Dec 13 02:48:21.096355 ignition[887]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Dec 13 02:48:21.097063 ignition[887]: INFO     : umount: umount passed
Dec 13 02:48:21.097063 ignition[887]: INFO     : Ignition finished successfully
Dec 13 02:48:21.098377 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:48:21.098647 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:48:21.098694 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:48:21.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.099253 systemd[1]: Stopped target network.target.
Dec 13 02:48:21.099451 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:48:21.099476 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:48:21.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.099834 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:48:21.099856 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:48:21.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.100236 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:48:21.100258 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:48:21.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.100668 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:48:21.100975 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:48:21.106729 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:48:21.106952 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:48:21.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.107506 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:48:21.107526 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:48:21.108213 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:48:21.108471 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:48:21.108499 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:48:21.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.108903 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Dec 13 02:48:21.108929 systemd[1]: Stopped afterburn-network-kargs.service.
Dec 13 02:48:21.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.109330 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:48:21.109352 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:48:21.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.109750 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:48:21.108000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:48:21.109772 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:48:21.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.110277 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:48:21.111126 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:48:21.111382 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:48:21.111429 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:48:21.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.112000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:48:21.114666 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:48:21.114872 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:48:21.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.115273 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:48:21.115339 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:48:21.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.115922 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:48:21.115944 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:48:21.116288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:48:21.116306 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:48:21.116632 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:48:21.116655 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:48:21.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.117045 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:48:21.117068 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:48:21.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.117424 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:48:21.117447 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:48:21.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.118182 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:48:21.118451 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 02:48:21.118481 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 02:48:21.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.118934 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:48:21.118958 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:48:21.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.119334 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:48:21.119357 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:48:21.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.121726 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 02:48:21.122035 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:48:21.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.122080 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:48:21.122729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:48:21.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.122770 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:48:21.123013 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:48:21.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.123136 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:48:21.123159 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:48:21.123661 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:48:21.130496 systemd[1]: Switching root.
Dec 13 02:48:21.147873 iscsid[738]: iscsid shutting down.
Dec 13 02:48:21.148032 systemd-journald[216]: Received SIGTERM from PID 1 (n/a).
Dec 13 02:48:21.148061 systemd-journald[216]: Journal stopped
Dec 13 02:48:23.590271 kernel: SELinux:  Class mctp_socket not defined in policy.
Dec 13 02:48:23.590292 kernel: SELinux:  Class anon_inode not defined in policy.
Dec 13 02:48:23.590300 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:48:23.590306 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 02:48:23.590311 kernel: SELinux:  policy capability open_perms=1
Dec 13 02:48:23.590317 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 02:48:23.590324 kernel: SELinux:  policy capability always_check_network=0
Dec 13 02:48:23.590330 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 02:48:23.590336 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 02:48:23.590341 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Dec 13 02:48:23.590346 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Dec 13 02:48:23.590353 systemd[1]: Successfully loaded SELinux policy in 128.262ms.
Dec 13 02:48:23.590361 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 5.583ms.
Dec 13 02:48:23.590368 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:48:23.590375 systemd[1]: Detected virtualization vmware.
Dec 13 02:48:23.590381 systemd[1]: Detected architecture x86-64.
Dec 13 02:48:23.590387 systemd[1]: Detected first boot.
Dec 13 02:48:23.590395 systemd[1]: Initializing machine ID from random generator.
Dec 13 02:48:23.590401 kernel: SELinux:  Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:48:23.590407 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:48:23.590414 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:48:23.590421 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:48:23.590428 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:48:23.590435 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:48:23.590442 systemd[1]: Stopped iscsid.service.
Dec 13 02:48:23.590449 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:48:23.590458 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:48:23.590464 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:48:23.590471 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:48:23.590477 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:48:23.590484 systemd[1]: Created slice system-getty.slice.
Dec 13 02:48:23.590491 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:48:23.590498 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:48:23.590504 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:48:23.590511 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:48:23.590517 systemd[1]: Created slice user.slice.
Dec 13 02:48:23.590523 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:48:23.590530 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:48:23.590537 systemd[1]: Set up automount boot.automount.
Dec 13 02:48:23.590543 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:48:23.590551 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:48:23.590559 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:48:23.590565 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 02:48:23.590572 systemd[1]: Reached target integritysetup.target.
Dec 13 02:48:23.590578 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:48:23.590585 systemd[1]: Reached target remote-fs.target.
Dec 13 02:48:23.590591 systemd[1]: Reached target slices.target.
Dec 13 02:48:23.590598 systemd[1]: Reached target swap.target.
Dec 13 02:48:23.590605 systemd[1]: Reached target torcx.target.
Dec 13 02:48:23.590613 systemd[1]: Reached target veritysetup.target.
Dec 13 02:48:23.590620 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:48:23.590626 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:48:23.590633 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:48:23.590641 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:48:23.590648 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:48:23.590654 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:48:23.590661 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:48:23.590667 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:48:23.590674 systemd[1]: Mounting media.mount...
Dec 13 02:48:23.590681 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:23.590688 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:48:23.590695 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:48:23.590702 systemd[1]: Mounting tmp.mount...
Dec 13 02:48:23.590709 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:48:23.590715 systemd[1]: Starting ignition-delete-config.service...
Dec 13 02:48:23.590722 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:48:23.590729 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:48:23.590735 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:48:23.590742 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:48:23.590749 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:48:23.590758 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:48:23.590766 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:48:23.590773 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:48:23.590780 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:48:23.590787 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:48:23.590793 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:48:23.590800 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:48:23.590807 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:48:23.590813 systemd[1]: Starting systemd-journald.service...
Dec 13 02:48:23.590821 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:48:23.590829 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:48:23.590836 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:48:23.590842 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:48:23.590849 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:48:23.590856 systemd[1]: Stopped verity-setup.service.
Dec 13 02:48:23.590863 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:23.590869 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:48:23.590876 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:48:23.590895 systemd[1]: Mounted media.mount.
Dec 13 02:48:23.590904 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:48:23.590911 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:48:23.590918 systemd[1]: Mounted tmp.mount.
Dec 13 02:48:23.590925 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:48:23.590931 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:48:23.590937 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:48:23.590946 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:48:23.593217 kernel: fuse: init (API version 7.34)
Dec 13 02:48:23.593229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:48:23.593239 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:48:23.593247 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:48:23.593255 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:48:23.593261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:48:23.593268 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:48:23.593275 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:48:23.593282 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:48:23.593289 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:48:23.593297 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:48:23.593303 systemd[1]: Reached target network-pre.target.
Dec 13 02:48:23.593310 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:48:23.593317 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:48:23.593324 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:48:23.593331 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:48:23.593338 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:48:23.593345 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:48:23.593352 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:48:23.593360 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:48:23.593367 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:48:23.593374 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:48:23.593384 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:48:23.593392 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:48:23.593399 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:48:23.593409 systemd-journald[1017]: Journal started
Dec 13 02:48:23.593448 systemd-journald[1017]: Runtime Journal (/run/log/journal/034ba87069264c78b27cde348833f1b3) is 4.8M, max 38.8M, 34.0M free.
Dec 13 02:48:21.569000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:48:21.612000 audit[1]: AVC avc:  denied  { bpf } for  pid=1 comm="systemd" capability=39  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:48:21.612000 audit[1]: AVC avc:  denied  { perfmon } for  pid=1 comm="systemd" capability=38  scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:48:21.612000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:48:21.612000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:48:21.612000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:48:21.612000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:48:21.688000 audit[921]: AVC avc:  denied  { associate } for  pid=921 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:48:21.688000 audit[921]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=904 pid=921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:48:21.688000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:48:21.689000 audit[921]: AVC avc:  denied  { associate } for  pid=921 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:48:21.689000 audit[921]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=904 pid=921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:48:21.689000 audit: CWD cwd="/"
Dec 13 02:48:21.689000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:21.689000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:21.689000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:48:23.439000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:48:23.439000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:48:23.439000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:48:23.439000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:48:23.439000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:48:23.439000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:48:23.440000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:48:23.440000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:48:23.440000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:48:23.440000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:48:23.440000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:48:23.441000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:48:23.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.448000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:48:23.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.512000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:48:23.512000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:48:23.512000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:48:23.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.539000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:48:23.539000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:48:23.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.599636 systemd[1]: Started systemd-journald.service.
Dec 13 02:48:23.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.599745 systemd-journald[1017]: Time spent on flushing to /var/log/journal/034ba87069264c78b27cde348833f1b3 is 68.660ms for 1958 entries.
Dec 13 02:48:23.599745 systemd-journald[1017]: System Journal (/var/log/journal/034ba87069264c78b27cde348833f1b3) is 8.0M, max 584.8M, 576.8M free.
Dec 13 02:48:23.686214 systemd-journald[1017]: Received client request to flush runtime journal.
Dec 13 02:48:23.686241 kernel: loop: module loaded
Dec 13 02:48:23.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.581000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:48:23.581000 audit[1017]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff827ede50 a2=4000 a3=7fff827edeec items=0 ppid=1 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:48:23.581000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:48:23.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.440033 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:48:21.687633 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:48:23.443396 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:48:23.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:21.688059 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:48:23.595524 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:48:21.688072 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:48:23.601358 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:48:21.688093 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:48:23.601436 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:48:21.688099 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:48:23.601617 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:48:21.688120 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:48:23.603812 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:48:21.688127 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:48:23.631715 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:48:21.688254 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:48:23.632678 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:48:23.688338 jq[988]: true
Dec 13 02:48:21.688278 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:48:23.663178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:48:21.688287 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:48:23.678859 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:48:21.689119 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 02:48:23.679808 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:48:21.689140 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 02:48:23.687248 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:48:21.689151 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 02:48:21.689160 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 02:48:21.689170 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 02:48:21.689177 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 02:48:23.248289 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:48:23.248437 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:48:23.248497 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:48:23.689646 jq[1034]: true
Dec 13 02:48:23.248592 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:48:23.248623 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 02:48:23.248662 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-12-13T02:48:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
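The generator's final step writes the sealed state to /run/metadata/torcx as plain KEY=VALUE pairs, the same TORCX_* variables quoted in the line above. Any unit that needs the torcx bind directory can pull them in through EnvironmentFile=; the fragment below is a hypothetical consumer shown only to illustrate the mechanism, not a unit present on this host.

[Service]
# Import TORCX_BINDIR, TORCX_UNPACKDIR, etc. from the sealed metadata file.
EnvironmentFile=/run/metadata/torcx
# The propagated docker/containerd binaries live under ${TORCX_BINDIR} (/run/torcx/bin).
ExecStart=/usr/bin/ls ${TORCX_BINDIR}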
Dec 13 02:48:23.687865 ignition[1035]: Ignition 2.14.0
Dec 13 02:48:23.688034 ignition[1035]: deleting config from guestinfo properties
Dec 13 02:48:23.690722 udevadm[1054]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 02:48:23.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:23.692515 systemd[1]: Finished ignition-delete-config.service.
Dec 13 02:48:23.691734 ignition[1035]: Successfully deleted config
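A few lines above, udevadm flags systemd-udev-settle.service as deprecated and names the two lvm2 activation units as the services still pulling it in. A sketch of how one could investigate and work around this on a running machine follows; the mask step is an assumed remediation, not something taken from this log, and LVM activation should be re-tested after applying it.

# Show which units still pull in the deprecated settle unit.
systemctl list-dependencies --reverse systemd-udev-settle.service
# One common workaround: mask the unit so lvm2-activation*.service no longer wait on it.
systemctl mask systemd-udev-settle.service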
Dec 13 02:48:24.075560 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:48:24.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.074000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:48:24.074000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:48:24.074000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 02:48:24.074000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 02:48:24.076865 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:48:24.092065 systemd-udevd[1056]: Using default interface naming scheme 'v252'.
Dec 13 02:48:24.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.111000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:48:24.112120 systemd[1]: Started systemd-udevd.service.
Dec 13 02:48:24.113305 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:48:24.123000 audit: BPF prog-id=24 op=LOAD
Dec 13 02:48:24.123000 audit: BPF prog-id=25 op=LOAD
Dec 13 02:48:24.124000 audit: BPF prog-id=26 op=LOAD
Dec 13 02:48:24.126241 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:48:24.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.149193 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:48:24.155741 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 02:48:24.172894 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 02:48:24.176892 kernel: ACPI: button: Power Button [PWRF]
Dec 13 02:48:24.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.207152 systemd-networkd[1064]: lo: Link UP
Dec 13 02:48:24.207156 systemd-networkd[1064]: lo: Gained carrier
Dec 13 02:48:24.207411 systemd-networkd[1064]: Enumeration completed
Dec 13 02:48:24.207460 systemd[1]: Started systemd-networkd.service.
Dec 13 02:48:24.208277 systemd-networkd[1064]: ens192: Configuring with /etc/systemd/network/00-vmware.network.
Dec 13 02:48:24.211201 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Dec 13 02:48:24.211310 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Dec 13 02:48:24.212373 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready
Dec 13 02:48:24.212549 systemd-networkd[1064]: ens192: Link UP
Dec 13 02:48:24.212682 systemd-networkd[1064]: ens192: Gained carrier
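ens192 is brought up from /etc/systemd/network/00-vmware.network, whose contents do not appear in the log. On VMware images this kind of file usually reduces to a match stanza plus DHCP; the reconstruction below is an assumption for illustration, not the file that was actually read here.

[Match]
Name=ens192

[Network]
DHCP=yes

networkctl reload makes systemd-networkd re-read such files without a full service restart.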
Dec 13 02:48:24.214901 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1071)
Dec 13 02:48:24.245000 audit[1061]: AVC avc:  denied  { confidentiality } for  pid=1061 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:48:24.245000 audit[1061]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56201cb18bc0 a1=337fc a2=7ff9aa7f4bc5 a3=5 items=110 ppid=1056 pid=1061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:48:24.245000 audit: CWD cwd="/"
Dec 13 02:48:24.245000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=1 name=(null) inode=24965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=2 name=(null) inode=24965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=3 name=(null) inode=24966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=4 name=(null) inode=24965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=5 name=(null) inode=24967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=6 name=(null) inode=24965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=7 name=(null) inode=24968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=8 name=(null) inode=24968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=9 name=(null) inode=24969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=10 name=(null) inode=24968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=11 name=(null) inode=24970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=12 name=(null) inode=24968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=13 name=(null) inode=24971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=14 name=(null) inode=24968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=15 name=(null) inode=24972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=16 name=(null) inode=24968 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=17 name=(null) inode=24973 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=18 name=(null) inode=24965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=19 name=(null) inode=24974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=20 name=(null) inode=24974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=21 name=(null) inode=24975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=22 name=(null) inode=24974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=23 name=(null) inode=24976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=24 name=(null) inode=24974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=25 name=(null) inode=24977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=26 name=(null) inode=24974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=27 name=(null) inode=24978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=28 name=(null) inode=24974 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=29 name=(null) inode=24979 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.249565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:48:24.245000 audit: PATH item=30 name=(null) inode=24965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=31 name=(null) inode=24980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=32 name=(null) inode=24980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=33 name=(null) inode=24981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=34 name=(null) inode=24980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=35 name=(null) inode=24982 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=36 name=(null) inode=24980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=37 name=(null) inode=24983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=38 name=(null) inode=24980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=39 name=(null) inode=24984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=40 name=(null) inode=24980 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=41 name=(null) inode=24985 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=42 name=(null) inode=24965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=43 name=(null) inode=24986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=44 name=(null) inode=24986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=45 name=(null) inode=24987 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=46 name=(null) inode=24986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=47 name=(null) inode=24988 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=48 name=(null) inode=24986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=49 name=(null) inode=24989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=50 name=(null) inode=24986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=51 name=(null) inode=24990 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=52 name=(null) inode=24986 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=53 name=(null) inode=24991 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=55 name=(null) inode=24992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=56 name=(null) inode=24992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=57 name=(null) inode=24993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=58 name=(null) inode=24992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=59 name=(null) inode=24994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=60 name=(null) inode=24992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=61 name=(null) inode=24995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=62 name=(null) inode=24995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=63 name=(null) inode=24996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=64 name=(null) inode=24995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=65 name=(null) inode=24997 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=66 name=(null) inode=24995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=67 name=(null) inode=24998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=68 name=(null) inode=24995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=69 name=(null) inode=24999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=70 name=(null) inode=24995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=71 name=(null) inode=25000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=72 name=(null) inode=24992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=73 name=(null) inode=25001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=74 name=(null) inode=25001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=75 name=(null) inode=25002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=76 name=(null) inode=25001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=77 name=(null) inode=25003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=78 name=(null) inode=25001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=79 name=(null) inode=25004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=80 name=(null) inode=25001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=81 name=(null) inode=25005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=82 name=(null) inode=25001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=83 name=(null) inode=25006 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=84 name=(null) inode=24992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=85 name=(null) inode=25007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=86 name=(null) inode=25007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=87 name=(null) inode=25008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=88 name=(null) inode=25007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=89 name=(null) inode=25009 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=90 name=(null) inode=25007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=91 name=(null) inode=25010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=92 name=(null) inode=25007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=93 name=(null) inode=25011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=94 name=(null) inode=25007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=95 name=(null) inode=25012 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=96 name=(null) inode=24992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=97 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=98 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=99 name=(null) inode=25014 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=100 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=101 name=(null) inode=25015 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=102 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=103 name=(null) inode=25016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=104 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=105 name=(null) inode=25017 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=106 name=(null) inode=25013 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=107 name=(null) inode=25018 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PATH item=109 name=(null) inode=25019 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:48:24.245000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 02:48:24.255891 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
Dec 13 02:48:24.260351 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16
Dec 13 02:48:24.260702 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Dec 13 02:48:24.261464 kernel: Guest personality initialized and is active
Dec 13 02:48:24.262832 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Dec 13 02:48:24.262874 kernel: Initialized host personality
Dec 13 02:48:24.282900 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 02:48:24.297896 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 02:48:24.298243 (udev-worker)[1060]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte.
Dec 13 02:48:24.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.317162 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 02:48:24.318036 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 02:48:24.339352 lvm[1089]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:48:24.375558 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 02:48:24.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.375740 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:48:24.376646 systemd[1]: Starting lvm2-activation.service...
Dec 13 02:48:24.379129 lvm[1090]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:48:24.394323 systemd[1]: Finished lvm2-activation.service.
Dec 13 02:48:24.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.394478 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:48:24.394573 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:48:24.394590 systemd[1]: Reached target local-fs.target.
Dec 13 02:48:24.394678 systemd[1]: Reached target machines.target.
Dec 13 02:48:24.395555 systemd[1]: Starting ldconfig.service...
Dec 13 02:48:24.396044 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.396073 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:48:24.396828 systemd[1]: Starting systemd-boot-update.service...
Dec 13 02:48:24.397630 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 02:48:24.398479 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 02:48:24.399354 systemd[1]: Starting systemd-sysext.service...
Dec 13 02:48:24.402217 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1092 (bootctl)
Dec 13 02:48:24.402816 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 02:48:24.414229 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 02:48:24.426945 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 02:48:24.427059 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 02:48:24.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.439113 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 02:48:24.447925 kernel: loop0: detected capacity change from 0 to 211296
Dec 13 02:48:24.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.465853 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 02:48:24.490940 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:48:24.509274 kernel: loop1: detected capacity change from 0 to 211296
Dec 13 02:48:24.524836 (sd-sysext)[1105]: Using extensions 'kubernetes'.
Dec 13 02:48:24.525384 (sd-sysext)[1105]: Merged extensions into '/usr'.
Dec 13 02:48:24.530408 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:48:24.536182 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.538123 systemd-fsck[1102]: fsck.fat 4.2 (2021-01-31)
Dec 13 02:48:24.538123 systemd-fsck[1102]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 02:48:24.537200 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 02:48:24.537927 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:48:24.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.540516 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:48:24.541228 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:48:24.541363 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.541432 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:48:24.541501 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.542081 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:48:24.542150 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:48:24.543424 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 02:48:24.543740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:48:24.543827 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:48:24.544117 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:48:24.544179 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:48:24.545133 systemd[1]: Mounting boot.mount...
Dec 13 02:48:24.545232 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:48:24.545317 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.548365 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 02:48:24.549248 systemd[1]: Finished systemd-sysext.service.
Dec 13 02:48:24.550072 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:48:24.551144 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 02:48:24.557042 systemd[1]: Mounted boot.mount.
Dec 13 02:48:24.558922 systemd[1]: Reloading.
Dec 13 02:48:24.569490 systemd-tmpfiles[1113]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 02:48:24.571724 systemd-tmpfiles[1113]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:48:24.575032 systemd-tmpfiles[1113]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:48:24.594548 /usr/lib/systemd/system-generators/torcx-generator[1132]: time="2024-12-13T02:48:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:48:24.594565 /usr/lib/systemd/system-generators/torcx-generator[1132]: time="2024-12-13T02:48:24Z" level=info msg="torcx already run"
Dec 13 02:48:24.672693 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:48:24.672703 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:48:24.685527 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:48:24.717000 audit: BPF prog-id=27 op=LOAD
Dec 13 02:48:24.717000 audit: BPF prog-id=28 op=LOAD
Dec 13 02:48:24.717000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 02:48:24.717000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 02:48:24.717000 audit: BPF prog-id=29 op=LOAD
Dec 13 02:48:24.717000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 02:48:24.718000 audit: BPF prog-id=30 op=LOAD
Dec 13 02:48:24.718000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 02:48:24.718000 audit: BPF prog-id=31 op=LOAD
Dec 13 02:48:24.718000 audit: BPF prog-id=32 op=LOAD
Dec 13 02:48:24.718000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 02:48:24.718000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 02:48:24.719000 audit: BPF prog-id=33 op=LOAD
Dec 13 02:48:24.719000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 02:48:24.719000 audit: BPF prog-id=34 op=LOAD
Dec 13 02:48:24.719000 audit: BPF prog-id=35 op=LOAD
Dec 13 02:48:24.719000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 02:48:24.719000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 02:48:24.722244 systemd[1]: Finished systemd-boot-update.service.
Dec 13 02:48:24.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.722522 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 02:48:24.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.725013 systemd[1]: Starting audit-rules.service...
Dec 13 02:48:24.725766 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 02:48:24.726590 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 02:48:24.725000 audit: BPF prog-id=36 op=LOAD
Dec 13 02:48:24.727000 audit: BPF prog-id=37 op=LOAD
Dec 13 02:48:24.728019 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:48:24.729992 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 02:48:24.730722 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 02:48:24.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.734086 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 02:48:24.734251 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:48:24.736000 audit[1198]: SYSTEM_BOOT pid=1198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.739300 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 02:48:24.741607 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.742387 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:48:24.743240 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:48:24.744004 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:48:24.744127 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.744200 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:48:24.744271 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:48:24.744320 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.745007 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:48:24.745092 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:48:24.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.746374 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.747077 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:48:24.747195 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.747265 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:48:24.747333 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:48:24.747385 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.749087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:48:24.749166 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:48:24.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.750456 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.751144 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:48:24.751992 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:48:24.752138 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.752216 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:48:24.753033 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 02:48:24.753342 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:48:24.753421 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:48:24.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.755475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:48:24.755551 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:48:24.756870 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:48:24.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.759768 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:48:24.759843 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:48:24.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.760023 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.760171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:48:24.760273 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:48:24.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.760402 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:48:24.760526 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:48:24.760593 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:48:24.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.791722 systemd-resolved[1196]: Positive Trust Anchors:
Dec 13 02:48:24.791730 systemd-resolved[1196]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:48:24.791749 systemd-resolved[1196]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:48:24.793387 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 02:48:24.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:48:24.796528 systemd[1]: Started systemd-timesyncd.service.
Dec 13 02:48:24.796689 systemd[1]: Reached target time-set.target.
Dec 13 02:48:24.797000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 02:48:24.797000 audit[1221]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff4546ab60 a2=420 a3=0 items=0 ppid=1193 pid=1221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:48:24.797000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 02:48:24.799465 augenrules[1221]: No rules
Dec 13 02:48:24.800135 systemd[1]: Finished audit-rules.service.
Dec 13 02:48:24.816280 systemd-resolved[1196]: Defaulting to hostname 'linux'.
Dec 13 02:48:24.817221 systemd[1]: Started systemd-resolved.service.
Dec 13 02:48:24.817354 systemd[1]: Reached target network.target.
Dec 13 02:48:24.817448 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:48:24.826851 ldconfig[1091]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:48:24.840712 systemd[1]: Finished ldconfig.service.
Dec 13 02:48:24.841685 systemd[1]: Starting systemd-update-done.service...
Dec 13 02:48:24.845473 systemd[1]: Finished systemd-update-done.service.
Dec 13 02:48:24.845614 systemd[1]: Reached target sysinit.target.
Dec 13 02:48:24.845747 systemd[1]: Started motdgen.path.
Dec 13 02:48:24.845844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 02:48:24.846023 systemd[1]: Started logrotate.timer.
Dec 13 02:48:24.846144 systemd[1]: Started mdadm.timer.
Dec 13 02:48:24.846258 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 02:48:24.846348 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:48:24.846365 systemd[1]: Reached target paths.target.
Dec 13 02:48:24.846443 systemd[1]: Reached target timers.target.
Dec 13 02:48:24.846666 systemd[1]: Listening on dbus.socket.
Dec 13 02:48:24.847375 systemd[1]: Starting docker.socket...
Dec 13 02:48:24.849028 systemd[1]: Listening on sshd.socket.
Dec 13 02:48:24.849171 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:48:24.849392 systemd[1]: Listening on docker.socket.
Dec 13 02:48:24.849513 systemd[1]: Reached target sockets.target.
Dec 13 02:48:24.849601 systemd[1]: Reached target basic.target.
Dec 13 02:48:24.849707 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.849724 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:48:24.850322 systemd[1]: Starting containerd.service...
Dec 13 02:48:24.851012 systemd[1]: Starting dbus.service...
Dec 13 02:48:24.851693 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 02:48:24.853960 systemd[1]: Starting extend-filesystems.service...
Dec 13 02:48:24.868608 jq[1231]: false
Dec 13 02:48:24.854091 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 02:48:24.854786 systemd[1]: Starting motdgen.service...
Dec 13 02:48:24.855600 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 02:48:24.856382 systemd[1]: Starting sshd-keygen.service...
Dec 13 02:48:24.857941 systemd[1]: Starting systemd-logind.service...
Dec 13 02:48:24.858046 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:48:24.858078 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 02:48:24.858431 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 02:48:24.877336 jq[1239]: true
Dec 13 02:48:24.858741 systemd[1]: Starting update-engine.service...
Dec 13 02:48:24.859444 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 02:48:24.860451 systemd[1]: Starting vmtoolsd.service...
Dec 13 02:48:24.861571 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 02:48:24.861912 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 02:48:24.882458 jq[1255]: true
Dec 13 02:48:24.864733 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 02:48:24.864846 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 02:48:24.872436 systemd[1]: Started vmtoolsd.service.
Dec 13 02:48:24.888075 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 02:48:24.888174 systemd[1]: Finished motdgen.service.
Dec 13 02:48:24.892038 extend-filesystems[1232]: Found loop1
Dec 13 02:48:24.892354 extend-filesystems[1232]: Found sda
Dec 13 02:48:24.892522 extend-filesystems[1232]: Found sda1
Dec 13 02:48:24.893001 extend-filesystems[1232]: Found sda2
Dec 13 02:48:24.893158 extend-filesystems[1232]: Found sda3
Dec 13 02:48:24.893307 extend-filesystems[1232]: Found usr
Dec 13 02:48:24.893494 extend-filesystems[1232]: Found sda4
Dec 13 02:48:24.893678 extend-filesystems[1232]: Found sda6
Dec 13 02:48:24.894100 extend-filesystems[1232]: Found sda7
Dec 13 02:48:24.894481 extend-filesystems[1232]: Found sda9
Dec 13 02:48:24.894663 extend-filesystems[1232]: Checking size of /dev/sda9
Dec 13 02:48:24.895838 dbus-daemon[1230]: [system] SELinux support is enabled
Dec 13 02:48:24.895998 systemd[1]: Started dbus.service.
Dec 13 02:48:24.897275 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 02:48:24.897291 systemd[1]: Reached target system-config.target.
Dec 13 02:48:24.897435 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 02:48:24.897444 systemd[1]: Reached target user-config.target.
Dec 13 02:48:24.907566 extend-filesystems[1232]: Old size kept for /dev/sda9
Dec 13 02:48:24.907758 extend-filesystems[1232]: Found sr0
Dec 13 02:48:24.908346 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 02:48:24.908444 systemd[1]: Finished extend-filesystems.service.
Dec 13 02:48:24.917828 bash[1275]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:48:24.917990 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 02:48:24.941911 kernel: NET: Registered PF_VSOCK protocol family
Dec 13 02:48:24.946057 update_engine[1237]: I1213 02:48:24.945509  1237 main.cc:92] Flatcar Update Engine starting
Dec 13 02:48:24.947521 systemd[1]: Started update-engine.service.
Dec 13 02:48:24.947642 update_engine[1237]: I1213 02:48:24.947539  1237 update_check_scheduler.cc:74] Next update check in 10m45s
Dec 13 02:48:24.948800 systemd[1]: Started locksmithd.service.
Dec 13 02:48:24.958902 env[1248]: time="2024-12-13T02:48:24.957631487Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 02:48:24.963600 systemd-logind[1236]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 02:48:24.963761 systemd-logind[1236]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 02:48:24.965062 systemd-logind[1236]: New seat seat0.
Dec 13 02:49:35.219694 systemd-resolved[1196]: Clock change detected. Flushing caches.
Dec 13 02:49:35.219801 systemd-timesyncd[1197]: Contacted time server 108.61.56.35:123 (0.flatcar.pool.ntp.org).
Dec 13 02:49:35.219878 systemd-timesyncd[1197]: Initial clock synchronization to Fri 2024-12-13 02:49:35.219566 UTC.
Dec 13 02:49:35.223857 systemd[1]: Started systemd-logind.service.
Dec 13 02:49:35.230822 env[1248]: time="2024-12-13T02:49:35.230799131Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 02:49:35.230901 env[1248]: time="2024-12-13T02:49:35.230887958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231604 env[1248]: time="2024-12-13T02:49:35.231587024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231604 env[1248]: time="2024-12-13T02:49:35.231603137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231704 env[1248]: time="2024-12-13T02:49:35.231690338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231739 env[1248]: time="2024-12-13T02:49:35.231703650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231739 env[1248]: time="2024-12-13T02:49:35.231713651Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 02:49:35.231739 env[1248]: time="2024-12-13T02:49:35.231719356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231786 env[1248]: time="2024-12-13T02:49:35.231762636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231915 env[1248]: time="2024-12-13T02:49:35.231903286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231983 env[1248]: time="2024-12-13T02:49:35.231969850Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:49:35.231983 env[1248]: time="2024-12-13T02:49:35.231981252Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 02:49:35.232029 env[1248]: time="2024-12-13T02:49:35.232009780Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 02:49:35.232029 env[1248]: time="2024-12-13T02:49:35.232017827Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 02:49:35.233077 env[1248]: time="2024-12-13T02:49:35.233064437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 02:49:35.233110 env[1248]: time="2024-12-13T02:49:35.233080519Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 02:49:35.233110 env[1248]: time="2024-12-13T02:49:35.233089677Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 02:49:35.233110 env[1248]: time="2024-12-13T02:49:35.233104953Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233160 env[1248]: time="2024-12-13T02:49:35.233112466Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233160 env[1248]: time="2024-12-13T02:49:35.233119983Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233160 env[1248]: time="2024-12-13T02:49:35.233132371Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233160 env[1248]: time="2024-12-13T02:49:35.233139922Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233160 env[1248]: time="2024-12-13T02:49:35.233147000Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233160 env[1248]: time="2024-12-13T02:49:35.233153562Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233160 env[1248]: time="2024-12-13T02:49:35.233159712Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233259 env[1248]: time="2024-12-13T02:49:35.233165906Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:49:35.233259 env[1248]: time="2024-12-13T02:49:35.233218889Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:49:35.233289 env[1248]: time="2024-12-13T02:49:35.233265081Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:49:35.233407 env[1248]: time="2024-12-13T02:49:35.233395236Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:49:35.233434 env[1248]: time="2024-12-13T02:49:35.233413219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233434 env[1248]: time="2024-12-13T02:49:35.233421161Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:49:35.233472 env[1248]: time="2024-12-13T02:49:35.233445643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233472 env[1248]: time="2024-12-13T02:49:35.233453293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233472 env[1248]: time="2024-12-13T02:49:35.233459785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233472 env[1248]: time="2024-12-13T02:49:35.233465546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233472 env[1248]: time="2024-12-13T02:49:35.233471631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233557 env[1248]: time="2024-12-13T02:49:35.233478527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233557 env[1248]: time="2024-12-13T02:49:35.233484833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233557 env[1248]: time="2024-12-13T02:49:35.233491772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233557 env[1248]: time="2024-12-13T02:49:35.233499083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 02:49:35.233620 env[1248]: time="2024-12-13T02:49:35.233579049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233620 env[1248]: time="2024-12-13T02:49:35.233588590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233620 env[1248]: time="2024-12-13T02:49:35.233594873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233620 env[1248]: time="2024-12-13T02:49:35.233600749Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 02:49:35.233620 env[1248]: time="2024-12-13T02:49:35.233607932Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 02:49:35.233620 env[1248]: time="2024-12-13T02:49:35.233613584Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 02:49:35.233773 env[1248]: time="2024-12-13T02:49:35.233625094Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 02:49:35.233773 env[1248]: time="2024-12-13T02:49:35.233645732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:49:35.233806 env[1248]: time="2024-12-13T02:49:35.233760495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 02:49:35.233806 env[1248]: time="2024-12-13T02:49:35.233794008Z" level=info msg="Connect containerd service"
Dec 13 02:49:35.235822 env[1248]: time="2024-12-13T02:49:35.233812808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 02:49:35.236795 env[1248]: time="2024-12-13T02:49:35.236776912Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:49:35.236924 env[1248]: time="2024-12-13T02:49:35.236911949Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 02:49:35.236951 env[1248]: time="2024-12-13T02:49:35.236937856Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 02:49:35.237000 systemd[1]: Started containerd.service.
Dec 13 02:49:35.237348 env[1248]: time="2024-12-13T02:49:35.237313975Z" level=info msg="containerd successfully booted in 0.030837s"
Dec 13 02:49:35.240008 env[1248]: time="2024-12-13T02:49:35.238826151Z" level=info msg="Start subscribing containerd event"
Dec 13 02:49:35.240008 env[1248]: time="2024-12-13T02:49:35.238901250Z" level=info msg="Start recovering state"
Dec 13 02:49:35.240008 env[1248]: time="2024-12-13T02:49:35.238982087Z" level=info msg="Start event monitor"
Dec 13 02:49:35.240008 env[1248]: time="2024-12-13T02:49:35.238998299Z" level=info msg="Start snapshots syncer"
Dec 13 02:49:35.240008 env[1248]: time="2024-12-13T02:49:35.239004971Z" level=info msg="Start cni network conf syncer for default"
Dec 13 02:49:35.240008 env[1248]: time="2024-12-13T02:49:35.239012198Z" level=info msg="Start streaming server"
Dec 13 02:49:35.315369 locksmithd[1286]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:49:35.649758 sshd_keygen[1249]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:49:35.662012 systemd[1]: Finished sshd-keygen.service.
Dec 13 02:49:35.663141 systemd[1]: Starting issuegen.service...
Dec 13 02:49:35.666173 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:49:35.666256 systemd[1]: Finished issuegen.service.
Dec 13 02:49:35.667273 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 02:49:35.670903 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 02:49:35.671833 systemd[1]: Started getty@tty1.service.
Dec 13 02:49:35.672717 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 02:49:35.672942 systemd[1]: Reached target getty.target.
Dec 13 02:49:35.766783 systemd-networkd[1064]: ens192: Gained IPv6LL
Dec 13 02:49:35.768056 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 02:49:35.768442 systemd[1]: Reached target network-online.target.
Dec 13 02:49:35.770020 systemd[1]: Starting kubelet.service...
Dec 13 02:49:36.934193 systemd[1]: Started kubelet.service.
Dec 13 02:49:36.934497 systemd[1]: Reached target multi-user.target.
Dec 13 02:49:36.935386 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 02:49:36.940175 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 02:49:36.940294 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 02:49:36.940466 systemd[1]: Startup finished in 873ms (kernel) + 9.808s (initrd) + 5.340s (userspace) = 16.021s.
Dec 13 02:49:36.971085 login[1352]: pam_lastlog(login:session): file /var/log/lastlog is locked/read
Dec 13 02:49:36.971595 login[1351]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:49:36.980193 systemd[1]: Created slice user-500.slice.
Dec 13 02:49:36.981007 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 02:49:36.985397 systemd-logind[1236]: New session 1 of user core.
Dec 13 02:49:36.988425 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 02:49:36.989413 systemd[1]: Starting user@500.service...
Dec 13 02:49:36.991975 (systemd)[1360]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:49:37.041026 systemd[1360]: Queued start job for default target default.target.
Dec 13 02:49:37.041544 systemd[1360]: Reached target paths.target.
Dec 13 02:49:37.041610 systemd[1360]: Reached target sockets.target.
Dec 13 02:49:37.041675 systemd[1360]: Reached target timers.target.
Dec 13 02:49:37.041729 systemd[1360]: Reached target basic.target.
Dec 13 02:49:37.041805 systemd[1360]: Reached target default.target.
Dec 13 02:49:37.041839 systemd[1]: Started user@500.service.
Dec 13 02:49:37.041901 systemd[1360]: Startup finished in 45ms.
Dec 13 02:49:37.042581 systemd[1]: Started session-1.scope.
Dec 13 02:49:37.605382 kubelet[1357]: E1213 02:49:37.605340    1357 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:49:37.606849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:49:37.606954 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:49:37.973184 login[1352]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 02:49:37.977036 systemd[1]: Started session-2.scope.
Dec 13 02:49:37.977335 systemd-logind[1236]: New session 2 of user core.
Dec 13 02:49:47.857444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:49:47.857622 systemd[1]: Stopped kubelet.service.
Dec 13 02:49:47.858870 systemd[1]: Starting kubelet.service...
Dec 13 02:49:48.105740 systemd[1]: Started kubelet.service.
Dec 13 02:49:48.226792 kubelet[1390]: E1213 02:49:48.226727    1390 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:49:48.229123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:49:48.229220 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:49:58.479885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 02:49:58.480032 systemd[1]: Stopped kubelet.service.
Dec 13 02:49:58.481215 systemd[1]: Starting kubelet.service...
Dec 13 02:49:58.804980 systemd[1]: Started kubelet.service.
Dec 13 02:49:58.834544 kubelet[1400]: E1213 02:49:58.834501    1400 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:49:58.835867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:49:58.835940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:50:09.052467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 02:50:09.052657 systemd[1]: Stopped kubelet.service.
Dec 13 02:50:09.053955 systemd[1]: Starting kubelet.service...
Dec 13 02:50:09.380639 systemd[1]: Started kubelet.service.
Dec 13 02:50:09.513576 kubelet[1410]: E1213 02:50:09.513542    1410 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:50:09.515170 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:50:09.515264 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:50:15.333825 systemd[1]: Created slice system-sshd.slice.
Dec 13 02:50:15.334783 systemd[1]: Started sshd@0-139.178.70.104:22-147.75.109.163:39562.service.
Dec 13 02:50:15.426735 sshd[1417]: Accepted publickey for core from 147.75.109.163 port 39562 ssh2: RSA SHA256:k2ByNGL46war/Xsk68FiWoh37KWlcdLKudymf+Foujk
Dec 13 02:50:15.427678 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:50:15.431806 systemd[1]: Started session-3.scope.
Dec 13 02:50:15.432058 systemd-logind[1236]: New session 3 of user core.
Dec 13 02:50:15.479441 systemd[1]: Started sshd@1-139.178.70.104:22-147.75.109.163:39572.service.
Dec 13 02:50:15.516390 sshd[1422]: Accepted publickey for core from 147.75.109.163 port 39572 ssh2: RSA SHA256:k2ByNGL46war/Xsk68FiWoh37KWlcdLKudymf+Foujk
Dec 13 02:50:15.517199 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:50:15.520298 systemd[1]: Started session-4.scope.
Dec 13 02:50:15.520501 systemd-logind[1236]: New session 4 of user core.
Dec 13 02:50:15.571769 sshd[1422]: pam_unix(sshd:session): session closed for user core
Dec 13 02:50:15.573897 systemd[1]: Started sshd@2-139.178.70.104:22-147.75.109.163:39588.service.
Dec 13 02:50:15.575729 systemd[1]: sshd@1-139.178.70.104:22-147.75.109.163:39572.service: Deactivated successfully.
Dec 13 02:50:15.576221 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:50:15.577891 systemd-logind[1236]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:50:15.578645 systemd-logind[1236]: Removed session 4.
Dec 13 02:50:15.612441 sshd[1427]: Accepted publickey for core from 147.75.109.163 port 39588 ssh2: RSA SHA256:k2ByNGL46war/Xsk68FiWoh37KWlcdLKudymf+Foujk
Dec 13 02:50:15.612924 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:50:15.615769 systemd[1]: Started session-5.scope.
Dec 13 02:50:15.615978 systemd-logind[1236]: New session 5 of user core.
Dec 13 02:50:15.663711 sshd[1427]: pam_unix(sshd:session): session closed for user core
Dec 13 02:50:15.666409 systemd[1]: sshd@2-139.178.70.104:22-147.75.109.163:39588.service: Deactivated successfully.
Dec 13 02:50:15.666859 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:50:15.667493 systemd-logind[1236]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:50:15.668339 systemd[1]: Started sshd@3-139.178.70.104:22-147.75.109.163:39600.service.
Dec 13 02:50:15.669290 systemd-logind[1236]: Removed session 5.
Dec 13 02:50:15.707156 sshd[1434]: Accepted publickey for core from 147.75.109.163 port 39600 ssh2: RSA SHA256:k2ByNGL46war/Xsk68FiWoh37KWlcdLKudymf+Foujk
Dec 13 02:50:15.708350 sshd[1434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:50:15.711285 systemd-logind[1236]: New session 6 of user core.
Dec 13 02:50:15.711982 systemd[1]: Started session-6.scope.
Dec 13 02:50:15.763447 sshd[1434]: pam_unix(sshd:session): session closed for user core
Dec 13 02:50:15.765492 systemd[1]: Started sshd@4-139.178.70.104:22-147.75.109.163:39604.service.
Dec 13 02:50:15.765781 systemd[1]: sshd@3-139.178.70.104:22-147.75.109.163:39600.service: Deactivated successfully.
Dec 13 02:50:15.766144 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:50:15.766534 systemd-logind[1236]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:50:15.767696 systemd-logind[1236]: Removed session 6.
Dec 13 02:50:15.802566 sshd[1439]: Accepted publickey for core from 147.75.109.163 port 39604 ssh2: RSA SHA256:k2ByNGL46war/Xsk68FiWoh37KWlcdLKudymf+Foujk
Dec 13 02:50:15.803320 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:50:15.805792 systemd-logind[1236]: New session 7 of user core.
Dec 13 02:50:15.806224 systemd[1]: Started session-7.scope.
Dec 13 02:50:15.892812 sudo[1443]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:50:15.892947 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 02:50:15.899040 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Dec 13 02:50:15.900621 systemd[1]: Starting coreos-metadata.service...
Dec 13 02:50:15.914072 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 02:50:15.914173 systemd[1]: Finished coreos-metadata.service.
Dec 13 02:50:16.850028 systemd[1]: Stopped kubelet.service.
Dec 13 02:50:16.851337 systemd[1]: Starting kubelet.service...
Dec 13 02:50:16.863652 systemd[1]: Reloading.
Dec 13 02:50:16.912042 /usr/lib/systemd/system-generators/torcx-generator[1511]: time="2024-12-13T02:50:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:50:16.912241 /usr/lib/systemd/system-generators/torcx-generator[1511]: time="2024-12-13T02:50:16Z" level=info msg="torcx already run"
Dec 13 02:50:16.971020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:50:16.971032 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:50:16.983065 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:50:17.062973 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 02:50:17.063026 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 02:50:17.063177 systemd[1]: Stopped kubelet.service.
Dec 13 02:50:17.064475 systemd[1]: Starting kubelet.service...
Dec 13 02:50:17.419795 systemd[1]: Started kubelet.service.
Dec 13 02:50:17.449226 kubelet[1576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:50:17.449440 kubelet[1576]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:50:17.449484 kubelet[1576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:50:17.449578 kubelet[1576]: I1213 02:50:17.449558    1576 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:50:17.684805 kubelet[1576]: I1213 02:50:17.684494    1576 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 02:50:17.684805 kubelet[1576]: I1213 02:50:17.684513    1576 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:50:17.684805 kubelet[1576]: I1213 02:50:17.684661    1576 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 02:50:17.996753 kubelet[1576]: I1213 02:50:17.996295    1576 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:50:18.032562 kubelet[1576]: I1213 02:50:18.032544    1576 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Dec 13 02:50:18.032904 kubelet[1576]: I1213 02:50:18.032886    1576 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:50:18.033094 kubelet[1576]: I1213 02:50:18.033085    1576 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:50:18.034570 kubelet[1576]: I1213 02:50:18.034560    1576 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:50:18.034630 kubelet[1576]: I1213 02:50:18.034622    1576 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:50:18.035699 kubelet[1576]: I1213 02:50:18.035688    1576 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:50:18.035845 kubelet[1576]: I1213 02:50:18.035835    1576 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 02:50:18.035898 kubelet[1576]: I1213 02:50:18.035891    1576 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:50:18.035966 kubelet[1576]: I1213 02:50:18.035957    1576 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:50:18.036028 kubelet[1576]: I1213 02:50:18.036020    1576 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:50:18.036198 kubelet[1576]: E1213 02:50:18.036144    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:18.036198 kubelet[1576]: E1213 02:50:18.036172    1576 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:18.037388 kubelet[1576]: I1213 02:50:18.037373    1576 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 02:50:18.039808 kubelet[1576]: W1213 02:50:18.039793    1576 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 02:50:18.039914 kubelet[1576]: E1213 02:50:18.039904    1576 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 02:50:18.040847 kubelet[1576]: W1213 02:50:18.040837    1576 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 02:50:18.040951 kubelet[1576]: E1213 02:50:18.040943    1576 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.67.124.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 02:50:18.041336 kubelet[1576]: I1213 02:50:18.041319    1576 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:50:18.041388 kubelet[1576]: W1213 02:50:18.041367    1576 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 02:50:18.041853 kubelet[1576]: I1213 02:50:18.041839    1576 server.go:1256] "Started kubelet"
Dec 13 02:50:18.043843 kernel: SELinux:  Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 02:50:18.044382 kubelet[1576]: I1213 02:50:18.043932    1576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:50:18.050415 kubelet[1576]: I1213 02:50:18.050399    1576 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:50:18.051266 kubelet[1576]: I1213 02:50:18.051255    1576 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 02:50:18.052093 kubelet[1576]: I1213 02:50:18.052081    1576 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:50:18.052347 kubelet[1576]: I1213 02:50:18.052337    1576 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:50:18.055008 kubelet[1576]: I1213 02:50:18.054984    1576 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 02:50:18.055529 kubelet[1576]: I1213 02:50:18.055514    1576 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 02:50:18.055644 kubelet[1576]: I1213 02:50:18.055637    1576 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 02:50:18.057100 kubelet[1576]: E1213 02:50:18.057075    1576 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:50:18.057730 kubelet[1576]: I1213 02:50:18.057718    1576 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:50:18.059127 kubelet[1576]: I1213 02:50:18.059115    1576 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:50:18.059245 kubelet[1576]: I1213 02:50:18.059230    1576 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:50:18.070217 kubelet[1576]: E1213 02:50:18.070188    1576 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.124.136\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 02:50:18.070308 kubelet[1576]: W1213 02:50:18.070237    1576 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 02:50:18.070308 kubelet[1576]: E1213 02:50:18.070254    1576 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 02:50:18.071607 kubelet[1576]: E1213 02:50:18.071590    1576 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.136.18109cc30a3ac7fc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.136,UID:10.67.124.136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.67.124.136,},FirstTimestamp:2024-12-13 02:50:18.04182118 +0000 UTC m=+0.619164536,LastTimestamp:2024-12-13 02:50:18.04182118 +0000 UTC m=+0.619164536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.136,}"
Dec 13 02:50:18.072826 kubelet[1576]: E1213 02:50:18.072808    1576 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.136.18109cc30b235961  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.136,UID:10.67.124.136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.67.124.136,},FirstTimestamp:2024-12-13 02:50:18.057062753 +0000 UTC m=+0.634406114,LastTimestamp:2024-12-13 02:50:18.057062753 +0000 UTC m=+0.634406114,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.136,}"
Dec 13 02:50:18.078149 kubelet[1576]: I1213 02:50:18.075540    1576 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:50:18.078149 kubelet[1576]: I1213 02:50:18.075558    1576 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:50:18.078149 kubelet[1576]: I1213 02:50:18.075578    1576 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:50:18.078149 kubelet[1576]: I1213 02:50:18.076471    1576 policy_none.go:49] "None policy: Start"
Dec 13 02:50:18.082012 kubelet[1576]: E1213 02:50:18.079561    1576 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.136.18109cc30c36e9f9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.136,UID:10.67.124.136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.67.124.136 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.67.124.136,},FirstTimestamp:2024-12-13 02:50:18.075122169 +0000 UTC m=+0.652465526,LastTimestamp:2024-12-13 02:50:18.075122169 +0000 UTC m=+0.652465526,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.136,}"
Dec 13 02:50:18.082933 kubelet[1576]: I1213 02:50:18.082546    1576 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:50:18.082933 kubelet[1576]: I1213 02:50:18.082567    1576 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:50:18.089921 systemd[1]: Created slice kubepods.slice.
Dec 13 02:50:18.096244 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 02:50:18.101162 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 02:50:18.108244 kubelet[1576]: I1213 02:50:18.108219    1576 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:50:18.108392 kubelet[1576]: I1213 02:50:18.108379    1576 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:50:18.111212 kubelet[1576]: E1213 02:50:18.111177    1576 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.136\" not found"
Dec 13 02:50:18.156227 kubelet[1576]: I1213 02:50:18.156204    1576 kubelet_node_status.go:73] "Attempting to register node" node="10.67.124.136"
Dec 13 02:50:18.164239 kubelet[1576]: I1213 02:50:18.164209    1576 kubelet_node_status.go:76] "Successfully registered node" node="10.67.124.136"
Dec 13 02:50:18.201609 kubelet[1576]: E1213 02:50:18.201584    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.303553 kubelet[1576]: E1213 02:50:18.302284    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.402823 kubelet[1576]: E1213 02:50:18.402804    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.439603 kubelet[1576]: I1213 02:50:18.439577    1576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:50:18.440403 kubelet[1576]: I1213 02:50:18.440389    1576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:50:18.440510 kubelet[1576]: I1213 02:50:18.440499    1576 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:50:18.440603 kubelet[1576]: I1213 02:50:18.440591    1576 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 02:50:18.440700 kubelet[1576]: E1213 02:50:18.440692    1576 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 02:50:18.503032 kubelet[1576]: E1213 02:50:18.503008    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.603968 kubelet[1576]: E1213 02:50:18.603882    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.686704 kubelet[1576]: I1213 02:50:18.686678    1576 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 02:50:18.686951 kubelet[1576]: W1213 02:50:18.686938    1576 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 02:50:18.705021 kubelet[1576]: E1213 02:50:18.705003    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.805568 kubelet[1576]: E1213 02:50:18.805536    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.906327 kubelet[1576]: E1213 02:50:18.906123    1576 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.67.124.136\" not found"
Dec 13 02:50:18.941633 sudo[1443]: pam_unix(sudo:session): session closed for user root
Dec 13 02:50:18.952407 sshd[1439]: pam_unix(sshd:session): session closed for user core
Dec 13 02:50:18.953972 systemd[1]: sshd@4-139.178.70.104:22-147.75.109.163:39604.service: Deactivated successfully.
Dec 13 02:50:18.954572 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:50:18.955144 systemd-logind[1236]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:50:18.955792 systemd-logind[1236]: Removed session 7.
Dec 13 02:50:19.007302 kubelet[1576]: I1213 02:50:19.007274    1576 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 02:50:19.007710 env[1248]: time="2024-12-13T02:50:19.007623787Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 02:50:19.007967 kubelet[1576]: I1213 02:50:19.007774    1576 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 02:50:19.036449 kubelet[1576]: I1213 02:50:19.036425    1576 apiserver.go:52] "Watching apiserver"
Dec 13 02:50:19.036534 kubelet[1576]: E1213 02:50:19.036486    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:19.052650 kubelet[1576]: I1213 02:50:19.052634    1576 topology_manager.go:215] "Topology Admit Handler" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" podNamespace="kube-system" podName="cilium-mt8qr"
Dec 13 02:50:19.052834 kubelet[1576]: I1213 02:50:19.052823    1576 topology_manager.go:215] "Topology Admit Handler" podUID="d16d81af-040d-46d7-aea9-519459e05fcc" podNamespace="kube-system" podName="kube-proxy-wv2qw"
Dec 13 02:50:19.056090 kubelet[1576]: I1213 02:50:19.055840    1576 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 02:50:19.056844 systemd[1]: Created slice kubepods-besteffort-podd16d81af_040d_46d7_aea9_519459e05fcc.slice.
Dec 13 02:50:19.061083 kubelet[1576]: I1213 02:50:19.061061    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d16d81af-040d-46d7-aea9-519459e05fcc-xtables-lock\") pod \"kube-proxy-wv2qw\" (UID: \"d16d81af-040d-46d7-aea9-519459e05fcc\") " pod="kube-system/kube-proxy-wv2qw"
Dec 13 02:50:19.061164 kubelet[1576]: I1213 02:50:19.061095    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-run\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061164 kubelet[1576]: I1213 02:50:19.061112    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-etc-cni-netd\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061164 kubelet[1576]: I1213 02:50:19.061128    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-kernel\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061164 kubelet[1576]: I1213 02:50:19.061151    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d16d81af-040d-46d7-aea9-519459e05fcc-kube-proxy\") pod \"kube-proxy-wv2qw\" (UID: \"d16d81af-040d-46d7-aea9-519459e05fcc\") " pod="kube-system/kube-proxy-wv2qw"
Dec 13 02:50:19.061281 kubelet[1576]: I1213 02:50:19.061171    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8bd\" (UniqueName: \"kubernetes.io/projected/d16d81af-040d-46d7-aea9-519459e05fcc-kube-api-access-tb8bd\") pod \"kube-proxy-wv2qw\" (UID: \"d16d81af-040d-46d7-aea9-519459e05fcc\") " pod="kube-system/kube-proxy-wv2qw"
Dec 13 02:50:19.061281 kubelet[1576]: I1213 02:50:19.061186    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cni-path\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061281 kubelet[1576]: I1213 02:50:19.061209    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-net\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061281 kubelet[1576]: I1213 02:50:19.061226    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-hubble-tls\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061281 kubelet[1576]: I1213 02:50:19.061243    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d16d81af-040d-46d7-aea9-519459e05fcc-lib-modules\") pod \"kube-proxy-wv2qw\" (UID: \"d16d81af-040d-46d7-aea9-519459e05fcc\") " pod="kube-system/kube-proxy-wv2qw"
Dec 13 02:50:19.061281 kubelet[1576]: I1213 02:50:19.061257    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-bpf-maps\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061447 kubelet[1576]: I1213 02:50:19.061282    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-hostproc\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061447 kubelet[1576]: I1213 02:50:19.061303    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-cgroup\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061447 kubelet[1576]: I1213 02:50:19.061320    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-lib-modules\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061447 kubelet[1576]: I1213 02:50:19.061334    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljg8\" (UniqueName: \"kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-kube-api-access-wljg8\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061447 kubelet[1576]: I1213 02:50:19.061348    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-xtables-lock\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.061447 kubelet[1576]: I1213 02:50:19.061362    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b841991-65eb-408b-881c-9deda56e2ce4-clustermesh-secrets\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.062336 kubelet[1576]: I1213 02:50:19.061402    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-config-path\") pod \"cilium-mt8qr\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") " pod="kube-system/cilium-mt8qr"
Dec 13 02:50:19.064685 systemd[1]: Created slice kubepods-burstable-pod0b841991_65eb_408b_881c_9deda56e2ce4.slice.
Dec 13 02:50:19.363602 env[1248]: time="2024-12-13T02:50:19.363564483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wv2qw,Uid:d16d81af-040d-46d7-aea9-519459e05fcc,Namespace:kube-system,Attempt:0,}"
Dec 13 02:50:19.372955 env[1248]: time="2024-12-13T02:50:19.372639479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mt8qr,Uid:0b841991-65eb-408b-881c-9deda56e2ce4,Namespace:kube-system,Attempt:0,}"
Dec 13 02:50:20.037248 kubelet[1576]: E1213 02:50:20.037220    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:20.460969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914748852.mount: Deactivated successfully.
Dec 13 02:50:20.513947 env[1248]: time="2024-12-13T02:50:20.513915195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.524337 systemd[1]: Started sshd@5-139.178.70.104:22-45.148.10.203:47650.service.
Dec 13 02:50:20.528329 env[1248]: time="2024-12-13T02:50:20.528298842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.608377 env[1248]: time="2024-12-13T02:50:20.608343965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.625675 env[1248]: time="2024-12-13T02:50:20.625614834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.639326 env[1248]: time="2024-12-13T02:50:20.639303730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.651381 env[1248]: time="2024-12-13T02:50:20.651365027Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.668270 env[1248]: time="2024-12-13T02:50:20.668252601Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.683137 env[1248]: time="2024-12-13T02:50:20.683084388Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:20.743018 env[1248]: time="2024-12-13T02:50:20.742884261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:50:20.743018 env[1248]: time="2024-12-13T02:50:20.742928399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:50:20.743018 env[1248]: time="2024-12-13T02:50:20.742943328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:50:20.743489 env[1248]: time="2024-12-13T02:50:20.743251451Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d4948a9b4794816949f3c3e326a383ef30834aeadee43770c12ec44163fd568 pid=1637 runtime=io.containerd.runc.v2
Dec 13 02:50:20.749264 env[1248]: time="2024-12-13T02:50:20.749136769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:50:20.749264 env[1248]: time="2024-12-13T02:50:20.749164923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:50:20.749264 env[1248]: time="2024-12-13T02:50:20.749172470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:50:20.749443 env[1248]: time="2024-12-13T02:50:20.749275938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d pid=1647 runtime=io.containerd.runc.v2
Dec 13 02:50:20.760429 systemd[1]: Started cri-containerd-41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d.scope.
Dec 13 02:50:20.768326 systemd[1]: Started cri-containerd-8d4948a9b4794816949f3c3e326a383ef30834aeadee43770c12ec44163fd568.scope.
Dec 13 02:50:20.796023 env[1248]: time="2024-12-13T02:50:20.795989132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mt8qr,Uid:0b841991-65eb-408b-881c-9deda56e2ce4,Namespace:kube-system,Attempt:0,} returns sandbox id \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\""
Dec 13 02:50:20.797716 env[1248]: time="2024-12-13T02:50:20.797692008Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 02:50:20.800719 env[1248]: time="2024-12-13T02:50:20.800689814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wv2qw,Uid:d16d81af-040d-46d7-aea9-519459e05fcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d4948a9b4794816949f3c3e326a383ef30834aeadee43770c12ec44163fd568\""
Dec 13 02:50:20.919180 update_engine[1237]: I1213 02:50:20.919121  1237 update_attempter.cc:509] Updating boot flags...
Dec 13 02:50:21.037389 kubelet[1576]: E1213 02:50:21.037322    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:21.226679 sshd[1621]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:50:22.037981 kubelet[1576]: E1213 02:50:22.037952    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:23.038577 kubelet[1576]: E1213 02:50:23.038556    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:23.112629 sshd[1621]: Failed password for root from 45.148.10.203 port 47650 ssh2
Dec 13 02:50:23.335609 sshd[1621]: Connection closed by authenticating user root 45.148.10.203 port 47650 [preauth]
Dec 13 02:50:23.336266 systemd[1]: sshd@5-139.178.70.104:22-45.148.10.203:47650.service: Deactivated successfully.
Dec 13 02:50:24.039665 kubelet[1576]: E1213 02:50:24.039630    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:25.040391 kubelet[1576]: E1213 02:50:25.040361    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:26.040805 kubelet[1576]: E1213 02:50:26.040775    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:26.188089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2320184558.mount: Deactivated successfully.
Dec 13 02:50:27.041129 kubelet[1576]: E1213 02:50:27.041104    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:27.473581 systemd[1]: Started sshd@6-139.178.70.104:22-45.148.10.203:47656.service.
Dec 13 02:50:28.041217 kubelet[1576]: E1213 02:50:28.041185    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:28.172137 sshd[1731]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:50:28.427339 env[1248]: time="2024-12-13T02:50:28.427305559Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:28.428033 env[1248]: time="2024-12-13T02:50:28.428020391Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:28.428876 env[1248]: time="2024-12-13T02:50:28.428859417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:28.429302 env[1248]: time="2024-12-13T02:50:28.429283314Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 02:50:28.429923 env[1248]: time="2024-12-13T02:50:28.429907284Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 02:50:28.430734 env[1248]: time="2024-12-13T02:50:28.430718308Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:50:28.437450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184059491.mount: Deactivated successfully.
Dec 13 02:50:28.441981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737077825.mount: Deactivated successfully.
Dec 13 02:50:28.445159 env[1248]: time="2024-12-13T02:50:28.445126757Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\""
Dec 13 02:50:28.445633 env[1248]: time="2024-12-13T02:50:28.445618413Z" level=info msg="StartContainer for \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\""
Dec 13 02:50:28.458424 systemd[1]: Started cri-containerd-d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4.scope.
Dec 13 02:50:28.498453 systemd[1]: cri-containerd-d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4.scope: Deactivated successfully.
Dec 13 02:50:28.521306 env[1248]: time="2024-12-13T02:50:28.521158721Z" level=info msg="StartContainer for \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\" returns successfully"
Dec 13 02:50:28.709017 env[1248]: time="2024-12-13T02:50:28.708497347Z" level=info msg="shim disconnected" id=d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4
Dec 13 02:50:28.709017 env[1248]: time="2024-12-13T02:50:28.708544298Z" level=warning msg="cleaning up after shim disconnected" id=d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4 namespace=k8s.io
Dec 13 02:50:28.709017 env[1248]: time="2024-12-13T02:50:28.708553111Z" level=info msg="cleaning up dead shim"
Dec 13 02:50:28.714911 env[1248]: time="2024-12-13T02:50:28.714886479Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:50:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1784 runtime=io.containerd.runc.v2\n"
Dec 13 02:50:29.041918 kubelet[1576]: E1213 02:50:29.041647    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:29.436053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4-rootfs.mount: Deactivated successfully.
Dec 13 02:50:29.458588 env[1248]: time="2024-12-13T02:50:29.458555881Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:50:29.465344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882326839.mount: Deactivated successfully.
Dec 13 02:50:29.470412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823278334.mount: Deactivated successfully.
Dec 13 02:50:29.485909 env[1248]: time="2024-12-13T02:50:29.485867819Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\""
Dec 13 02:50:29.486320 env[1248]: time="2024-12-13T02:50:29.486299792Z" level=info msg="StartContainer for \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\""
Dec 13 02:50:29.511605 systemd[1]: Started cri-containerd-3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f.scope.
Dec 13 02:50:29.544851 env[1248]: time="2024-12-13T02:50:29.544631800Z" level=info msg="StartContainer for \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\" returns successfully"
Dec 13 02:50:29.552170 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:50:29.552304 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:50:29.553045 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 02:50:29.554221 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:50:29.555647 systemd[1]: cri-containerd-3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f.scope: Deactivated successfully.
Dec 13 02:50:29.567819 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:50:29.625641 env[1248]: time="2024-12-13T02:50:29.625599627Z" level=info msg="shim disconnected" id=3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f
Dec 13 02:50:29.625641 env[1248]: time="2024-12-13T02:50:29.625626276Z" level=warning msg="cleaning up after shim disconnected" id=3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f namespace=k8s.io
Dec 13 02:50:29.625641 env[1248]: time="2024-12-13T02:50:29.625632055Z" level=info msg="cleaning up dead shim"
Dec 13 02:50:29.638267 env[1248]: time="2024-12-13T02:50:29.638224069Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:50:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1848 runtime=io.containerd.runc.v2\n"
Dec 13 02:50:30.042810 kubelet[1576]: E1213 02:50:30.042775    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:30.169089 env[1248]: time="2024-12-13T02:50:30.169059715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:30.169819 env[1248]: time="2024-12-13T02:50:30.169700275Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:30.170718 env[1248]: time="2024-12-13T02:50:30.170705853Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:30.171301 env[1248]: time="2024-12-13T02:50:30.171241947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:30.172557 env[1248]: time="2024-12-13T02:50:30.171768677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 02:50:30.174623 env[1248]: time="2024-12-13T02:50:30.174343089Z" level=info msg="CreateContainer within sandbox \"8d4948a9b4794816949f3c3e326a383ef30834aeadee43770c12ec44163fd568\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 02:50:30.180901 env[1248]: time="2024-12-13T02:50:30.180862158Z" level=info msg="CreateContainer within sandbox \"8d4948a9b4794816949f3c3e326a383ef30834aeadee43770c12ec44163fd568\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86f93f8eafab5021342ca5a46f8b6420acfda6c8d377b7ef3f9cbc65da177f49\""
Dec 13 02:50:30.181646 env[1248]: time="2024-12-13T02:50:30.181624487Z" level=info msg="StartContainer for \"86f93f8eafab5021342ca5a46f8b6420acfda6c8d377b7ef3f9cbc65da177f49\""
Dec 13 02:50:30.193761 systemd[1]: Started cri-containerd-86f93f8eafab5021342ca5a46f8b6420acfda6c8d377b7ef3f9cbc65da177f49.scope.
Dec 13 02:50:30.227930 env[1248]: time="2024-12-13T02:50:30.227895784Z" level=info msg="StartContainer for \"86f93f8eafab5021342ca5a46f8b6420acfda6c8d377b7ef3f9cbc65da177f49\" returns successfully"
Dec 13 02:50:30.353645 sshd[1731]: Failed password for root from 45.148.10.203 port 47656 ssh2
Dec 13 02:50:30.437724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1417042630.mount: Deactivated successfully.
Dec 13 02:50:30.461614 env[1248]: time="2024-12-13T02:50:30.461589302Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:50:30.470258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112763343.mount: Deactivated successfully.
Dec 13 02:50:30.476329 kubelet[1576]: I1213 02:50:30.476236    1576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wv2qw" podStartSLOduration=3.105561988 podStartE2EDuration="12.476205768s" podCreationTimestamp="2024-12-13 02:50:18 +0000 UTC" firstStartedPulling="2024-12-13 02:50:20.802204105 +0000 UTC m=+3.379547456" lastFinishedPulling="2024-12-13 02:50:30.172847884 +0000 UTC m=+12.750191236" observedRunningTime="2024-12-13 02:50:30.465811365 +0000 UTC m=+13.043154721" watchObservedRunningTime="2024-12-13 02:50:30.476205768 +0000 UTC m=+13.053549124"
Dec 13 02:50:30.477231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114319679.mount: Deactivated successfully.
Dec 13 02:50:30.477913 env[1248]: time="2024-12-13T02:50:30.477889245Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\""
Dec 13 02:50:30.478635 env[1248]: time="2024-12-13T02:50:30.478621460Z" level=info msg="StartContainer for \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\""
Dec 13 02:50:30.491609 systemd[1]: Started cri-containerd-a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81.scope.
Dec 13 02:50:30.518899 env[1248]: time="2024-12-13T02:50:30.518872023Z" level=info msg="StartContainer for \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\" returns successfully"
Dec 13 02:50:30.522888 systemd[1]: cri-containerd-a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81.scope: Deactivated successfully.
Dec 13 02:50:30.541975 env[1248]: time="2024-12-13T02:50:30.541947556Z" level=info msg="shim disconnected" id=a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81
Dec 13 02:50:30.542149 env[1248]: time="2024-12-13T02:50:30.542137763Z" level=warning msg="cleaning up after shim disconnected" id=a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81 namespace=k8s.io
Dec 13 02:50:30.542209 env[1248]: time="2024-12-13T02:50:30.542199528Z" level=info msg="cleaning up dead shim"
Dec 13 02:50:30.546859 env[1248]: time="2024-12-13T02:50:30.546834282Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:50:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2066 runtime=io.containerd.runc.v2\n"
Dec 13 02:50:31.043945 kubelet[1576]: E1213 02:50:31.043904    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:31.470433 env[1248]: time="2024-12-13T02:50:31.470399996Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:50:31.478604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3269624140.mount: Deactivated successfully.
Dec 13 02:50:31.481848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760793546.mount: Deactivated successfully.
Dec 13 02:50:31.484578 env[1248]: time="2024-12-13T02:50:31.484552704Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\""
Dec 13 02:50:31.485080 env[1248]: time="2024-12-13T02:50:31.485060047Z" level=info msg="StartContainer for \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\""
Dec 13 02:50:31.495806 systemd[1]: Started cri-containerd-f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb.scope.
Dec 13 02:50:31.514504 env[1248]: time="2024-12-13T02:50:31.514479374Z" level=info msg="StartContainer for \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\" returns successfully"
Dec 13 02:50:31.515892 systemd[1]: cri-containerd-f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb.scope: Deactivated successfully.
Dec 13 02:50:31.531981 env[1248]: time="2024-12-13T02:50:31.531945931Z" level=info msg="shim disconnected" id=f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb
Dec 13 02:50:31.531981 env[1248]: time="2024-12-13T02:50:31.531976306Z" level=warning msg="cleaning up after shim disconnected" id=f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb namespace=k8s.io
Dec 13 02:50:31.531981 env[1248]: time="2024-12-13T02:50:31.531984121Z" level=info msg="cleaning up dead shim"
Dec 13 02:50:31.536754 env[1248]: time="2024-12-13T02:50:31.536728718Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:50:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2124 runtime=io.containerd.runc.v2\n"
Dec 13 02:50:32.044504 kubelet[1576]: E1213 02:50:32.044481    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:32.253013 sshd[1731]: Connection closed by authenticating user root 45.148.10.203 port 47656 [preauth]
Dec 13 02:50:32.253487 systemd[1]: sshd@6-139.178.70.104:22-45.148.10.203:47656.service: Deactivated successfully.
Dec 13 02:50:32.466586 env[1248]: time="2024-12-13T02:50:32.466558095Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:50:32.474186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2672962431.mount: Deactivated successfully.
Dec 13 02:50:32.477595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153196621.mount: Deactivated successfully.
Dec 13 02:50:32.479703 env[1248]: time="2024-12-13T02:50:32.479678242Z" level=info msg="CreateContainer within sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\""
Dec 13 02:50:32.480511 env[1248]: time="2024-12-13T02:50:32.480494143Z" level=info msg="StartContainer for \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\""
Dec 13 02:50:32.491247 systemd[1]: Started cri-containerd-a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125.scope.
Dec 13 02:50:32.510948 env[1248]: time="2024-12-13T02:50:32.510916433Z" level=info msg="StartContainer for \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\" returns successfully"
Dec 13 02:50:32.573562 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 02:50:32.611562 kubelet[1576]: I1213 02:50:32.611540    1576 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 02:50:32.822541 kernel: Initializing XFRM netlink socket
Dec 13 02:50:32.824542 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 02:50:33.044844 kubelet[1576]: E1213 02:50:33.044810    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:33.483089 kubelet[1576]: I1213 02:50:33.482913    1576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mt8qr" podStartSLOduration=7.850628238 podStartE2EDuration="15.482879403s" podCreationTimestamp="2024-12-13 02:50:18 +0000 UTC" firstStartedPulling="2024-12-13 02:50:20.797282619 +0000 UTC m=+3.374625968" lastFinishedPulling="2024-12-13 02:50:28.42953378 +0000 UTC m=+11.006877133" observedRunningTime="2024-12-13 02:50:33.482620288 +0000 UTC m=+16.059963648" watchObservedRunningTime="2024-12-13 02:50:33.482879403 +0000 UTC m=+16.060222762"
Dec 13 02:50:34.045659 kubelet[1576]: E1213 02:50:34.045626    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:34.435115 systemd-networkd[1064]: cilium_host: Link UP
Dec 13 02:50:34.436170 systemd-networkd[1064]: cilium_net: Link UP
Dec 13 02:50:34.438789 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 02:50:34.438864 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 02:50:34.438992 systemd-networkd[1064]: cilium_net: Gained carrier
Dec 13 02:50:34.439127 systemd-networkd[1064]: cilium_host: Gained carrier
Dec 13 02:50:34.520200 systemd-networkd[1064]: cilium_vxlan: Link UP
Dec 13 02:50:34.520204 systemd-networkd[1064]: cilium_vxlan: Gained carrier
Dec 13 02:50:34.550681 systemd-networkd[1064]: cilium_net: Gained IPv6LL
Dec 13 02:50:34.687538 kernel: NET: Registered PF_ALG protocol family
Dec 13 02:50:35.046164 kubelet[1576]: E1213 02:50:35.046138    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:35.148136 systemd-networkd[1064]: lxc_health: Link UP
Dec 13 02:50:35.161032 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:50:35.160713 systemd-networkd[1064]: lxc_health: Gained carrier
Dec 13 02:50:35.287351 systemd-networkd[1064]: cilium_host: Gained IPv6LL
Dec 13 02:50:35.582196 kubelet[1576]: I1213 02:50:35.582169    1576 topology_manager.go:215] "Topology Admit Handler" podUID="b9cf3743-2329-4b07-9cd1-e827fdc866b3" podNamespace="default" podName="nginx-deployment-6d5f899847-5mvnl"
Dec 13 02:50:35.586832 systemd[1]: Created slice kubepods-besteffort-podb9cf3743_2329_4b07_9cd1_e827fdc866b3.slice.
Dec 13 02:50:35.652050 kubelet[1576]: I1213 02:50:35.652017    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vckp4\" (UniqueName: \"kubernetes.io/projected/b9cf3743-2329-4b07-9cd1-e827fdc866b3-kube-api-access-vckp4\") pod \"nginx-deployment-6d5f899847-5mvnl\" (UID: \"b9cf3743-2329-4b07-9cd1-e827fdc866b3\") " pod="default/nginx-deployment-6d5f899847-5mvnl"
Dec 13 02:50:35.889134 env[1248]: time="2024-12-13T02:50:35.889068776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5mvnl,Uid:b9cf3743-2329-4b07-9cd1-e827fdc866b3,Namespace:default,Attempt:0,}"
Dec 13 02:50:35.928954 systemd-networkd[1064]: lxc9b42429e5ea3: Link UP
Dec 13 02:50:35.934547 kernel: eth0: renamed from tmpd836f
Dec 13 02:50:35.937555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:50:35.937598 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9b42429e5ea3: link becomes ready
Dec 13 02:50:35.938562 systemd-networkd[1064]: lxc9b42429e5ea3: Gained carrier
Dec 13 02:50:36.047232 kubelet[1576]: E1213 02:50:36.047189    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:36.391911 systemd[1]: Started sshd@7-139.178.70.104:22-45.148.10.203:38902.service.
Dec 13 02:50:36.502699 systemd-networkd[1064]: cilium_vxlan: Gained IPv6LL
Dec 13 02:50:36.950636 systemd-networkd[1064]: lxc_health: Gained IPv6LL
Dec 13 02:50:37.047666 kubelet[1576]: E1213 02:50:37.047634    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:37.078714 systemd-networkd[1064]: lxc9b42429e5ea3: Gained IPv6LL
Dec 13 02:50:37.130090 sshd[2631]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:50:38.036292 kubelet[1576]: E1213 02:50:38.036267    1576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:38.048584 kubelet[1576]: E1213 02:50:38.048562    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:38.461584 env[1248]: time="2024-12-13T02:50:38.461528313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:50:38.461584 env[1248]: time="2024-12-13T02:50:38.461561000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:50:38.461584 env[1248]: time="2024-12-13T02:50:38.461568257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:50:38.462093 env[1248]: time="2024-12-13T02:50:38.462064848Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d836fc3e465e3acaff8fc7c30a750093bda70bd78dec99cca2c1ac6253184c15 pid=2654 runtime=io.containerd.runc.v2
Dec 13 02:50:38.473364 systemd[1]: run-containerd-runc-k8s.io-d836fc3e465e3acaff8fc7c30a750093bda70bd78dec99cca2c1ac6253184c15-runc.xWtZiC.mount: Deactivated successfully.
Dec 13 02:50:38.476037 systemd[1]: Started cri-containerd-d836fc3e465e3acaff8fc7c30a750093bda70bd78dec99cca2c1ac6253184c15.scope.
Dec 13 02:50:38.487582 systemd-resolved[1196]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 02:50:38.506784 env[1248]: time="2024-12-13T02:50:38.506758357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5mvnl,Uid:b9cf3743-2329-4b07-9cd1-e827fdc866b3,Namespace:default,Attempt:0,} returns sandbox id \"d836fc3e465e3acaff8fc7c30a750093bda70bd78dec99cca2c1ac6253184c15\""
Dec 13 02:50:38.507935 env[1248]: time="2024-12-13T02:50:38.507905371Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:50:38.879619 sshd[2631]: Failed password for root from 45.148.10.203 port 38902 ssh2
Dec 13 02:50:39.049440 kubelet[1576]: E1213 02:50:39.049330    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:39.238590 sshd[2631]: Connection closed by authenticating user root 45.148.10.203 port 38902 [preauth]
Dec 13 02:50:39.238936 systemd[1]: sshd@7-139.178.70.104:22-45.148.10.203:38902.service: Deactivated successfully.
Dec 13 02:50:40.050156 kubelet[1576]: E1213 02:50:40.050128    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:41.051283 kubelet[1576]: E1213 02:50:41.051247    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:41.068247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649185762.mount: Deactivated successfully.
Dec 13 02:50:42.051659 kubelet[1576]: E1213 02:50:42.051634    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:42.272986 env[1248]: time="2024-12-13T02:50:42.272882032Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:42.273645 env[1248]: time="2024-12-13T02:50:42.273631237Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:42.275152 env[1248]: time="2024-12-13T02:50:42.275117146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:42.275880 env[1248]: time="2024-12-13T02:50:42.275844422Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:50:42.277732 env[1248]: time="2024-12-13T02:50:42.277235747Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:42.278096 env[1248]: time="2024-12-13T02:50:42.278077373Z" level=info msg="CreateContainer within sandbox \"d836fc3e465e3acaff8fc7c30a750093bda70bd78dec99cca2c1ac6253184c15\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 02:50:42.288736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358347203.mount: Deactivated successfully.
Dec 13 02:50:42.292478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184205795.mount: Deactivated successfully.
Dec 13 02:50:42.334217 env[1248]: time="2024-12-13T02:50:42.334181544Z" level=info msg="CreateContainer within sandbox \"d836fc3e465e3acaff8fc7c30a750093bda70bd78dec99cca2c1ac6253184c15\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0b8eeefbfe9240a69f863eca825f17af5aef49e1ea823029ba8d01201a37d343\""
Dec 13 02:50:42.334854 env[1248]: time="2024-12-13T02:50:42.334835866Z" level=info msg="StartContainer for \"0b8eeefbfe9240a69f863eca825f17af5aef49e1ea823029ba8d01201a37d343\""
Dec 13 02:50:42.348285 systemd[1]: Started cri-containerd-0b8eeefbfe9240a69f863eca825f17af5aef49e1ea823029ba8d01201a37d343.scope.
Dec 13 02:50:42.376313 env[1248]: time="2024-12-13T02:50:42.376278698Z" level=info msg="StartContainer for \"0b8eeefbfe9240a69f863eca825f17af5aef49e1ea823029ba8d01201a37d343\" returns successfully"
Dec 13 02:50:42.486436 kubelet[1576]: I1213 02:50:42.486411    1576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-5mvnl" podStartSLOduration=3.717909914 podStartE2EDuration="7.486365977s" podCreationTimestamp="2024-12-13 02:50:35 +0000 UTC" firstStartedPulling="2024-12-13 02:50:38.507592592 +0000 UTC m=+21.084935945" lastFinishedPulling="2024-12-13 02:50:42.276048655 +0000 UTC m=+24.853392008" observedRunningTime="2024-12-13 02:50:42.48627766 +0000 UTC m=+25.063621021" watchObservedRunningTime="2024-12-13 02:50:42.486365977 +0000 UTC m=+25.063709338"
Dec 13 02:50:43.051967 kubelet[1576]: E1213 02:50:43.051937    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:43.377823 systemd[1]: Started sshd@8-139.178.70.104:22-45.148.10.203:49202.service.
Dec 13 02:50:44.052822 kubelet[1576]: E1213 02:50:44.052777    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:44.082607 sshd[2745]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:50:45.053056 kubelet[1576]: E1213 02:50:45.053007    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:45.461693 sshd[2745]: Failed password for root from 45.148.10.203 port 49202 ssh2
Dec 13 02:50:46.054040 kubelet[1576]: E1213 02:50:46.053971    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:46.190965 sshd[2745]: Connection closed by authenticating user root 45.148.10.203 port 49202 [preauth]
Dec 13 02:50:46.191701 systemd[1]: sshd@8-139.178.70.104:22-45.148.10.203:49202.service: Deactivated successfully.
Dec 13 02:50:47.055138 kubelet[1576]: E1213 02:50:47.055092    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:48.056104 kubelet[1576]: E1213 02:50:48.056059    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:49.042046 kubelet[1576]: I1213 02:50:49.042012    1576 topology_manager.go:215] "Topology Admit Handler" podUID="521a0817-7e07-402f-bb83-727d3ad5067d" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 02:50:49.046428 systemd[1]: Created slice kubepods-besteffort-pod521a0817_7e07_402f_bb83_727d3ad5067d.slice.
Dec 13 02:50:49.056293 kubelet[1576]: E1213 02:50:49.056264    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:49.122037 kubelet[1576]: I1213 02:50:49.121995    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqd42\" (UniqueName: \"kubernetes.io/projected/521a0817-7e07-402f-bb83-727d3ad5067d-kube-api-access-mqd42\") pod \"nfs-server-provisioner-0\" (UID: \"521a0817-7e07-402f-bb83-727d3ad5067d\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:50:49.122037 kubelet[1576]: I1213 02:50:49.122026    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/521a0817-7e07-402f-bb83-727d3ad5067d-data\") pod \"nfs-server-provisioner-0\" (UID: \"521a0817-7e07-402f-bb83-727d3ad5067d\") " pod="default/nfs-server-provisioner-0"
Dec 13 02:50:49.350265 env[1248]: time="2024-12-13T02:50:49.350229014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:521a0817-7e07-402f-bb83-727d3ad5067d,Namespace:default,Attempt:0,}"
Dec 13 02:50:49.402038 systemd-networkd[1064]: lxcef6453e54311: Link UP
Dec 13 02:50:49.411544 kernel: eth0: renamed from tmp516ea
Dec 13 02:50:49.417255 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:50:49.417329 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcef6453e54311: link becomes ready
Dec 13 02:50:49.417280 systemd-networkd[1064]: lxcef6453e54311: Gained carrier
Dec 13 02:50:49.635402 env[1248]: time="2024-12-13T02:50:49.635206360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:50:49.635402 env[1248]: time="2024-12-13T02:50:49.635229942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:50:49.635600 env[1248]: time="2024-12-13T02:50:49.635547360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:50:49.638232 env[1248]: time="2024-12-13T02:50:49.635715705Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/516ea8dc69de76bc1e009cab57c3b629b6e6e0527b6ada409964aff6a7894f3b pid=2793 runtime=io.containerd.runc.v2
Dec 13 02:50:49.646454 systemd[1]: Started cri-containerd-516ea8dc69de76bc1e009cab57c3b629b6e6e0527b6ada409964aff6a7894f3b.scope.
Dec 13 02:50:49.657728 systemd-resolved[1196]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 02:50:49.677531 env[1248]: time="2024-12-13T02:50:49.677495492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:521a0817-7e07-402f-bb83-727d3ad5067d,Namespace:default,Attempt:0,} returns sandbox id \"516ea8dc69de76bc1e009cab57c3b629b6e6e0527b6ada409964aff6a7894f3b\""
Dec 13 02:50:49.678615 env[1248]: time="2024-12-13T02:50:49.678562519Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 02:50:50.057036 kubelet[1576]: E1213 02:50:50.057007    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:50.230613 systemd[1]: run-containerd-runc-k8s.io-516ea8dc69de76bc1e009cab57c3b629b6e6e0527b6ada409964aff6a7894f3b-runc.SZ3lPQ.mount: Deactivated successfully.
Dec 13 02:50:50.329997 systemd[1]: Started sshd@9-139.178.70.104:22-45.148.10.203:49204.service.
Dec 13 02:50:51.058151 kubelet[1576]: E1213 02:50:51.058114    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:51.083741 sshd[2827]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:50:51.084098 sshd[2827]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked
Dec 13 02:50:51.414710 systemd-networkd[1064]: lxcef6453e54311: Gained IPv6LL
Dec 13 02:50:52.058348 kubelet[1576]: E1213 02:50:52.058315    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:52.397495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691934790.mount: Deactivated successfully.
Dec 13 02:50:53.059137 kubelet[1576]: E1213 02:50:53.059104    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:53.420740 sshd[2827]: Failed password for root from 45.148.10.203 port 49204 ssh2
Dec 13 02:50:54.059333 kubelet[1576]: E1213 02:50:54.059299    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:54.246464 env[1248]: time="2024-12-13T02:50:54.246432343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:54.247262 env[1248]: time="2024-12-13T02:50:54.247243788Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:54.248215 env[1248]: time="2024-12-13T02:50:54.248199939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:54.249144 env[1248]: time="2024-12-13T02:50:54.249129249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:50:54.249626 env[1248]: time="2024-12-13T02:50:54.249608909Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 02:50:54.251100 env[1248]: time="2024-12-13T02:50:54.251080301Z" level=info msg="CreateContainer within sandbox \"516ea8dc69de76bc1e009cab57c3b629b6e6e0527b6ada409964aff6a7894f3b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 02:50:54.256687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934717658.mount: Deactivated successfully.
Dec 13 02:50:54.260435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321117269.mount: Deactivated successfully.
Dec 13 02:50:54.270319 env[1248]: time="2024-12-13T02:50:54.270288943Z" level=info msg="CreateContainer within sandbox \"516ea8dc69de76bc1e009cab57c3b629b6e6e0527b6ada409964aff6a7894f3b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7813f73ed751c864eb5641065ce730bb8be85b26e9d0b6a31612896a65749c5e\""
Dec 13 02:50:54.270864 env[1248]: time="2024-12-13T02:50:54.270851311Z" level=info msg="StartContainer for \"7813f73ed751c864eb5641065ce730bb8be85b26e9d0b6a31612896a65749c5e\""
Dec 13 02:50:54.286495 systemd[1]: Started cri-containerd-7813f73ed751c864eb5641065ce730bb8be85b26e9d0b6a31612896a65749c5e.scope.
Dec 13 02:50:54.312826 env[1248]: time="2024-12-13T02:50:54.312755424Z" level=info msg="StartContainer for \"7813f73ed751c864eb5641065ce730bb8be85b26e9d0b6a31612896a65749c5e\" returns successfully"
Dec 13 02:50:54.514911 kubelet[1576]: I1213 02:50:54.514810    1576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=0.943314202 podStartE2EDuration="5.514774984s" podCreationTimestamp="2024-12-13 02:50:49 +0000 UTC" firstStartedPulling="2024-12-13 02:50:49.678293127 +0000 UTC m=+32.255636481" lastFinishedPulling="2024-12-13 02:50:54.24975391 +0000 UTC m=+36.827097263" observedRunningTime="2024-12-13 02:50:54.514614541 +0000 UTC m=+37.091957903" watchObservedRunningTime="2024-12-13 02:50:54.514774984 +0000 UTC m=+37.092118345"
Dec 13 02:50:55.060386 kubelet[1576]: E1213 02:50:55.060349    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:55.155440 sshd[2827]: Connection closed by authenticating user root 45.148.10.203 port 49204 [preauth]
Dec 13 02:50:55.154155 systemd[1]: sshd@9-139.178.70.104:22-45.148.10.203:49204.service: Deactivated successfully.
Dec 13 02:50:56.060595 kubelet[1576]: E1213 02:50:56.060565    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:57.061053 kubelet[1576]: E1213 02:50:57.061022    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:58.036602 kubelet[1576]: E1213 02:50:58.036570    1576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:58.061756 kubelet[1576]: E1213 02:50:58.061730    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:59.062694 kubelet[1576]: E1213 02:50:59.062657    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:50:59.292308 systemd[1]: Started sshd@10-139.178.70.104:22-45.148.10.203:58574.service.
Dec 13 02:50:59.997302 sshd[2897]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:51:00.063199 kubelet[1576]: E1213 02:51:00.063167    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:01.064000 kubelet[1576]: E1213 02:51:01.063976    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:01.767802 sshd[2897]: Failed password for root from 45.148.10.203 port 58574 ssh2
Dec 13 02:51:02.065045 kubelet[1576]: E1213 02:51:02.065021    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:02.105769 sshd[2897]: Connection closed by authenticating user root 45.148.10.203 port 58574 [preauth]
Dec 13 02:51:02.106405 systemd[1]: sshd@10-139.178.70.104:22-45.148.10.203:58574.service: Deactivated successfully.
Dec 13 02:51:03.066412 kubelet[1576]: E1213 02:51:03.066377    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:03.886811 kubelet[1576]: I1213 02:51:03.886721    1576 topology_manager.go:215] "Topology Admit Handler" podUID="f57abc55-d12d-40a5-8d76-b963884c2e14" podNamespace="default" podName="test-pod-1"
Dec 13 02:51:03.891141 systemd[1]: Created slice kubepods-besteffort-podf57abc55_d12d_40a5_8d76_b963884c2e14.slice.
Dec 13 02:51:03.901429 kubelet[1576]: I1213 02:51:03.901411    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0cd0f03c-de4e-4616-b6aa-c70786a38e93\" (UniqueName: \"kubernetes.io/nfs/f57abc55-d12d-40a5-8d76-b963884c2e14-pvc-0cd0f03c-de4e-4616-b6aa-c70786a38e93\") pod \"test-pod-1\" (UID: \"f57abc55-d12d-40a5-8d76-b963884c2e14\") " pod="default/test-pod-1"
Dec 13 02:51:03.901531 kubelet[1576]: I1213 02:51:03.901435    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxr8\" (UniqueName: \"kubernetes.io/projected/f57abc55-d12d-40a5-8d76-b963884c2e14-kube-api-access-bxxr8\") pod \"test-pod-1\" (UID: \"f57abc55-d12d-40a5-8d76-b963884c2e14\") " pod="default/test-pod-1"
Dec 13 02:51:04.025559 kernel: FS-Cache: Loaded
Dec 13 02:51:04.056829 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 02:51:04.056904 kernel: RPC: Registered udp transport module.
Dec 13 02:51:04.056923 kernel: RPC: Registered tcp transport module.
Dec 13 02:51:04.058154 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 02:51:04.066639 kubelet[1576]: E1213 02:51:04.066625    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:04.127539 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 02:51:04.285920 kernel: NFS: Registering the id_resolver key type
Dec 13 02:51:04.286000 kernel: Key type id_resolver registered
Dec 13 02:51:04.286018 kernel: Key type id_legacy registered
Dec 13 02:51:04.322351 nfsidmap[2918]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 02:51:04.323424 nfsidmap[2919]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 02:51:04.494364 env[1248]: time="2024-12-13T02:51:04.494056008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f57abc55-d12d-40a5-8d76-b963884c2e14,Namespace:default,Attempt:0,}"
Dec 13 02:51:04.561568 systemd-networkd[1064]: lxce50fbb5fe0a8: Link UP
Dec 13 02:51:04.567537 kernel: eth0: renamed from tmpdea43
Dec 13 02:51:04.572462 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:51:04.572498 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce50fbb5fe0a8: link becomes ready
Dec 13 02:51:04.572698 systemd-networkd[1064]: lxce50fbb5fe0a8: Gained carrier
Dec 13 02:51:04.698315 env[1248]: time="2024-12-13T02:51:04.698201021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:51:04.698315 env[1248]: time="2024-12-13T02:51:04.698227221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:51:04.698315 env[1248]: time="2024-12-13T02:51:04.698234018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:51:04.698533 env[1248]: time="2024-12-13T02:51:04.698501558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dea430e022833c0327222c7e9813f41d72258400231d06820aff63a383951299 pid=2958 runtime=io.containerd.runc.v2
Dec 13 02:51:04.706000 systemd[1]: Started cri-containerd-dea430e022833c0327222c7e9813f41d72258400231d06820aff63a383951299.scope.
Dec 13 02:51:04.716958 systemd-resolved[1196]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 02:51:04.735240 env[1248]: time="2024-12-13T02:51:04.735213905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f57abc55-d12d-40a5-8d76-b963884c2e14,Namespace:default,Attempt:0,} returns sandbox id \"dea430e022833c0327222c7e9813f41d72258400231d06820aff63a383951299\""
Dec 13 02:51:04.736310 env[1248]: time="2024-12-13T02:51:04.736297578Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 02:51:05.067089 kubelet[1576]: E1213 02:51:05.067005    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:05.068949 env[1248]: time="2024-12-13T02:51:05.068909514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:51:05.069945 env[1248]: time="2024-12-13T02:51:05.069925449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:51:05.071153 env[1248]: time="2024-12-13T02:51:05.071132715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:51:05.072448 env[1248]: time="2024-12-13T02:51:05.072429176Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:51:05.072953 env[1248]: time="2024-12-13T02:51:05.072936928Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 02:51:05.074016 env[1248]: time="2024-12-13T02:51:05.073996944Z" level=info msg="CreateContainer within sandbox \"dea430e022833c0327222c7e9813f41d72258400231d06820aff63a383951299\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 02:51:05.079514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2888095542.mount: Deactivated successfully.
Dec 13 02:51:05.082108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047864234.mount: Deactivated successfully.
Dec 13 02:51:05.085954 env[1248]: time="2024-12-13T02:51:05.085934252Z" level=info msg="CreateContainer within sandbox \"dea430e022833c0327222c7e9813f41d72258400231d06820aff63a383951299\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c421e3d9f2a0e00942c2d7062a8295763508c72c8e46d242fed819e6bad1d4f3\""
Dec 13 02:51:05.086456 env[1248]: time="2024-12-13T02:51:05.086443653Z" level=info msg="StartContainer for \"c421e3d9f2a0e00942c2d7062a8295763508c72c8e46d242fed819e6bad1d4f3\""
Dec 13 02:51:05.096321 systemd[1]: Started cri-containerd-c421e3d9f2a0e00942c2d7062a8295763508c72c8e46d242fed819e6bad1d4f3.scope.
Dec 13 02:51:05.111796 env[1248]: time="2024-12-13T02:51:05.111770159Z" level=info msg="StartContainer for \"c421e3d9f2a0e00942c2d7062a8295763508c72c8e46d242fed819e6bad1d4f3\" returns successfully"
Dec 13 02:51:06.067839 kubelet[1576]: E1213 02:51:06.067806    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:06.070634 systemd-networkd[1064]: lxce50fbb5fe0a8: Gained IPv6LL
Dec 13 02:51:06.244891 systemd[1]: Started sshd@11-139.178.70.104:22-45.148.10.203:60100.service.
Dec 13 02:51:06.949556 sshd[3052]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:51:07.069222 kubelet[1576]: E1213 02:51:07.069153    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:08.070170 kubelet[1576]: E1213 02:51:08.070136    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:09.015901 sshd[3052]: Failed password for root from 45.148.10.203 port 60100 ssh2
Dec 13 02:51:09.071297 kubelet[1576]: E1213 02:51:09.071276    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:10.071782 kubelet[1576]: E1213 02:51:10.071753    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:11.029854 sshd[3052]: Connection closed by authenticating user root 45.148.10.203 port 60100 [preauth]
Dec 13 02:51:11.030793 systemd[1]: sshd@11-139.178.70.104:22-45.148.10.203:60100.service: Deactivated successfully.
Dec 13 02:51:11.072453 kubelet[1576]: E1213 02:51:11.072416    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:11.663232 kubelet[1576]: I1213 02:51:11.663205    1576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.326018659 podStartE2EDuration="22.663113244s" podCreationTimestamp="2024-12-13 02:50:49 +0000 UTC" firstStartedPulling="2024-12-13 02:51:04.735997203 +0000 UTC m=+47.313340556" lastFinishedPulling="2024-12-13 02:51:05.073091788 +0000 UTC m=+47.650435141" observedRunningTime="2024-12-13 02:51:05.522216054 +0000 UTC m=+48.099559409" watchObservedRunningTime="2024-12-13 02:51:11.663113244 +0000 UTC m=+54.240456612"
Dec 13 02:51:11.686284 env[1248]: time="2024-12-13T02:51:11.686193954Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:51:11.689870 env[1248]: time="2024-12-13T02:51:11.689845990Z" level=info msg="StopContainer for \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\" with timeout 2 (s)"
Dec 13 02:51:11.691598 env[1248]: time="2024-12-13T02:51:11.691576698Z" level=info msg="Stop container \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\" with signal terminated"
Dec 13 02:51:11.695599 systemd-networkd[1064]: lxc_health: Link DOWN
Dec 13 02:51:11.695606 systemd-networkd[1064]: lxc_health: Lost carrier
Dec 13 02:51:11.716850 systemd[1]: cri-containerd-a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125.scope: Deactivated successfully.
Dec 13 02:51:11.717055 systemd[1]: cri-containerd-a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125.scope: Consumed 4.430s CPU time.
Dec 13 02:51:11.729399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125-rootfs.mount: Deactivated successfully.
Dec 13 02:51:11.735304 env[1248]: time="2024-12-13T02:51:11.735263802Z" level=info msg="shim disconnected" id=a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125
Dec 13 02:51:11.735486 env[1248]: time="2024-12-13T02:51:11.735471549Z" level=warning msg="cleaning up after shim disconnected" id=a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125 namespace=k8s.io
Dec 13 02:51:11.735574 env[1248]: time="2024-12-13T02:51:11.735562068Z" level=info msg="cleaning up dead shim"
Dec 13 02:51:11.741335 env[1248]: time="2024-12-13T02:51:11.741305239Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:51:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3098 runtime=io.containerd.runc.v2\n"
Dec 13 02:51:11.742325 env[1248]: time="2024-12-13T02:51:11.742304553Z" level=info msg="StopContainer for \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\" returns successfully"
Dec 13 02:51:11.742950 env[1248]: time="2024-12-13T02:51:11.742932657Z" level=info msg="StopPodSandbox for \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\""
Dec 13 02:51:11.743090 env[1248]: time="2024-12-13T02:51:11.743073860Z" level=info msg="Container to stop \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:51:11.743169 env[1248]: time="2024-12-13T02:51:11.743154051Z" level=info msg="Container to stop \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:51:11.743241 env[1248]: time="2024-12-13T02:51:11.743225903Z" level=info msg="Container to stop \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:51:11.743309 env[1248]: time="2024-12-13T02:51:11.743294730Z" level=info msg="Container to stop \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:51:11.743378 env[1248]: time="2024-12-13T02:51:11.743364359Z" level=info msg="Container to stop \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:51:11.744986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d-shm.mount: Deactivated successfully.
Dec 13 02:51:11.749385 systemd[1]: cri-containerd-41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d.scope: Deactivated successfully.
Dec 13 02:51:11.764501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d-rootfs.mount: Deactivated successfully.
Dec 13 02:51:11.770017 env[1248]: time="2024-12-13T02:51:11.769987968Z" level=info msg="shim disconnected" id=41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d
Dec 13 02:51:11.770100 env[1248]: time="2024-12-13T02:51:11.770021242Z" level=warning msg="cleaning up after shim disconnected" id=41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d namespace=k8s.io
Dec 13 02:51:11.770100 env[1248]: time="2024-12-13T02:51:11.770027855Z" level=info msg="cleaning up dead shim"
Dec 13 02:51:11.774493 env[1248]: time="2024-12-13T02:51:11.774474195Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:51:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3129 runtime=io.containerd.runc.v2\n"
Dec 13 02:51:11.774725 env[1248]: time="2024-12-13T02:51:11.774709687Z" level=info msg="TearDown network for sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" successfully"
Dec 13 02:51:11.774784 env[1248]: time="2024-12-13T02:51:11.774772381Z" level=info msg="StopPodSandbox for \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" returns successfully"
Dec 13 02:51:11.951253 kubelet[1576]: I1213 02:51:11.951162    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-net\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.951253 kubelet[1576]: I1213 02:51:11.951209    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-lib-modules\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.951253 kubelet[1576]: I1213 02:51:11.951229    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-xtables-lock\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952029 kubelet[1576]: I1213 02:51:11.951905    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b841991-65eb-408b-881c-9deda56e2ce4-clustermesh-secrets\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952029 kubelet[1576]: I1213 02:51:11.951925    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-run\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952029 kubelet[1576]: I1213 02:51:11.951939    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cni-path\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952029 kubelet[1576]: I1213 02:51:11.951951    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-hostproc\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952029 kubelet[1576]: I1213 02:51:11.951976    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-cgroup\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952029 kubelet[1576]: I1213 02:51:11.951991    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-etc-cni-netd\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952194 kubelet[1576]: I1213 02:51:11.952008    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-hubble-tls\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952194 kubelet[1576]: I1213 02:51:11.952026    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-config-path\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952194 kubelet[1576]: I1213 02:51:11.952052    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-kernel\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952194 kubelet[1576]: I1213 02:51:11.952073    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-bpf-maps\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952194 kubelet[1576]: I1213 02:51:11.952088    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wljg8\" (UniqueName: \"kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-kube-api-access-wljg8\") pod \"0b841991-65eb-408b-881c-9deda56e2ce4\" (UID: \"0b841991-65eb-408b-881c-9deda56e2ce4\") "
Dec 13 02:51:11.952724 kubelet[1576]: I1213 02:51:11.952362    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.952724 kubelet[1576]: I1213 02:51:11.952402    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.952724 kubelet[1576]: I1213 02:51:11.952417    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.952724 kubelet[1576]: I1213 02:51:11.952563    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.954982 kubelet[1576]: I1213 02:51:11.953082    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.954982 kubelet[1576]: I1213 02:51:11.953104    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.954982 kubelet[1576]: I1213 02:51:11.953169    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.954982 kubelet[1576]: I1213 02:51:11.953191    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.954982 kubelet[1576]: I1213 02:51:11.953205    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.955145 kubelet[1576]: I1213 02:51:11.954578    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:51:11.955145 kubelet[1576]: I1213 02:51:11.954609    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:11.955568 kubelet[1576]: I1213 02:51:11.955552    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:51:11.956288 systemd[1]: var-lib-kubelet-pods-0b841991\x2d65eb\x2d408b\x2d881c\x2d9deda56e2ce4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:51:11.957249 kubelet[1576]: I1213 02:51:11.957237    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-kube-api-access-wljg8" (OuterVolumeSpecName: "kube-api-access-wljg8") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "kube-api-access-wljg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:51:11.958342 kubelet[1576]: I1213 02:51:11.958330    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b841991-65eb-408b-881c-9deda56e2ce4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b841991-65eb-408b-881c-9deda56e2ce4" (UID: "0b841991-65eb-408b-881c-9deda56e2ce4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:51:12.052923 kubelet[1576]: I1213 02:51:12.052895    1576 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-net\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053051 kubelet[1576]: I1213 02:51:12.053041    1576 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-lib-modules\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053131 kubelet[1576]: I1213 02:51:12.053122    1576 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-xtables-lock\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053205 kubelet[1576]: I1213 02:51:12.053196    1576 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b841991-65eb-408b-881c-9deda56e2ce4-clustermesh-secrets\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053274 kubelet[1576]: I1213 02:51:12.053265    1576 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-cgroup\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053343 kubelet[1576]: I1213 02:51:12.053335    1576 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-run\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053411 kubelet[1576]: I1213 02:51:12.053402    1576 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-cni-path\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053493 kubelet[1576]: I1213 02:51:12.053485    1576 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-hostproc\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053582 kubelet[1576]: I1213 02:51:12.053574    1576 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-etc-cni-netd\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053654 kubelet[1576]: I1213 02:51:12.053645    1576 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-hubble-tls\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053721 kubelet[1576]: I1213 02:51:12.053712    1576 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b841991-65eb-408b-881c-9deda56e2ce4-cilium-config-path\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053791 kubelet[1576]: I1213 02:51:12.053782    1576 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-host-proc-sys-kernel\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053858 kubelet[1576]: I1213 02:51:12.053850    1576 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b841991-65eb-408b-881c-9deda56e2ce4-bpf-maps\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.053932 kubelet[1576]: I1213 02:51:12.053923    1576 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wljg8\" (UniqueName: \"kubernetes.io/projected/0b841991-65eb-408b-881c-9deda56e2ce4-kube-api-access-wljg8\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:12.073134 kubelet[1576]: E1213 02:51:12.073108    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:12.447831 systemd[1]: Removed slice kubepods-burstable-pod0b841991_65eb_408b_881c_9deda56e2ce4.slice.
Dec 13 02:51:12.447882 systemd[1]: kubepods-burstable-pod0b841991_65eb_408b_881c_9deda56e2ce4.slice: Consumed 4.500s CPU time.
Dec 13 02:51:12.529106 kubelet[1576]: I1213 02:51:12.529088    1576 scope.go:117] "RemoveContainer" containerID="a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125"
Dec 13 02:51:12.531739 env[1248]: time="2024-12-13T02:51:12.531360525Z" level=info msg="RemoveContainer for \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\""
Dec 13 02:51:12.533491 env[1248]: time="2024-12-13T02:51:12.533404040Z" level=info msg="RemoveContainer for \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\" returns successfully"
Dec 13 02:51:12.533715 kubelet[1576]: I1213 02:51:12.533702    1576 scope.go:117] "RemoveContainer" containerID="f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb"
Dec 13 02:51:12.534571 env[1248]: time="2024-12-13T02:51:12.534388672Z" level=info msg="RemoveContainer for \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\""
Dec 13 02:51:12.535719 env[1248]: time="2024-12-13T02:51:12.535700154Z" level=info msg="RemoveContainer for \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\" returns successfully"
Dec 13 02:51:12.535909 kubelet[1576]: I1213 02:51:12.535897    1576 scope.go:117] "RemoveContainer" containerID="a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81"
Dec 13 02:51:12.537104 env[1248]: time="2024-12-13T02:51:12.537084694Z" level=info msg="RemoveContainer for \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\""
Dec 13 02:51:12.538340 env[1248]: time="2024-12-13T02:51:12.538326499Z" level=info msg="RemoveContainer for \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\" returns successfully"
Dec 13 02:51:12.538592 kubelet[1576]: I1213 02:51:12.538581    1576 scope.go:117] "RemoveContainer" containerID="3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f"
Dec 13 02:51:12.540178 env[1248]: time="2024-12-13T02:51:12.540165176Z" level=info msg="RemoveContainer for \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\""
Dec 13 02:51:12.541180 env[1248]: time="2024-12-13T02:51:12.541166604Z" level=info msg="RemoveContainer for \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\" returns successfully"
Dec 13 02:51:12.544103 kubelet[1576]: I1213 02:51:12.544093    1576 scope.go:117] "RemoveContainer" containerID="d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4"
Dec 13 02:51:12.544764 env[1248]: time="2024-12-13T02:51:12.544751185Z" level=info msg="RemoveContainer for \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\""
Dec 13 02:51:12.545752 env[1248]: time="2024-12-13T02:51:12.545739009Z" level=info msg="RemoveContainer for \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\" returns successfully"
Dec 13 02:51:12.545870 kubelet[1576]: I1213 02:51:12.545862    1576 scope.go:117] "RemoveContainer" containerID="a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125"
Dec 13 02:51:12.546062 env[1248]: time="2024-12-13T02:51:12.546027053Z" level=error msg="ContainerStatus for \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\": not found"
Dec 13 02:51:12.546176 kubelet[1576]: E1213 02:51:12.546158    1576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\": not found" containerID="a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125"
Dec 13 02:51:12.546224 kubelet[1576]: I1213 02:51:12.546208    1576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125"} err="failed to get container status \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4ce6926d7886e742c20c221efcf290ce5711409816404f8adcf88eba5e50125\": not found"
Dec 13 02:51:12.546224 kubelet[1576]: I1213 02:51:12.546215    1576 scope.go:117] "RemoveContainer" containerID="f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb"
Dec 13 02:51:12.546337 env[1248]: time="2024-12-13T02:51:12.546300813Z" level=error msg="ContainerStatus for \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\": not found"
Dec 13 02:51:12.546424 kubelet[1576]: E1213 02:51:12.546416    1576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\": not found" containerID="f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb"
Dec 13 02:51:12.546501 kubelet[1576]: I1213 02:51:12.546494    1576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb"} err="failed to get container status \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3d8296cdc4edd61abff13e1b4355cd0ee1f0334d0ca3e9b547cded788725feb\": not found"
Dec 13 02:51:12.546568 kubelet[1576]: I1213 02:51:12.546561    1576 scope.go:117] "RemoveContainer" containerID="a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81"
Dec 13 02:51:12.546742 env[1248]: time="2024-12-13T02:51:12.546719404Z" level=error msg="ContainerStatus for \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\": not found"
Dec 13 02:51:12.546840 kubelet[1576]: E1213 02:51:12.546829    1576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\": not found" containerID="a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81"
Dec 13 02:51:12.546873 kubelet[1576]: I1213 02:51:12.546844    1576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81"} err="failed to get container status \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3a2a664a2436ae710ef4d2a4ce2f2e74bd7e1ac0e9926761733416ef24a8d81\": not found"
Dec 13 02:51:12.546873 kubelet[1576]: I1213 02:51:12.546849    1576 scope.go:117] "RemoveContainer" containerID="3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f"
Dec 13 02:51:12.547002 env[1248]: time="2024-12-13T02:51:12.546980449Z" level=error msg="ContainerStatus for \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\": not found"
Dec 13 02:51:12.547097 kubelet[1576]: E1213 02:51:12.547085    1576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\": not found" containerID="3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f"
Dec 13 02:51:12.547133 kubelet[1576]: I1213 02:51:12.547104    1576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f"} err="failed to get container status \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3905773c3c3da365d84e5d8251fc0fd0cd1866b2f5fe4c0a00e46ee3b60fe68f\": not found"
Dec 13 02:51:12.547133 kubelet[1576]: I1213 02:51:12.547109    1576 scope.go:117] "RemoveContainer" containerID="d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4"
Dec 13 02:51:12.547253 env[1248]: time="2024-12-13T02:51:12.547231839Z" level=error msg="ContainerStatus for \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\": not found"
Dec 13 02:51:12.547346 kubelet[1576]: E1213 02:51:12.547335    1576 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\": not found" containerID="d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4"
Dec 13 02:51:12.547384 kubelet[1576]: I1213 02:51:12.547355    1576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4"} err="failed to get container status \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9bc4724ce9071d79645dd71f212dada80d395f024a61f88bf925cc49c349ef4\": not found"
Dec 13 02:51:12.673110 systemd[1]: var-lib-kubelet-pods-0b841991\x2d65eb\x2d408b\x2d881c\x2d9deda56e2ce4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwljg8.mount: Deactivated successfully.
Dec 13 02:51:12.673185 systemd[1]: var-lib-kubelet-pods-0b841991\x2d65eb\x2d408b\x2d881c\x2d9deda56e2ce4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:51:13.073894 kubelet[1576]: E1213 02:51:13.073868    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:13.116446 kubelet[1576]: E1213 02:51:13.116434    1576 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:51:14.074998 kubelet[1576]: E1213 02:51:14.074951    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:14.084665 kubelet[1576]: I1213 02:51:14.084636    1576 topology_manager.go:215] "Topology Admit Handler" podUID="9530807d-c94a-45cf-a35d-e3d92cc8c9c5" podNamespace="kube-system" podName="cilium-tpm62"
Dec 13 02:51:14.084665 kubelet[1576]: E1213 02:51:14.084674    1576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" containerName="mount-cgroup"
Dec 13 02:51:14.084839 kubelet[1576]: E1213 02:51:14.084682    1576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" containerName="apply-sysctl-overwrites"
Dec 13 02:51:14.084839 kubelet[1576]: E1213 02:51:14.084687    1576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" containerName="mount-bpf-fs"
Dec 13 02:51:14.084839 kubelet[1576]: E1213 02:51:14.084690    1576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" containerName="cilium-agent"
Dec 13 02:51:14.084839 kubelet[1576]: E1213 02:51:14.084694    1576 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" containerName="clean-cilium-state"
Dec 13 02:51:14.084839 kubelet[1576]: I1213 02:51:14.084715    1576 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" containerName="cilium-agent"
Dec 13 02:51:14.087964 kubelet[1576]: W1213 02:51:14.087943    1576 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.67.124.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.088104 kubelet[1576]: E1213 02:51:14.088094    1576 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.67.124.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.088126 systemd[1]: Created slice kubepods-burstable-pod9530807d_c94a_45cf_a35d_e3d92cc8c9c5.slice.
Dec 13 02:51:14.088707 kubelet[1576]: W1213 02:51:14.088694    1576 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.67.124.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.088781 kubelet[1576]: E1213 02:51:14.088771    1576 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.67.124.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.088883 kubelet[1576]: W1213 02:51:14.088873    1576 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.67.124.136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.088946 kubelet[1576]: E1213 02:51:14.088937    1576 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.67.124.136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.089041 kubelet[1576]: W1213 02:51:14.089031    1576 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.67.124.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.089105 kubelet[1576]: E1213 02:51:14.089096    1576 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.67.124.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.67.124.136' and this object
Dec 13 02:51:14.090191 kubelet[1576]: I1213 02:51:14.090178    1576 topology_manager.go:215] "Topology Admit Handler" podUID="f7328223-3fff-4a6d-864c-a9cbb8a8886b" podNamespace="kube-system" podName="cilium-operator-5cc964979-xngxg"
Dec 13 02:51:14.098082 systemd[1]: Created slice kubepods-besteffort-podf7328223_3fff_4a6d_864c_a9cbb8a8886b.slice.
Dec 13 02:51:14.165334 kubelet[1576]: I1213 02:51:14.165299    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cni-path\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165540 kubelet[1576]: I1213 02:51:14.165507    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7328223-3fff-4a6d-864c-a9cbb8a8886b-cilium-config-path\") pod \"cilium-operator-5cc964979-xngxg\" (UID: \"f7328223-3fff-4a6d-864c-a9cbb8a8886b\") " pod="kube-system/cilium-operator-5cc964979-xngxg"
Dec 13 02:51:14.165648 kubelet[1576]: I1213 02:51:14.165627    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-kernel\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165707 kubelet[1576]: I1213 02:51:14.165666    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4css7\" (UniqueName: \"kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-kube-api-access-4css7\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165707 kubelet[1576]: I1213 02:51:14.165680    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-bpf-maps\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165707 kubelet[1576]: I1213 02:51:14.165692    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-lib-modules\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165707 kubelet[1576]: I1213 02:51:14.165703    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-net\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165844 kubelet[1576]: I1213 02:51:14.165716    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-config-path\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165844 kubelet[1576]: I1213 02:51:14.165727    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-run\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165844 kubelet[1576]: I1213 02:51:14.165737    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hostproc\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.165844 kubelet[1576]: I1213 02:51:14.165748    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7fw\" (UniqueName: \"kubernetes.io/projected/f7328223-3fff-4a6d-864c-a9cbb8a8886b-kube-api-access-jm7fw\") pod \"cilium-operator-5cc964979-xngxg\" (UID: \"f7328223-3fff-4a6d-864c-a9cbb8a8886b\") " pod="kube-system/cilium-operator-5cc964979-xngxg"
Dec 13 02:51:14.165844 kubelet[1576]: I1213 02:51:14.165759    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hubble-tls\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.166022 kubelet[1576]: I1213 02:51:14.165772    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-cgroup\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.166022 kubelet[1576]: I1213 02:51:14.165783    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-xtables-lock\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.166022 kubelet[1576]: I1213 02:51:14.165796    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-ipsec-secrets\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.166022 kubelet[1576]: I1213 02:51:14.165807    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-clustermesh-secrets\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.166022 kubelet[1576]: I1213 02:51:14.165824    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-etc-cni-netd\") pod \"cilium-tpm62\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") " pod="kube-system/cilium-tpm62"
Dec 13 02:51:14.211235 kubelet[1576]: E1213 02:51:14.211195    1576 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-4css7 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-tpm62" podUID="9530807d-c94a-45cf-a35d-e3d92cc8c9c5"
Dec 13 02:51:14.443120 kubelet[1576]: I1213 02:51:14.443060    1576 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0b841991-65eb-408b-881c-9deda56e2ce4" path="/var/lib/kubelet/pods/0b841991-65eb-408b-881c-9deda56e2ce4/volumes"
Dec 13 02:51:14.567849 kubelet[1576]: I1213 02:51:14.567826    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cni-path\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.568024 kubelet[1576]: I1213 02:51:14.568001    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cni-path" (OuterVolumeSpecName: "cni-path") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568105 kubelet[1576]: I1213 02:51:14.568092    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-etc-cni-netd\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.568203 kubelet[1576]: I1213 02:51:14.568192    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-net\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.568268 kubelet[1576]: I1213 02:51:14.568250    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568268 kubelet[1576]: I1213 02:51:14.568099    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568361 kubelet[1576]: I1213 02:51:14.568344    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568416 kubelet[1576]: I1213 02:51:14.568404    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-run\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.568509 kubelet[1576]: I1213 02:51:14.568498    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-xtables-lock\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.568603 kubelet[1576]: I1213 02:51:14.568593    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-lib-modules\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.568744 kubelet[1576]: I1213 02:51:14.568733    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hostproc\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.568809 kubelet[1576]: I1213 02:51:14.568792    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hostproc" (OuterVolumeSpecName: "hostproc") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568809 kubelet[1576]: I1213 02:51:14.568549    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568809 kubelet[1576]: I1213 02:51:14.568657    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568902 kubelet[1576]: I1213 02:51:14.568885    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.568957 kubelet[1576]: I1213 02:51:14.568947    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-cgroup\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.569038 kubelet[1576]: I1213 02:51:14.569029    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-kernel\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.569113 kubelet[1576]: I1213 02:51:14.569095    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.569113 kubelet[1576]: I1213 02:51:14.569101    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-bpf-maps\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.569182 kubelet[1576]: I1213 02:51:14.569135    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4css7\" (UniqueName: \"kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-kube-api-access-4css7\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.569213 kubelet[1576]: I1213 02:51:14.569187    1576 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-lib-modules\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569213 kubelet[1576]: I1213 02:51:14.569197    1576 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hostproc\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569213 kubelet[1576]: I1213 02:51:14.569205    1576 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-cgroup\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569213 kubelet[1576]: I1213 02:51:14.569213    1576 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-kernel\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569310 kubelet[1576]: I1213 02:51:14.569221    1576 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cni-path\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569310 kubelet[1576]: I1213 02:51:14.569228    1576 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-etc-cni-netd\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569310 kubelet[1576]: I1213 02:51:14.569236    1576 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-host-proc-sys-net\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569310 kubelet[1576]: I1213 02:51:14.569243    1576 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-run\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569310 kubelet[1576]: I1213 02:51:14.569250    1576 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-xtables-lock\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.569459 kubelet[1576]: I1213 02:51:14.569444    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:51:14.572407 systemd[1]: var-lib-kubelet-pods-9530807d\x2dc94a\x2d45cf\x2da35d\x2de3d92cc8c9c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4css7.mount: Deactivated successfully.
Dec 13 02:51:14.573196 kubelet[1576]: I1213 02:51:14.573078    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-kube-api-access-4css7" (OuterVolumeSpecName: "kube-api-access-4css7") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "kube-api-access-4css7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:51:14.670443 kubelet[1576]: I1213 02:51:14.670414    1576 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4css7\" (UniqueName: \"kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-kube-api-access-4css7\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.670443 kubelet[1576]: I1213 02:51:14.670441    1576 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-bpf-maps\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:14.972153 kubelet[1576]: I1213 02:51:14.972130    1576 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-ipsec-secrets\") pod \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\" (UID: \"9530807d-c94a-45cf-a35d-e3d92cc8c9c5\") "
Dec 13 02:51:14.974899 systemd[1]: var-lib-kubelet-pods-9530807d\x2dc94a\x2d45cf\x2da35d\x2de3d92cc8c9c5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:51:14.975718 kubelet[1576]: I1213 02:51:14.975694    1576 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9530807d-c94a-45cf-a35d-e3d92cc8c9c5" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:51:15.072666 kubelet[1576]: I1213 02:51:15.072642    1576 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-ipsec-secrets\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:15.075954 kubelet[1576]: E1213 02:51:15.075939    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:15.168203 systemd[1]: Started sshd@12-139.178.70.104:22-45.148.10.203:44348.service.
Dec 13 02:51:15.266768 kubelet[1576]: E1213 02:51:15.266531    1576 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Dec 13 02:51:15.266768 kubelet[1576]: E1213 02:51:15.266605    1576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7328223-3fff-4a6d-864c-a9cbb8a8886b-cilium-config-path podName:f7328223-3fff-4a6d-864c-a9cbb8a8886b nodeName:}" failed. No retries permitted until 2024-12-13 02:51:15.766586879 +0000 UTC m=+58.343930241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/f7328223-3fff-4a6d-864c-a9cbb8a8886b-cilium-config-path") pod "cilium-operator-5cc964979-xngxg" (UID: "f7328223-3fff-4a6d-864c-a9cbb8a8886b") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 02:51:15.267328 kubelet[1576]: E1213 02:51:15.267315    1576 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Dec 13 02:51:15.267417 kubelet[1576]: E1213 02:51:15.267406    1576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-clustermesh-secrets podName:9530807d-c94a-45cf-a35d-e3d92cc8c9c5 nodeName:}" failed. No retries permitted until 2024-12-13 02:51:15.767396104 +0000 UTC m=+58.344739462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-clustermesh-secrets") pod "cilium-tpm62" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5") : failed to sync secret cache: timed out waiting for the condition
Dec 13 02:51:15.267551 kubelet[1576]: E1213 02:51:15.267349    1576 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Dec 13 02:51:15.267636 kubelet[1576]: E1213 02:51:15.267622    1576 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-tpm62: failed to sync secret cache: timed out waiting for the condition
Dec 13 02:51:15.267699 kubelet[1576]: E1213 02:51:15.267365    1576 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Dec 13 02:51:15.267742 kubelet[1576]: E1213 02:51:15.267719    1576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-config-path podName:9530807d-c94a-45cf-a35d-e3d92cc8c9c5 nodeName:}" failed. No retries permitted until 2024-12-13 02:51:15.767709445 +0000 UTC m=+58.345052809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-config-path") pod "cilium-tpm62" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5") : failed to sync configmap cache: timed out waiting for the condition
Dec 13 02:51:15.267826 kubelet[1576]: E1213 02:51:15.267817    1576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hubble-tls podName:9530807d-c94a-45cf-a35d-e3d92cc8c9c5 nodeName:}" failed. No retries permitted until 2024-12-13 02:51:15.767807157 +0000 UTC m=+58.345150515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hubble-tls") pod "cilium-tpm62" (UID: "9530807d-c94a-45cf-a35d-e3d92cc8c9c5") : failed to sync secret cache: timed out waiting for the condition
Dec 13 02:51:15.536309 systemd[1]: Removed slice kubepods-burstable-pod9530807d_c94a_45cf_a35d_e3d92cc8c9c5.slice.
Dec 13 02:51:15.559934 kubelet[1576]: I1213 02:51:15.559908    1576 topology_manager.go:215] "Topology Admit Handler" podUID="98f08bb0-a768-47b2-9664-c90bf10edefc" podNamespace="kube-system" podName="cilium-jnvkg"
Dec 13 02:51:15.564175 systemd[1]: Created slice kubepods-burstable-pod98f08bb0_a768_47b2_9664_c90bf10edefc.slice.
Dec 13 02:51:15.574406 kubelet[1576]: I1213 02:51:15.574389    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-cilium-cgroup\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.574537 kubelet[1576]: I1213 02:51:15.574529    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98f08bb0-a768-47b2-9664-c90bf10edefc-clustermesh-secrets\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.574606 kubelet[1576]: I1213 02:51:15.574599    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-cilium-run\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.574680 kubelet[1576]: I1213 02:51:15.574673    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-lib-modules\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.574748 kubelet[1576]: I1213 02:51:15.574742    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68mm2\" (UniqueName: \"kubernetes.io/projected/98f08bb0-a768-47b2-9664-c90bf10edefc-kube-api-access-68mm2\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.574815 kubelet[1576]: I1213 02:51:15.574808    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-hostproc\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.574883 kubelet[1576]: I1213 02:51:15.574877    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98f08bb0-a768-47b2-9664-c90bf10edefc-cilium-config-path\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.574951 kubelet[1576]: I1213 02:51:15.574945    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-host-proc-sys-kernel\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575025 kubelet[1576]: I1213 02:51:15.575019    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-xtables-lock\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575091 kubelet[1576]: I1213 02:51:15.575085    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-etc-cni-netd\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575156 kubelet[1576]: I1213 02:51:15.575150    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-host-proc-sys-net\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575220 kubelet[1576]: I1213 02:51:15.575214    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-bpf-maps\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575288 kubelet[1576]: I1213 02:51:15.575282    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98f08bb0-a768-47b2-9664-c90bf10edefc-cni-path\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575356 kubelet[1576]: I1213 02:51:15.575350    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98f08bb0-a768-47b2-9664-c90bf10edefc-cilium-ipsec-secrets\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575424 kubelet[1576]: I1213 02:51:15.575418    1576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98f08bb0-a768-47b2-9664-c90bf10edefc-hubble-tls\") pod \"cilium-jnvkg\" (UID: \"98f08bb0-a768-47b2-9664-c90bf10edefc\") " pod="kube-system/cilium-jnvkg"
Dec 13 02:51:15.575490 kubelet[1576]: I1213 02:51:15.575484    1576 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-clustermesh-secrets\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:15.575553 kubelet[1576]: I1213 02:51:15.575546    1576 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-hubble-tls\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:15.575605 kubelet[1576]: I1213 02:51:15.575599    1576 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9530807d-c94a-45cf-a35d-e3d92cc8c9c5-cilium-config-path\") on node \"10.67.124.136\" DevicePath \"\""
Dec 13 02:51:15.866162 sshd[3150]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:51:15.870380 env[1248]: time="2024-12-13T02:51:15.870345016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jnvkg,Uid:98f08bb0-a768-47b2-9664-c90bf10edefc,Namespace:kube-system,Attempt:0,}"
Dec 13 02:51:15.878458 env[1248]: time="2024-12-13T02:51:15.878348647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:51:15.878458 env[1248]: time="2024-12-13T02:51:15.878371051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:51:15.878458 env[1248]: time="2024-12-13T02:51:15.878377927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:51:15.878619 env[1248]: time="2024-12-13T02:51:15.878503624Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52 pid=3164 runtime=io.containerd.runc.v2
Dec 13 02:51:15.896330 systemd[1]: Started cri-containerd-616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52.scope.
Dec 13 02:51:15.901308 env[1248]: time="2024-12-13T02:51:15.901241436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xngxg,Uid:f7328223-3fff-4a6d-864c-a9cbb8a8886b,Namespace:kube-system,Attempt:0,}"
Dec 13 02:51:15.914225 env[1248]: time="2024-12-13T02:51:15.914124829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:51:15.914225 env[1248]: time="2024-12-13T02:51:15.914148998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:51:15.914225 env[1248]: time="2024-12-13T02:51:15.914156118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:51:15.914418 env[1248]: time="2024-12-13T02:51:15.914386497Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3dbc06b979f09c33346b8776edda3a58f90a9c400bbe3efd74cb1947ee1bde9 pid=3201 runtime=io.containerd.runc.v2
Dec 13 02:51:15.915562 env[1248]: time="2024-12-13T02:51:15.915540711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jnvkg,Uid:98f08bb0-a768-47b2-9664-c90bf10edefc,Namespace:kube-system,Attempt:0,} returns sandbox id \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\""
Dec 13 02:51:15.917062 env[1248]: time="2024-12-13T02:51:15.917047553Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:51:15.924597 systemd[1]: Started cri-containerd-c3dbc06b979f09c33346b8776edda3a58f90a9c400bbe3efd74cb1947ee1bde9.scope.
Dec 13 02:51:15.928239 env[1248]: time="2024-12-13T02:51:15.927514482Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"154f656ce4c056bab666a74065653e7b917d9d7cdb4ed4370b005f282772db4b\""
Dec 13 02:51:15.928647 env[1248]: time="2024-12-13T02:51:15.928633150Z" level=info msg="StartContainer for \"154f656ce4c056bab666a74065653e7b917d9d7cdb4ed4370b005f282772db4b\""
Dec 13 02:51:15.939821 systemd[1]: Started cri-containerd-154f656ce4c056bab666a74065653e7b917d9d7cdb4ed4370b005f282772db4b.scope.
Dec 13 02:51:15.968356 env[1248]: time="2024-12-13T02:51:15.968329300Z" level=info msg="StartContainer for \"154f656ce4c056bab666a74065653e7b917d9d7cdb4ed4370b005f282772db4b\" returns successfully"
Dec 13 02:51:15.971139 env[1248]: time="2024-12-13T02:51:15.971118554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xngxg,Uid:f7328223-3fff-4a6d-864c-a9cbb8a8886b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3dbc06b979f09c33346b8776edda3a58f90a9c400bbe3efd74cb1947ee1bde9\""
Dec 13 02:51:15.972104 env[1248]: time="2024-12-13T02:51:15.972090125Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 02:51:15.982989 systemd[1]: cri-containerd-154f656ce4c056bab666a74065653e7b917d9d7cdb4ed4370b005f282772db4b.scope: Deactivated successfully.
Dec 13 02:51:16.003821 env[1248]: time="2024-12-13T02:51:16.003790494Z" level=info msg="shim disconnected" id=154f656ce4c056bab666a74065653e7b917d9d7cdb4ed4370b005f282772db4b
Dec 13 02:51:16.003821 env[1248]: time="2024-12-13T02:51:16.003821530Z" level=warning msg="cleaning up after shim disconnected" id=154f656ce4c056bab666a74065653e7b917d9d7cdb4ed4370b005f282772db4b namespace=k8s.io
Dec 13 02:51:16.003821 env[1248]: time="2024-12-13T02:51:16.003827765Z" level=info msg="cleaning up dead shim"
Dec 13 02:51:16.008196 env[1248]: time="2024-12-13T02:51:16.008181380Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:51:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3291 runtime=io.containerd.runc.v2\n"
Dec 13 02:51:16.077226 kubelet[1576]: E1213 02:51:16.077191    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:16.443542 kubelet[1576]: I1213 02:51:16.443499    1576 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9530807d-c94a-45cf-a35d-e3d92cc8c9c5" path="/var/lib/kubelet/pods/9530807d-c94a-45cf-a35d-e3d92cc8c9c5/volumes"
Dec 13 02:51:16.540772 env[1248]: time="2024-12-13T02:51:16.540749405Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:51:16.546113 env[1248]: time="2024-12-13T02:51:16.546081466Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ebeb8ab16cc98a055a5f8b353354bf360321c8469df68ffa36eac7595726e138\""
Dec 13 02:51:16.546684 env[1248]: time="2024-12-13T02:51:16.546668761Z" level=info msg="StartContainer for \"ebeb8ab16cc98a055a5f8b353354bf360321c8469df68ffa36eac7595726e138\""
Dec 13 02:51:16.555809 systemd[1]: Started cri-containerd-ebeb8ab16cc98a055a5f8b353354bf360321c8469df68ffa36eac7595726e138.scope.
Dec 13 02:51:16.572414 env[1248]: time="2024-12-13T02:51:16.572386563Z" level=info msg="StartContainer for \"ebeb8ab16cc98a055a5f8b353354bf360321c8469df68ffa36eac7595726e138\" returns successfully"
Dec 13 02:51:16.584792 systemd[1]: cri-containerd-ebeb8ab16cc98a055a5f8b353354bf360321c8469df68ffa36eac7595726e138.scope: Deactivated successfully.
Dec 13 02:51:16.598008 env[1248]: time="2024-12-13T02:51:16.597982721Z" level=info msg="shim disconnected" id=ebeb8ab16cc98a055a5f8b353354bf360321c8469df68ffa36eac7595726e138
Dec 13 02:51:16.598144 env[1248]: time="2024-12-13T02:51:16.598133976Z" level=warning msg="cleaning up after shim disconnected" id=ebeb8ab16cc98a055a5f8b353354bf360321c8469df68ffa36eac7595726e138 namespace=k8s.io
Dec 13 02:51:16.598199 env[1248]: time="2024-12-13T02:51:16.598189445Z" level=info msg="cleaning up dead shim"
Dec 13 02:51:16.602451 env[1248]: time="2024-12-13T02:51:16.602433463Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:51:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3352 runtime=io.containerd.runc.v2\n"
Dec 13 02:51:17.077702 kubelet[1576]: E1213 02:51:17.077641    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:17.190505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010941129.mount: Deactivated successfully.
Dec 13 02:51:17.539340 env[1248]: time="2024-12-13T02:51:17.539279325Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:51:17.546495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289255260.mount: Deactivated successfully.
Dec 13 02:51:17.569466 env[1248]: time="2024-12-13T02:51:17.569441296Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c238e7fc7a96dfb90d463532ccc58e6f6aa7e542cd0238a2933aaa8417ba72d\""
Dec 13 02:51:17.569996 env[1248]: time="2024-12-13T02:51:17.569983395Z" level=info msg="StartContainer for \"7c238e7fc7a96dfb90d463532ccc58e6f6aa7e542cd0238a2933aaa8417ba72d\""
Dec 13 02:51:17.601136 systemd[1]: Started cri-containerd-7c238e7fc7a96dfb90d463532ccc58e6f6aa7e542cd0238a2933aaa8417ba72d.scope.
Dec 13 02:51:17.632024 env[1248]: time="2024-12-13T02:51:17.631995343Z" level=info msg="StartContainer for \"7c238e7fc7a96dfb90d463532ccc58e6f6aa7e542cd0238a2933aaa8417ba72d\" returns successfully"
Dec 13 02:51:17.636352 systemd[1]: cri-containerd-7c238e7fc7a96dfb90d463532ccc58e6f6aa7e542cd0238a2933aaa8417ba72d.scope: Deactivated successfully.
Dec 13 02:51:17.821968 env[1248]: time="2024-12-13T02:51:17.821935738Z" level=info msg="shim disconnected" id=7c238e7fc7a96dfb90d463532ccc58e6f6aa7e542cd0238a2933aaa8417ba72d
Dec 13 02:51:17.821968 env[1248]: time="2024-12-13T02:51:17.821967028Z" level=warning msg="cleaning up after shim disconnected" id=7c238e7fc7a96dfb90d463532ccc58e6f6aa7e542cd0238a2933aaa8417ba72d namespace=k8s.io
Dec 13 02:51:17.822126 env[1248]: time="2024-12-13T02:51:17.821975652Z" level=info msg="cleaning up dead shim"
Dec 13 02:51:17.827661 env[1248]: time="2024-12-13T02:51:17.827623417Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:51:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3411 runtime=io.containerd.runc.v2\n"
Dec 13 02:51:17.831621 sshd[3150]: Failed password for root from 45.148.10.203 port 44348 ssh2
Dec 13 02:51:17.846530 env[1248]: time="2024-12-13T02:51:17.846494237Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:51:17.847083 env[1248]: time="2024-12-13T02:51:17.847065762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:51:17.847899 env[1248]: time="2024-12-13T02:51:17.847885295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:51:17.848240 env[1248]: time="2024-12-13T02:51:17.848224219Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:51:17.849501 env[1248]: time="2024-12-13T02:51:17.849481169Z" level=info msg="CreateContainer within sandbox \"c3dbc06b979f09c33346b8776edda3a58f90a9c400bbe3efd74cb1947ee1bde9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:51:17.858780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935090479.mount: Deactivated successfully.
Dec 13 02:51:17.862425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143256319.mount: Deactivated successfully.
Dec 13 02:51:17.869324 env[1248]: time="2024-12-13T02:51:17.869283143Z" level=info msg="CreateContainer within sandbox \"c3dbc06b979f09c33346b8776edda3a58f90a9c400bbe3efd74cb1947ee1bde9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9cb3f8927c344ac91063bb722028ecc038fea56d73c8645900de0c01468a04a9\""
Dec 13 02:51:17.870136 env[1248]: time="2024-12-13T02:51:17.870114738Z" level=info msg="StartContainer for \"9cb3f8927c344ac91063bb722028ecc038fea56d73c8645900de0c01468a04a9\""
Dec 13 02:51:17.889512 systemd[1]: Started cri-containerd-9cb3f8927c344ac91063bb722028ecc038fea56d73c8645900de0c01468a04a9.scope.
Dec 13 02:51:17.928170 env[1248]: time="2024-12-13T02:51:17.928133974Z" level=info msg="StartContainer for \"9cb3f8927c344ac91063bb722028ecc038fea56d73c8645900de0c01468a04a9\" returns successfully"
Dec 13 02:51:17.974078 sshd[3150]: Connection closed by authenticating user root 45.148.10.203 port 44348 [preauth]
Dec 13 02:51:17.974738 systemd[1]: sshd@12-139.178.70.104:22-45.148.10.203:44348.service: Deactivated successfully.
Dec 13 02:51:18.036452 kubelet[1576]: E1213 02:51:18.036395    1576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:18.058064 env[1248]: time="2024-12-13T02:51:18.058039488Z" level=info msg="StopPodSandbox for \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\""
Dec 13 02:51:18.058250 env[1248]: time="2024-12-13T02:51:18.058216847Z" level=info msg="TearDown network for sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" successfully"
Dec 13 02:51:18.058375 env[1248]: time="2024-12-13T02:51:18.058361315Z" level=info msg="StopPodSandbox for \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" returns successfully"
Dec 13 02:51:18.058819 env[1248]: time="2024-12-13T02:51:18.058796087Z" level=info msg="RemovePodSandbox for \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\""
Dec 13 02:51:18.058874 env[1248]: time="2024-12-13T02:51:18.058818545Z" level=info msg="Forcibly stopping sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\""
Dec 13 02:51:18.058907 env[1248]: time="2024-12-13T02:51:18.058870829Z" level=info msg="TearDown network for sandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" successfully"
Dec 13 02:51:18.060904 env[1248]: time="2024-12-13T02:51:18.060883046Z" level=info msg="RemovePodSandbox \"41d8f3ee744b0c709d68f8b88dc7dc20f303b4f0b405ae1f101689cd150cf41d\" returns successfully"
Dec 13 02:51:18.078412 kubelet[1576]: E1213 02:51:18.078349    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:18.117651 kubelet[1576]: E1213 02:51:18.117633    1576 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:51:18.542109 env[1248]: time="2024-12-13T02:51:18.542030390Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:51:18.548763 env[1248]: time="2024-12-13T02:51:18.548709702Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"819d922c8de0f644752bf2498160b4331999e8c36ae946e72c0669ce25e80843\""
Dec 13 02:51:18.549413 env[1248]: time="2024-12-13T02:51:18.549384962Z" level=info msg="StartContainer for \"819d922c8de0f644752bf2498160b4331999e8c36ae946e72c0669ce25e80843\""
Dec 13 02:51:18.564410 systemd[1]: Started cri-containerd-819d922c8de0f644752bf2498160b4331999e8c36ae946e72c0669ce25e80843.scope.
Dec 13 02:51:18.568033 kubelet[1576]: I1213 02:51:18.565442    1576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-xngxg" podStartSLOduration=2.688727755 podStartE2EDuration="4.56539682s" podCreationTimestamp="2024-12-13 02:51:14 +0000 UTC" firstStartedPulling="2024-12-13 02:51:15.971812218 +0000 UTC m=+58.549155567" lastFinishedPulling="2024-12-13 02:51:17.848481275 +0000 UTC m=+60.425824632" observedRunningTime="2024-12-13 02:51:18.561961804 +0000 UTC m=+61.139305165" watchObservedRunningTime="2024-12-13 02:51:18.56539682 +0000 UTC m=+61.142740182"
Dec 13 02:51:18.588650 env[1248]: time="2024-12-13T02:51:18.588624346Z" level=info msg="StartContainer for \"819d922c8de0f644752bf2498160b4331999e8c36ae946e72c0669ce25e80843\" returns successfully"
Dec 13 02:51:18.590514 systemd[1]: cri-containerd-819d922c8de0f644752bf2498160b4331999e8c36ae946e72c0669ce25e80843.scope: Deactivated successfully.
Dec 13 02:51:18.607396 env[1248]: time="2024-12-13T02:51:18.607361243Z" level=info msg="shim disconnected" id=819d922c8de0f644752bf2498160b4331999e8c36ae946e72c0669ce25e80843
Dec 13 02:51:18.607572 env[1248]: time="2024-12-13T02:51:18.607558491Z" level=warning msg="cleaning up after shim disconnected" id=819d922c8de0f644752bf2498160b4331999e8c36ae946e72c0669ce25e80843 namespace=k8s.io
Dec 13 02:51:18.607636 env[1248]: time="2024-12-13T02:51:18.607626278Z" level=info msg="cleaning up dead shim"
Dec 13 02:51:18.613469 env[1248]: time="2024-12-13T02:51:18.613430538Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:51:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3509 runtime=io.containerd.runc.v2\n"
Dec 13 02:51:19.079430 kubelet[1576]: E1213 02:51:19.079356    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:19.088351 kubelet[1576]: I1213 02:51:19.088245    1576 setters.go:568] "Node became not ready" node="10.67.124.136" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:51:19Z","lastTransitionTime":"2024-12-13T02:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:51:19.546573 env[1248]: time="2024-12-13T02:51:19.546327389Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:51:19.560709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851221682.mount: Deactivated successfully.
Dec 13 02:51:19.569605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116390668.mount: Deactivated successfully.
Dec 13 02:51:19.573749 env[1248]: time="2024-12-13T02:51:19.573684593Z" level=info msg="CreateContainer within sandbox \"616396897a2b7f5e3a23246803dd3245b28482c323680fcd2c7a70070bba9f52\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f6a82dda0610cdf2f5a1b2ec02f50cc1cf856fc29fe632dcddd46fb7a618db9\""
Dec 13 02:51:19.574211 env[1248]: time="2024-12-13T02:51:19.574191422Z" level=info msg="StartContainer for \"9f6a82dda0610cdf2f5a1b2ec02f50cc1cf856fc29fe632dcddd46fb7a618db9\""
Dec 13 02:51:19.584114 systemd[1]: Started cri-containerd-9f6a82dda0610cdf2f5a1b2ec02f50cc1cf856fc29fe632dcddd46fb7a618db9.scope.
Dec 13 02:51:19.605040 env[1248]: time="2024-12-13T02:51:19.605008789Z" level=info msg="StartContainer for \"9f6a82dda0610cdf2f5a1b2ec02f50cc1cf856fc29fe632dcddd46fb7a618db9\" returns successfully"
Dec 13 02:51:20.080377 kubelet[1576]: E1213 02:51:20.080348    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:20.100630 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:51:20.559020 kubelet[1576]: I1213 02:51:20.558990    1576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jnvkg" podStartSLOduration=5.558962076 podStartE2EDuration="5.558962076s" podCreationTimestamp="2024-12-13 02:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:51:20.558645077 +0000 UTC m=+63.135988448" watchObservedRunningTime="2024-12-13 02:51:20.558962076 +0000 UTC m=+63.136305438"
Dec 13 02:51:21.081322 kubelet[1576]: E1213 02:51:21.081296    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:22.082655 kubelet[1576]: E1213 02:51:22.082633    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:22.113274 systemd[1]: Started sshd@13-139.178.70.104:22-45.148.10.203:34270.service.
Dec 13 02:51:22.306538 systemd-networkd[1064]: lxc_health: Link UP
Dec 13 02:51:22.314257 systemd-networkd[1064]: lxc_health: Gained carrier
Dec 13 02:51:22.314539 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:51:22.467580 systemd[1]: run-containerd-runc-k8s.io-9f6a82dda0610cdf2f5a1b2ec02f50cc1cf856fc29fe632dcddd46fb7a618db9-runc.hYo7xs.mount: Deactivated successfully.
Dec 13 02:51:22.816917 sshd[3991]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.148.10.203  user=root
Dec 13 02:51:23.083918 kubelet[1576]: E1213 02:51:23.083811    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:23.606706 systemd-networkd[1064]: lxc_health: Gained IPv6LL
Dec 13 02:51:24.084010 kubelet[1576]: E1213 02:51:24.083975    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:24.411709 sshd[3991]: Failed password for root from 45.148.10.203 port 34270 ssh2
Dec 13 02:51:24.578827 systemd[1]: run-containerd-runc-k8s.io-9f6a82dda0610cdf2f5a1b2ec02f50cc1cf856fc29fe632dcddd46fb7a618db9-runc.LB6QkS.mount: Deactivated successfully.
Dec 13 02:51:24.925214 sshd[3991]: Connection closed by authenticating user root 45.148.10.203 port 34270 [preauth]
Dec 13 02:51:24.925732 systemd[1]: sshd@13-139.178.70.104:22-45.148.10.203:34270.service: Deactivated successfully.
Dec 13 02:51:25.084120 kubelet[1576]: E1213 02:51:25.084092    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:26.084309 kubelet[1576]: E1213 02:51:26.084284    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:26.679387 systemd[1]: run-containerd-runc-k8s.io-9f6a82dda0610cdf2f5a1b2ec02f50cc1cf856fc29fe632dcddd46fb7a618db9-runc.pl8lzg.mount: Deactivated successfully.
Dec 13 02:51:27.085356 kubelet[1576]: E1213 02:51:27.085304    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:28.086049 kubelet[1576]: E1213 02:51:28.086020    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:29.086897 kubelet[1576]: E1213 02:51:29.086864    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 02:51:30.087944 kubelet[1576]: E1213 02:51:30.087908    1576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"