Feb 9 19:55:29.653845 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:55:29.653861 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:55:29.653867 kernel: Disabled fast string operations
Feb 9 19:55:29.653871 kernel: BIOS-provided physical RAM map:
Feb 9 19:55:29.653875 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Feb 9 19:55:29.653879 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Feb 9 19:55:29.653885 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Feb 9 19:55:29.653889 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Feb 9 19:55:29.653893 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Feb 9 19:55:29.653897 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Feb 9 19:55:29.653904 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Feb 9 19:55:29.653921 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Feb 9 19:55:29.653935 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Feb 9 19:55:29.653940 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 9 19:55:29.653955 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Feb 9 19:55:29.653963 kernel: NX (Execute Disable) protection: active
Feb 9 19:55:29.653968 kernel: SMBIOS 2.7 present.
Feb 9 19:55:29.653974 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Feb 9 19:55:29.653981 kernel: vmware: hypercall mode: 0x00
Feb 9 19:55:29.653986 kernel: Hypervisor detected: VMware
Feb 9 19:55:29.653992 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Feb 9 19:55:29.653997 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Feb 9 19:55:29.654001 kernel: vmware: using clock offset of 2657402447 ns
Feb 9 19:55:29.654006 kernel: tsc: Detected 3408.000 MHz processor
Feb 9 19:55:29.654011 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:55:29.654015 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:55:29.654020 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Feb 9 19:55:29.654025 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:55:29.654029 kernel: total RAM covered: 3072M
Feb 9 19:55:29.654039 kernel: Found optimal setting for mtrr clean up
Feb 9 19:55:29.654044 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Feb 9 19:55:29.654052 kernel: Using GB pages for direct mapping
Feb 9 19:55:29.660429 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:55:29.660439 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Feb 9 19:55:29.660444 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Feb 9 19:55:29.660449 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Feb 9 19:55:29.660454 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Feb 9 19:55:29.660458 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 9 19:55:29.660463 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 9 19:55:29.660471 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Feb 9 19:55:29.660477 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Feb 9 19:55:29.660483 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Feb 9 19:55:29.660488 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Feb 9 19:55:29.660493 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Feb 9 19:55:29.660499 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Feb 9 19:55:29.660504 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Feb 9 19:55:29.660509 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Feb 9 19:55:29.660514 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 9 19:55:29.660519 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 9 19:55:29.660524 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Feb 9 19:55:29.660529 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Feb 9 19:55:29.660534 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Feb 9 19:55:29.660539 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Feb 9 19:55:29.660545 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Feb 9 19:55:29.660551 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Feb 9 19:55:29.660556 kernel: system APIC only can use physical flat
Feb 9 19:55:29.660561 kernel: Setting APIC routing to physical flat.
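The BIOS-e820 entries above are regular enough to tally mechanically. Below is an illustrative sketch for summing the "usable" ranges from a captured log; the regex and helper names (`parse_e820`, `usable_bytes`) are our own, not part of the kernel or any tool.

```python
import re

# Matches "BIOS-e820: [mem 0xSTART-0xEND] TYPE" entries as they
# appear in the boot log above (an illustrative parser, not an official tool).
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def parse_e820(log_lines):
    """Yield (start, end, type) tuples from e820 map lines."""
    for line in log_lines:
        m = E820_RE.search(line)
        if m:
            yield int(m.group(1), 16), int(m.group(2), 16), m.group(3).strip()

def usable_bytes(log_lines):
    """Sum the sizes of all ranges the firmware marked 'usable'.

    Ranges are inclusive, hence the +1 on each size."""
    return sum(end - start + 1
               for start, end, kind in parse_e820(log_lines)
               if kind == "usable")

# The three "usable" ranges from the map above:
sample = [
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable",
    "kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable",
    "kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable",
]
print(usable_bytes(sample) / 2**20)  # ~2047.5 MiB for this VM
```

The ~2047.5 MiB total is consistent with the "total RAM covered: 3072M" MTRR figure only loosely: e820 reports firmware-usable RAM, while MTRR coverage includes reserved and hotplug regions.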
Feb 9 19:55:29.660566 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:55:29.660571 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 9 19:55:29.660576 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 9 19:55:29.660581 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 9 19:55:29.660586 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 9 19:55:29.660592 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 9 19:55:29.660597 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 9 19:55:29.660601 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 9 19:55:29.660606 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Feb 9 19:55:29.660611 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Feb 9 19:55:29.660616 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Feb 9 19:55:29.660621 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Feb 9 19:55:29.660626 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Feb 9 19:55:29.660631 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Feb 9 19:55:29.660636 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Feb 9 19:55:29.660642 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Feb 9 19:55:29.660647 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Feb 9 19:55:29.660652 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Feb 9 19:55:29.660657 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Feb 9 19:55:29.660662 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Feb 9 19:55:29.660667 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Feb 9 19:55:29.660672 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Feb 9 19:55:29.660677 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Feb 9 19:55:29.660682 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Feb 9 19:55:29.660687 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Feb 9 19:55:29.660693 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Feb 9 19:55:29.660698 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Feb 9 19:55:29.660703 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Feb 9 19:55:29.660708 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Feb 9 19:55:29.660712 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Feb 9 19:55:29.660718 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Feb 9 19:55:29.660723 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Feb 9 19:55:29.660728 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Feb 9 19:55:29.660733 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Feb 9 19:55:29.660738 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Feb 9 19:55:29.660744 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Feb 9 19:55:29.660749 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Feb 9 19:55:29.660754 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Feb 9 19:55:29.660759 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Feb 9 19:55:29.660764 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Feb 9 19:55:29.660769 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Feb 9 19:55:29.660774 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Feb 9 19:55:29.660779 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Feb 9 19:55:29.660784 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Feb 9 19:55:29.660788 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Feb 9 19:55:29.660794 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Feb 9 19:55:29.660799 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Feb 9 19:55:29.660804 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Feb 9 19:55:29.660809 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Feb 9 19:55:29.660814 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Feb 9 19:55:29.660819 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Feb 9 19:55:29.660824 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Feb 9 19:55:29.660829 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Feb 9 19:55:29.660834 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Feb 9 19:55:29.660839 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Feb 9 19:55:29.660845 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Feb 9 19:55:29.660850 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Feb 9 19:55:29.660855 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Feb 9 19:55:29.660860 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Feb 9 19:55:29.660864 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Feb 9 19:55:29.660870 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Feb 9 19:55:29.660879 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Feb 9 19:55:29.660885 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Feb 9 19:55:29.660890 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Feb 9 19:55:29.660895 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Feb 9 19:55:29.660901 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Feb 9 19:55:29.660907 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Feb 9 19:55:29.660913 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Feb 9 19:55:29.660918 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Feb 9 19:55:29.660923 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Feb 9 19:55:29.660928 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Feb 9 19:55:29.660934 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Feb 9 19:55:29.660939 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Feb 9 19:55:29.660945 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Feb 9 19:55:29.660950 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Feb 9 19:55:29.660956 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Feb 9 19:55:29.660961 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Feb 9 19:55:29.660966 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Feb 9 19:55:29.660972 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Feb 9 19:55:29.660977 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Feb 9 19:55:29.660982 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Feb 9 19:55:29.660987 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Feb 9 19:55:29.660994 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Feb 9 19:55:29.660999 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Feb 9 19:55:29.661004 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Feb 9 19:55:29.661010 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Feb 9 19:55:29.661015 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Feb 9 19:55:29.661020 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Feb 9 19:55:29.661026 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Feb 9 19:55:29.661031 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Feb 9 19:55:29.661036 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Feb 9 19:55:29.661042 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Feb 9 19:55:29.661048 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Feb 9 19:55:29.661053 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Feb 9 19:55:29.661059 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Feb 9 19:55:29.661064 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Feb 9 19:55:29.661069 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Feb 9 19:55:29.661074 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Feb 9 19:55:29.661080 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Feb 9 19:55:29.661085 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Feb 9 19:55:29.661090 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Feb 9 19:55:29.661096 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Feb 9 19:55:29.661102 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Feb 9 19:55:29.661107 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Feb 9 19:55:29.661112 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Feb 9 19:55:29.661118 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Feb 9 19:55:29.661123 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Feb 9 19:55:29.661128 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Feb 9 19:55:29.661134 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Feb 9 19:55:29.661139 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Feb 9 19:55:29.661144 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Feb 9 19:55:29.661149 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Feb 9 19:55:29.661155 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Feb 9 19:55:29.661161 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Feb 9 19:55:29.661166 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Feb 9 19:55:29.661171 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Feb 9 19:55:29.661177 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Feb 9 19:55:29.661182 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Feb 9 19:55:29.661187 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Feb 9 19:55:29.661193 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Feb 9 19:55:29.661198 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Feb 9 19:55:29.661203 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Feb 9 19:55:29.661210 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Feb 9 19:55:29.661215 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Feb 9 19:55:29.661221 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Feb 9 19:55:29.661226 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Feb 9 19:55:29.661231 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Feb 9 19:55:29.661236 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Feb 9 19:55:29.661242 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 9 19:55:29.661247 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 9 19:55:29.661253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Feb 9 19:55:29.661259 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Feb 9 19:55:29.661265 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Feb 9 19:55:29.661270 kernel: Zone ranges:
Feb 9 19:55:29.661276 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:55:29.661282 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Feb 9 19:55:29.661287 kernel: Normal empty
Feb 9 19:55:29.661292 kernel: Movable zone start for each node
Feb 9 19:55:29.661298 kernel: Early memory node ranges
Feb 9 19:55:29.661303 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Feb 9 19:55:29.661308 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Feb 9 19:55:29.661315 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Feb 9 19:55:29.661320 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Feb 9 19:55:29.661326 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:55:29.661331 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Feb 9 19:55:29.661336 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Feb 9 19:55:29.661342 kernel: ACPI: PM-Timer IO Port: 0x1008
Feb 9 19:55:29.661347 kernel: system APIC only can use physical flat
Feb 9 19:55:29.661352 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Feb 9 19:55:29.661358 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 9 19:55:29.661364 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 9 19:55:29.661370 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 9 19:55:29.661375 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 9 19:55:29.661380 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 9 19:55:29.661386 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 9 19:55:29.661391 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 9 19:55:29.661396 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 9 19:55:29.661401 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 9 19:55:29.661407 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 9 19:55:29.661446 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 9 19:55:29.661454 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 9 19:55:29.661460 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 9 19:55:29.661465 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 9 19:55:29.661471 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 9 19:55:29.661476 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 9 19:55:29.661482 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Feb 9 19:55:29.661487 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Feb 9 19:55:29.661492 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Feb 9 19:55:29.661498 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Feb 9 19:55:29.661504 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Feb 9 19:55:29.661510 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Feb 9 19:55:29.661515 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Feb 9 19:55:29.661521 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Feb 9 19:55:29.661526 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Feb 9 19:55:29.661531 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Feb 9 19:55:29.661537 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Feb 9 19:55:29.661542 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Feb 9 19:55:29.661547 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Feb 9 19:55:29.661554 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Feb 9 19:55:29.661559 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Feb 9 19:55:29.661564 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Feb 9 19:55:29.661570 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Feb 9 19:55:29.661575 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Feb 9 19:55:29.661580 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Feb 9 19:55:29.661586 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Feb 9 19:55:29.661591 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Feb 9 19:55:29.661596 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Feb 9 19:55:29.661602 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Feb 9 19:55:29.661608 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Feb 9 19:55:29.661614 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Feb 9 19:55:29.661619 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Feb 9 19:55:29.661625 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Feb 9 19:55:29.661630 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Feb 9 19:55:29.661635 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Feb 9 19:55:29.661641 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Feb 9 19:55:29.661646 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Feb 9 19:55:29.661651 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Feb 9 19:55:29.661658 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Feb 9 19:55:29.661663 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Feb 9 19:55:29.661669 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Feb 9 19:55:29.661674 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Feb 9 19:55:29.661680 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Feb 9 19:55:29.661685 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Feb 9 19:55:29.661690 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Feb 9 19:55:29.661696 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Feb 9 19:55:29.661701 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Feb 9 19:55:29.661706 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Feb 9 19:55:29.661713 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Feb 9 19:55:29.661718 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Feb 9 19:55:29.661724 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Feb 9 19:55:29.661730 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Feb 9 19:55:29.661735 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Feb 9 19:55:29.661740 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Feb 9 19:55:29.661746 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Feb 9 19:55:29.661753 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Feb 9 19:55:29.661761 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Feb 9 19:55:29.661769 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Feb 9 19:55:29.661777 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Feb 9 19:55:29.661783 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Feb 9 19:55:29.661789 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Feb 9 19:55:29.661794 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Feb 9 19:55:29.661800 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Feb 9 19:55:29.661805 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Feb 9 19:55:29.661810 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Feb 9 19:55:29.661816 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Feb 9 19:55:29.661821 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Feb 9 19:55:29.661827 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Feb 9 19:55:29.661833 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Feb 9 19:55:29.661838 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Feb 9 19:55:29.661844 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Feb 9 19:55:29.661849 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Feb 9 19:55:29.661854 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Feb 9 19:55:29.661860 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Feb 9 19:55:29.661865 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Feb 9 19:55:29.661870 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Feb 9 19:55:29.661880 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Feb 9 19:55:29.661886 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Feb 9 19:55:29.661893 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Feb 9 19:55:29.661901 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Feb 9 19:55:29.661907 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Feb 9 19:55:29.661912 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Feb 9 19:55:29.661918 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Feb 9 19:55:29.661923 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Feb 9 19:55:29.661928 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Feb 9 19:55:29.661934 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Feb 9 19:55:29.661940 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Feb 9 19:55:29.661946 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Feb 9 19:55:29.661951 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Feb 9 19:55:29.661956 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Feb 9 19:55:29.661962 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Feb 9 19:55:29.661967 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Feb 9 19:55:29.661972 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Feb 9 19:55:29.661978 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Feb 9 19:55:29.661983 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Feb 9 19:55:29.661989 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Feb 9 19:55:29.661995 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Feb 9 19:55:29.662000 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Feb 9 19:55:29.662006 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Feb 9 19:55:29.662011 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Feb 9 19:55:29.662017 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Feb 9 19:55:29.662022 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Feb 9 19:55:29.662028 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Feb 9 19:55:29.662033 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Feb 9 19:55:29.662040 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Feb 9 19:55:29.662045 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Feb 9 19:55:29.662050 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Feb 9 19:55:29.662056 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Feb 9 19:55:29.662061 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Feb 9 19:55:29.662066 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Feb 9 19:55:29.662072 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Feb 9 19:55:29.662077 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Feb 9 19:55:29.662083 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Feb 9 19:55:29.662088 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Feb 9 19:55:29.662095 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Feb 9 19:55:29.662100 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Feb 9 19:55:29.662105 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Feb 9 19:55:29.662111 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:55:29.662116 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Feb 9 19:55:29.662122 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:55:29.662127 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Feb 9 19:55:29.662133 kernel: TSC deadline timer available
Feb 9 19:55:29.662138 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Feb 9 19:55:29.662144 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Feb 9 19:55:29.662150 kernel: Booting paravirtualized kernel on VMware hypervisor
Feb 9 19:55:29.662155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:55:29.662161 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1
Feb 9 19:55:29.662167 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u262144
Feb 9 19:55:29.662172 kernel: pcpu-alloc: s185624 r8192 d31464 u262144 alloc=1*2097152
Feb 9 19:55:29.662178 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Feb 9 19:55:29.662183 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Feb 9 19:55:29.662188 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Feb 9 19:55:29.662194 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Feb 9 19:55:29.662199 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Feb 9 19:55:29.662205 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Feb 9 19:55:29.662210 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Feb 9 19:55:29.662223 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Feb 9 19:55:29.662229 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Feb 9 19:55:29.662235 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Feb 9 19:55:29.662241 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Feb 9 19:55:29.662248 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Feb 9 19:55:29.662253 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Feb 9 19:55:29.662259 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Feb 9 19:55:29.662265 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Feb 9 19:55:29.662270 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Feb 9 19:55:29.662276 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Feb 9 19:55:29.662282 kernel: Policy zone: DMA32
Feb 9 19:55:29.662288 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:55:29.662294 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
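The SRAT affinity entries above follow one fixed pattern per logical CPU. A small illustrative sketch (the regex and the `apics_per_node` helper are our own, not kernel code) can cross-check them against the "smpboot: Allowing 128 CPUs" line:

```python
import re
from collections import Counter

# Matches "SRAT: PXM p -> APIC 0xNN -> Node n" entries from the log above.
SRAT_RE = re.compile(r"SRAT: PXM (\d+) -> APIC 0x([0-9a-f]+) -> Node (\d+)")

def apics_per_node(log_lines):
    """Count how many APIC IDs the SRAT assigns to each NUMA node."""
    counts = Counter()
    for line in log_lines:
        m = SRAT_RE.search(line)
        if m:
            counts[int(m.group(3))] += 1
    return dict(counts)

# A few entries in the log's format:
sample = [
    "kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0",
    "kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0",
    "kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0",
]
print(apics_per_node(sample))  # {0: 3}
```

Run over the full log, the count for node 0 is 128 (APIC IDs 0x00 through 0xfe, stepping by 2), matching the 128 CPUs the kernel allows at smpboot.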
Feb 9 19:55:29.662301 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Feb 9 19:55:29.662307 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Feb 9 19:55:29.662313 kernel: printk: log_buf_len min size: 262144 bytes
Feb 9 19:55:29.662319 kernel: printk: log_buf_len: 1048576 bytes
Feb 9 19:55:29.662324 kernel: printk: early log buf free: 239728(91%)
Feb 9 19:55:29.662330 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:55:29.662336 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:55:29.662342 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:55:29.662349 kernel: Memory: 1942952K/2096628K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 153416K reserved, 0K cma-reserved)
Feb 9 19:55:29.662355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Feb 9 19:55:29.662361 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:55:29.662367 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:55:29.662373 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:55:29.662380 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:55:29.662387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Feb 9 19:55:29.662393 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:55:29.662399 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:55:29.662405 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:55:29.662427 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Feb 9 19:55:29.662435 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Feb 9 19:55:29.662442 kernel: random: crng init done
Feb 9 19:55:29.662450 kernel: Console: colour VGA+ 80x25
Feb 9 19:55:29.662457 kernel: printk: console [tty0] enabled
Feb 9 19:55:29.662473 kernel: printk: console [ttyS0] enabled
Feb 9 19:55:29.662481 kernel: ACPI: Core revision 20210730
Feb 9 19:55:29.662487 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Feb 9 19:55:29.662493 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:55:29.662511 kernel: x2apic enabled
Feb 9 19:55:29.662518 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:55:29.662524 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:55:29.662530 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Feb 9 19:55:29.662536 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Feb 9 19:55:29.662543 kernel: Disabled fast string operations
Feb 9 19:55:29.662550 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:55:29.662559 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:55:29.662565 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:55:29.662575 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 19:55:29.662584 kernel: Spectre V2 : Mitigation: Enhanced IBRS
Feb 9 19:55:29.662591 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:55:29.662596 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 9 19:55:29.662602 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 9 19:55:29.662610 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 19:55:29.662616 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 19:55:29.662623 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:55:29.662628 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 9 19:55:29.662634 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:55:29.662640 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:55:29.662646 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:55:29.662653 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:55:29.662661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:55:29.662668 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 9 19:55:29.662674 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:55:29.662680 kernel: pid_max: default: 131072 minimum: 1024
Feb 9 19:55:29.662686 kernel: LSM: Security Framework initializing
Feb 9 19:55:29.662691 kernel: SELinux: Initializing.
Feb 9 19:55:29.662698 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:55:29.662703 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:55:29.662710 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 9 19:55:29.662715 kernel: Performance Events: Skylake events, core PMU driver.
Feb 9 19:55:29.662722 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Feb 9 19:55:29.662728 kernel: core: CPUID marked event: 'instructions' unavailable
Feb 9 19:55:29.662734 kernel: core: CPUID marked event: 'bus cycles' unavailable
Feb 9 19:55:29.662739 kernel: core: CPUID marked event: 'cache references' unavailable
Feb 9 19:55:29.662745 kernel: core: CPUID marked event: 'cache misses' unavailable
Feb 9 19:55:29.662751 kernel: core: CPUID marked event: 'branch instructions' unavailable
Feb 9 19:55:29.662757 kernel: core: CPUID marked event: 'branch misses' unavailable
Feb 9 19:55:29.662766 kernel: ... version: 1
Feb 9 19:55:29.662773 kernel: ... bit width: 48
Feb 9 19:55:29.662779 kernel: ... generic registers: 4
Feb 9 19:55:29.662785 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:55:29.662791 kernel: ... max period: 000000007fffffff
Feb 9 19:55:29.662797 kernel: ... fixed-purpose events: 0
Feb 9 19:55:29.662802 kernel: ... event mask: 000000000000000f
Feb 9 19:55:29.662808 kernel: signal: max sigframe size: 1776
Feb 9 19:55:29.662814 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:55:29.662820 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:55:29.662826 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:55:29.662832 kernel: x86: Booting SMP configuration:
Feb 9 19:55:29.662838 kernel: ....
node #0, CPUs: #1 Feb 9 19:55:29.662844 kernel: Disabled fast string operations Feb 9 19:55:29.662849 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 9 19:55:29.662855 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 9 19:55:29.662861 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:55:29.662867 kernel: smpboot: Max logical packages: 128 Feb 9 19:55:29.662875 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 9 19:55:29.662881 kernel: devtmpfs: initialized Feb 9 19:55:29.662889 kernel: x86/mm: Memory block size: 128MB Feb 9 19:55:29.662894 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 9 19:55:29.662900 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:55:29.662906 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 9 19:55:29.662912 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:55:29.662918 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:55:29.662924 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:55:29.662930 kernel: audit: type=2000 audit(1707508528.058:1): state=initialized audit_enabled=0 res=1 Feb 9 19:55:29.662935 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:55:29.662942 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:55:29.662948 kernel: cpuidle: using governor menu Feb 9 19:55:29.662954 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 9 19:55:29.662960 kernel: ACPI: bus type PCI registered Feb 9 19:55:29.662965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:55:29.662971 kernel: dca service started, version 1.12.1 Feb 9 19:55:29.662977 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 9 19:55:29.662984 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Feb 9 
19:55:29.662993 kernel: PCI: Using configuration type 1 for base access Feb 9 19:55:29.663000 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 9 19:55:29.663006 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:55:29.663012 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:55:29.663018 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:55:29.663024 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:55:29.663030 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:55:29.663036 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:55:29.663041 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:55:29.663047 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:55:29.663054 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:55:29.663060 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:55:29.663066 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 9 19:55:29.663072 kernel: ACPI: Interpreter enabled Feb 9 19:55:29.663078 kernel: ACPI: PM: (supports S0 S1 S5) Feb 9 19:55:29.663083 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:55:29.663089 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:55:29.663095 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Feb 9 19:55:29.663104 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 9 19:55:29.663185 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:55:29.663248 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 9 19:55:29.663306 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 9 19:55:29.663315 kernel: PCI host bridge to bus 0000:00 Feb 9 19:55:29.663362 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 19:55:29.663429 kernel: pci_bus 
0000:00: root bus resource [mem 0x000cc000-0x000cffff window] Feb 9 19:55:29.663481 kernel: pci_bus 0000:00: root bus resource [mem 0x000d0000-0x000d3fff window] Feb 9 19:55:29.663544 kernel: pci_bus 0000:00: root bus resource [mem 0x000d4000-0x000d7fff window] Feb 9 19:55:29.663585 kernel: pci_bus 0000:00: root bus resource [mem 0x000d8000-0x000dbfff window] Feb 9 19:55:29.663625 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 9 19:55:29.663664 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:55:29.663709 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 9 19:55:29.663749 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 9 19:55:29.663808 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 9 19:55:29.663865 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 9 19:55:29.663931 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 9 19:55:29.663996 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 9 19:55:29.664043 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 9 19:55:29.664101 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 9 19:55:29.664148 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 9 19:55:29.664208 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 9 19:55:29.664255 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 9 19:55:29.664303 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 9 19:55:29.664349 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Feb 9 19:55:29.664394 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 9 19:55:29.664456 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 9 19:55:29.664506 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 9 19:55:29.664552 kernel: pci 0000:00:07.7: reg 0x14: [mem 
0xfebfe000-0xfebfffff 64bit] Feb 9 19:55:29.664601 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 9 19:55:29.664649 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 9 19:55:29.664712 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 9 19:55:29.664770 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 9 19:55:29.664827 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 9 19:55:29.664878 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:55:29.664927 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 9 19:55:29.664979 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.666488 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.666556 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.666611 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.666671 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.666724 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.666779 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.666831 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.666886 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.666952 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667011 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667071 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667128 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667180 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667233 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667284 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 9 
19:55:29.667337 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667391 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667471 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667523 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667577 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667627 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667684 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667735 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667788 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667839 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667893 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.667943 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.667999 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668049 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668103 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668153 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668205 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668255 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668310 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668363 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668444 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668501 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668561 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668613 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold 
Feb 9 19:55:29.668666 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668721 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668774 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668825 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668877 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.668927 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.668981 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669034 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669087 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669137 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669190 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669240 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669297 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669349 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669424 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669480 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669534 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669583 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669637 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669687 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669743 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669793 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.669845 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 9 19:55:29.669894 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot 
D3cold Feb 9 19:55:29.669948 kernel: pci_bus 0000:01: extended config space not accessible Feb 9 19:55:29.669999 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 19:55:29.670055 kernel: pci_bus 0000:02: extended config space not accessible Feb 9 19:55:29.670064 kernel: acpiphp: Slot [32] registered Feb 9 19:55:29.670070 kernel: acpiphp: Slot [33] registered Feb 9 19:55:29.670076 kernel: acpiphp: Slot [34] registered Feb 9 19:55:29.670082 kernel: acpiphp: Slot [35] registered Feb 9 19:55:29.670088 kernel: acpiphp: Slot [36] registered Feb 9 19:55:29.670094 kernel: acpiphp: Slot [37] registered Feb 9 19:55:29.670100 kernel: acpiphp: Slot [38] registered Feb 9 19:55:29.670107 kernel: acpiphp: Slot [39] registered Feb 9 19:55:29.670113 kernel: acpiphp: Slot [40] registered Feb 9 19:55:29.670119 kernel: acpiphp: Slot [41] registered Feb 9 19:55:29.670125 kernel: acpiphp: Slot [42] registered Feb 9 19:55:29.670130 kernel: acpiphp: Slot [43] registered Feb 9 19:55:29.670136 kernel: acpiphp: Slot [44] registered Feb 9 19:55:29.670142 kernel: acpiphp: Slot [45] registered Feb 9 19:55:29.670148 kernel: acpiphp: Slot [46] registered Feb 9 19:55:29.670154 kernel: acpiphp: Slot [47] registered Feb 9 19:55:29.670160 kernel: acpiphp: Slot [48] registered Feb 9 19:55:29.670167 kernel: acpiphp: Slot [49] registered Feb 9 19:55:29.670172 kernel: acpiphp: Slot [50] registered Feb 9 19:55:29.670178 kernel: acpiphp: Slot [51] registered Feb 9 19:55:29.670184 kernel: acpiphp: Slot [52] registered Feb 9 19:55:29.670190 kernel: acpiphp: Slot [53] registered Feb 9 19:55:29.670196 kernel: acpiphp: Slot [54] registered Feb 9 19:55:29.670201 kernel: acpiphp: Slot [55] registered Feb 9 19:55:29.670207 kernel: acpiphp: Slot [56] registered Feb 9 19:55:29.670213 kernel: acpiphp: Slot [57] registered Feb 9 19:55:29.670220 kernel: acpiphp: Slot [58] registered Feb 9 19:55:29.670226 kernel: acpiphp: Slot [59] registered Feb 9 19:55:29.670232 kernel: acpiphp: Slot [60] registered Feb 9 
19:55:29.670237 kernel: acpiphp: Slot [61] registered Feb 9 19:55:29.670243 kernel: acpiphp: Slot [62] registered Feb 9 19:55:29.670249 kernel: acpiphp: Slot [63] registered Feb 9 19:55:29.670299 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 9 19:55:29.670348 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 9 19:55:29.670396 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 9 19:55:29.670462 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:55:29.670512 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 9 19:55:29.670561 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000cffff window] (subtractive decode) Feb 9 19:55:29.670610 kernel: pci 0000:00:11.0: bridge window [mem 0x000d0000-0x000d3fff window] (subtractive decode) Feb 9 19:55:29.670658 kernel: pci 0000:00:11.0: bridge window [mem 0x000d4000-0x000d7fff window] (subtractive decode) Feb 9 19:55:29.670706 kernel: pci 0000:00:11.0: bridge window [mem 0x000d8000-0x000dbfff window] (subtractive decode) Feb 9 19:55:29.670755 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 9 19:55:29.670806 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 9 19:55:29.670875 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 9 19:55:29.670938 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 9 19:55:29.670991 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 9 19:55:29.671042 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 9 19:55:29.671092 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 9 19:55:29.671144 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 9 19:55:29.671194 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 9 19:55:29.671248 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 9 19:55:29.671297 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 9 19:55:29.671346 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 9 19:55:29.671396 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 9 19:55:29.671570 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 9 19:55:29.671623 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 9 19:55:29.671674 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:55:29.671728 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 9 19:55:29.671778 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 9 19:55:29.671828 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 9 19:55:29.671877 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:55:29.671928 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 9 19:55:29.671977 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 9 19:55:29.672026 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:55:29.672078 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 9 19:55:29.672127 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 9 19:55:29.672176 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:55:29.672226 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 9 19:55:29.672303 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 9 19:55:29.672361 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:55:29.673485 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 9 19:55:29.673552 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 9 19:55:29.673600 kernel: pci 0000:00:15.6: bridge window [mem 
0xe6400000-0xe64fffff 64bit pref] Feb 9 19:55:29.673647 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 9 19:55:29.673692 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 9 19:55:29.673737 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:55:29.673789 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 9 19:55:29.673839 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 9 19:55:29.673885 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 9 19:55:29.673931 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 9 19:55:29.673976 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 9 19:55:29.674021 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 9 19:55:29.674067 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 9 19:55:29.674113 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 19:55:29.674158 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 9 19:55:29.674207 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 9 19:55:29.674253 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 9 19:55:29.674296 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 9 19:55:29.674342 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 9 19:55:29.674386 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 9 19:55:29.677940 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 9 19:55:29.677995 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:55:29.678048 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 9 19:55:29.678100 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 9 19:55:29.678155 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 9 19:55:29.678201 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:55:29.678247 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 9 19:55:29.678292 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 9 19:55:29.678336 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:55:29.678381 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 9 19:55:29.681310 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 9 19:55:29.681365 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:55:29.681463 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 9 19:55:29.681516 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 9 19:55:29.681560 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:55:29.681607 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 9 19:55:29.681652 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 9 19:55:29.681697 kernel: pci 0000:00:16.6: bridge window [mem 
0xe6300000-0xe63fffff 64bit pref] Feb 9 19:55:29.681746 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 9 19:55:29.681791 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 9 19:55:29.681834 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:55:29.681880 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 9 19:55:29.681924 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 9 19:55:29.681967 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 9 19:55:29.682011 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:55:29.682058 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 9 19:55:29.682105 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 9 19:55:29.682149 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 9 19:55:29.682193 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:55:29.682238 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 9 19:55:29.682289 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 9 19:55:29.682346 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 9 19:55:29.682396 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:55:29.683677 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 9 19:55:29.683727 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 9 19:55:29.683774 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:55:29.683826 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 9 19:55:29.683875 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 9 19:55:29.683919 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:55:29.683968 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 9 19:55:29.684012 kernel: pci 0000:00:17.5: bridge window [mem 
0xfbf00000-0xfbffffff] Feb 9 19:55:29.684074 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 19:55:29.684123 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 9 19:55:29.684182 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 9 19:55:29.684231 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:55:29.684277 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 9 19:55:29.684323 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 9 19:55:29.684367 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:55:29.687633 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 9 19:55:29.687706 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 9 19:55:29.687768 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 9 19:55:29.687818 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:55:29.687867 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 9 19:55:29.687912 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 9 19:55:29.687957 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 9 19:55:29.688001 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:55:29.688049 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 9 19:55:29.688102 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 9 19:55:29.688166 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:55:29.688235 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 9 19:55:29.688308 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 9 19:55:29.688364 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:55:29.689238 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 9 19:55:29.689301 kernel: pci 0000:00:18.4: bridge window [mem 
0xfc200000-0xfc2fffff] Feb 9 19:55:29.689348 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 19:55:29.689400 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 9 19:55:29.689596 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 9 19:55:29.689657 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:55:29.689707 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 9 19:55:29.689753 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 9 19:55:29.689799 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:55:29.689846 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 9 19:55:29.689890 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 9 19:55:29.689939 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:55:29.689947 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Feb 9 19:55:29.689953 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Feb 9 19:55:29.689960 kernel: ACPI: PCI: Interrupt link LNKB disabled Feb 9 19:55:29.689966 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:55:29.689972 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Feb 9 19:55:29.689978 kernel: iommu: Default domain type: Translated Feb 9 19:55:29.689984 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:55:29.690034 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Feb 9 19:55:29.690095 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:55:29.690149 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Feb 9 19:55:29.690159 kernel: vgaarb: loaded Feb 9 19:55:29.690169 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:55:29.690177 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:55:29.690186 kernel: PTP clock support registered Feb 9 19:55:29.690193 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:55:29.690199 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:55:29.690205 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Feb 9 19:55:29.690213 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Feb 9 19:55:29.690219 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Feb 9 19:55:29.690224 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Feb 9 19:55:29.690230 kernel: clocksource: Switched to clocksource tsc-early Feb 9 19:55:29.690236 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:55:29.690242 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:55:29.690248 kernel: pnp: PnP ACPI init Feb 9 19:55:29.690301 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Feb 9 19:55:29.690347 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Feb 9 19:55:29.690388 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Feb 9 19:55:29.690456 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Feb 9 19:55:29.690522 kernel: pnp 00:06: [dma 2] Feb 9 19:55:29.690599 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Feb 9 19:55:29.690659 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Feb 9 19:55:29.690713 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Feb 9 19:55:29.690722 kernel: pnp: PnP ACPI: found 8 devices Feb 9 19:55:29.690733 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:55:29.690739 kernel: NET: Registered PF_INET protocol family Feb 9 19:55:29.690745 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:55:29.690751 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 19:55:29.690757 
kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:55:29.690763 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:55:29.690770 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 19:55:29.690777 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 19:55:29.690784 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:55:29.690789 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:55:29.690796 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:55:29.690801 kernel: NET: Registered PF_XDP protocol family Feb 9 19:55:29.690855 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 9 19:55:29.690905 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 9 19:55:29.690953 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 9 19:55:29.691261 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 9 19:55:29.691316 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 9 19:55:29.691379 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Feb 9 19:55:29.691506 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Feb 9 19:55:29.691799 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Feb 9 19:55:29.691857 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Feb 9 19:55:29.691908 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Feb 9 19:55:29.691970 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Feb 9 19:55:29.692039 kernel: pci 
0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Feb 9 19:55:29.692115 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Feb 9 19:55:29.692179 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Feb 9 19:55:29.692251 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Feb 9 19:55:29.692306 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Feb 9 19:55:29.692365 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Feb 9 19:55:29.692507 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Feb 9 19:55:29.692558 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Feb 9 19:55:29.692609 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Feb 9 19:55:29.692668 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Feb 9 19:55:29.692724 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Feb 9 19:55:29.692770 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Feb 9 19:55:29.692816 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:55:29.692860 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:55:29.692907 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.692952 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.693000 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.693044 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.693090 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.693134 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 9 
19:55:29.693178 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.693223 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.693276 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.693346 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.693434 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.693744 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.693802 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.693851 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.693904 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.694233 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.694290 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.694339 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.694390 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.694460 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.694518 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.694590 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.694660 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.694722 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.694780 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.694833 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.694901 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.694962 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.695017 kernel: pci 0000:00:17.7: BAR 13: no space for 
[io size 0x1000] Feb 9 19:55:29.695074 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.695142 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.695189 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.695234 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.695278 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.695325 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.695551 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.695629 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.695690 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.696026 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.696081 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.696130 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.696176 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.696223 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.696269 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.696560 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.696640 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.696692 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.697050 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.697103 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.697151 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.697196 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.697244 kernel: pci 
0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.697289 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.697335 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.697379 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.705309 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.705369 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.705771 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.705830 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.705878 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.705937 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.705989 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.706035 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706081 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.706127 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706171 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.706216 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706261 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.706307 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706351 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.706399 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706668 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.706721 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706769 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 9 
19:55:29.706822 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706868 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.706914 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.706959 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.707007 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.707051 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.707100 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.707146 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.707192 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 9 19:55:29.707237 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 9 19:55:29.707284 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 9 19:55:29.707330 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Feb 9 19:55:29.707376 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 9 19:55:29.707661 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 9 19:55:29.707723 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:55:29.707776 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Feb 9 19:55:29.707823 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 9 19:55:29.707870 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 9 19:55:29.707915 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 9 19:55:29.707961 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:55:29.708008 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 9 19:55:29.708054 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 9 19:55:29.708098 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 9 19:55:29.708147 kernel: 
pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:55:29.708192 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 9 19:55:29.708237 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 9 19:55:29.708282 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 9 19:55:29.708327 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:55:29.708371 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 9 19:55:29.708430 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 9 19:55:29.708478 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:55:29.708526 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 9 19:55:29.708570 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 9 19:55:29.708615 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:55:29.708659 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 9 19:55:29.708705 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 9 19:55:29.708749 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:55:29.708794 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 9 19:55:29.708839 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 9 19:55:29.708885 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 9 19:55:29.708929 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 9 19:55:29.708973 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 9 19:55:29.709017 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:55:29.709066 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Feb 9 19:55:29.709113 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 9 19:55:29.709157 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 9 19:55:29.709201 
kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 9 19:55:29.709246 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:55:29.709293 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 9 19:55:29.709339 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 9 19:55:29.709385 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 9 19:55:29.709451 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:55:29.709500 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 9 19:55:29.709545 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 9 19:55:29.709590 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 9 19:55:29.709636 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:55:29.709679 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 9 19:55:29.709727 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 9 19:55:29.709952 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:55:29.710004 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 9 19:55:29.710052 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 9 19:55:29.710377 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:55:29.710464 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 9 19:55:29.710518 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 9 19:55:29.710728 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:55:29.710783 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 9 19:55:29.710829 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 9 19:55:29.710879 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 9 19:55:29.710924 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 9 19:55:29.711182 
kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 9 19:55:29.711233 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:55:29.711281 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 9 19:55:29.711327 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 9 19:55:29.711373 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 9 19:55:29.711436 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:55:29.711487 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 9 19:55:29.711535 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 9 19:55:29.711580 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 9 19:55:29.711631 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:55:29.711678 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 9 19:55:29.711725 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 9 19:55:29.711769 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 9 19:55:29.711845 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:55:29.711928 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 9 19:55:29.711975 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 9 19:55:29.712253 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:55:29.712675 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 9 19:55:29.712730 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 9 19:55:29.712778 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:55:29.712824 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 9 19:55:29.712870 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 9 19:55:29.712915 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 
19:55:29.713199 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 9 19:55:29.713251 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 9 19:55:29.713297 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:55:29.713352 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 9 19:55:29.713399 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 9 19:55:29.713695 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:55:29.713755 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 9 19:55:29.713804 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 9 19:55:29.713855 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 9 19:55:29.713901 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:55:29.713947 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 9 19:55:29.713992 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 9 19:55:29.714037 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 9 19:55:29.714085 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:55:29.714131 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 9 19:55:29.714175 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 9 19:55:29.714219 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:55:29.714263 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 9 19:55:29.714309 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 9 19:55:29.714352 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:55:29.714397 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 9 19:55:29.714662 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 9 19:55:29.714717 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 
19:55:29.714766 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 9 19:55:29.714817 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 9 19:55:29.714863 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:55:29.714909 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 9 19:55:29.714954 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 9 19:55:29.715000 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:55:29.715046 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 9 19:55:29.715092 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 9 19:55:29.715137 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:55:29.715185 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Feb 9 19:55:29.715225 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000cffff window] Feb 9 19:55:29.715265 kernel: pci_bus 0000:00: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 9 19:55:29.715305 kernel: pci_bus 0000:00: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 9 19:55:29.715345 kernel: pci_bus 0000:00: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 9 19:55:29.715385 kernel: pci_bus 0000:00: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 9 19:55:29.715701 kernel: pci_bus 0000:00: resource 10 [io 0x0000-0x0cf7 window] Feb 9 19:55:29.715746 kernel: pci_bus 0000:00: resource 11 [io 0x0d00-0xfeff window] Feb 9 19:55:29.715791 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Feb 9 19:55:29.715833 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Feb 9 19:55:29.715879 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 9 19:55:29.715921 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Feb 9 19:55:29.715961 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000cffff window] Feb 9 19:55:29.716002 kernel: pci_bus 
0000:02: resource 6 [mem 0x000d0000-0x000d3fff window] Feb 9 19:55:29.716045 kernel: pci_bus 0000:02: resource 7 [mem 0x000d4000-0x000d7fff window] Feb 9 19:55:29.716086 kernel: pci_bus 0000:02: resource 8 [mem 0x000d8000-0x000dbfff window] Feb 9 19:55:29.716127 kernel: pci_bus 0000:02: resource 9 [mem 0xc0000000-0xfebfffff window] Feb 9 19:55:29.716167 kernel: pci_bus 0000:02: resource 10 [io 0x0000-0x0cf7 window] Feb 9 19:55:29.716207 kernel: pci_bus 0000:02: resource 11 [io 0x0d00-0xfeff window] Feb 9 19:55:29.716252 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Feb 9 19:55:29.716295 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Feb 9 19:55:29.716339 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Feb 9 19:55:29.716384 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Feb 9 19:55:29.716651 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Feb 9 19:55:29.716700 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Feb 9 19:55:29.716972 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Feb 9 19:55:29.717020 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Feb 9 19:55:29.717063 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Feb 9 19:55:29.717117 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Feb 9 19:55:29.717162 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Feb 9 19:55:29.717208 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Feb 9 19:55:29.717250 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 9 19:55:29.717298 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Feb 9 19:55:29.717349 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Feb 9 19:55:29.717398 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Feb 9 19:55:29.717685 kernel: pci_bus 0000:09: resource 2 [mem 
0xe6400000-0xe64fffff 64bit pref] Feb 9 19:55:29.717740 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Feb 9 19:55:29.717783 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Feb 9 19:55:29.717838 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Feb 9 19:55:29.717884 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Feb 9 19:55:29.717930 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Feb 9 19:55:29.717975 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Feb 9 19:55:29.718018 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Feb 9 19:55:29.718059 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Feb 9 19:55:29.718104 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Feb 9 19:55:29.718146 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Feb 9 19:55:29.718187 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Feb 9 19:55:29.718236 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Feb 9 19:55:29.718279 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 9 19:55:29.718327 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Feb 9 19:55:29.718370 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 9 19:55:29.718660 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Feb 9 19:55:29.718711 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Feb 9 19:55:29.718761 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Feb 9 19:55:29.718804 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Feb 9 19:55:29.718854 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Feb 9 19:55:29.718898 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 9 19:55:29.718942 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Feb 9 
19:55:29.718985 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Feb 9 19:55:29.719028 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 9 19:55:29.719072 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Feb 9 19:55:29.719113 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Feb 9 19:55:29.719153 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Feb 9 19:55:29.719198 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Feb 9 19:55:29.719240 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Feb 9 19:55:29.719280 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Feb 9 19:55:29.719327 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Feb 9 19:55:29.719369 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 9 19:55:29.719630 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Feb 9 19:55:29.719681 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 9 19:55:29.719732 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Feb 9 19:55:29.719776 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Feb 9 19:55:29.719822 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Feb 9 19:55:29.719868 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Feb 9 19:55:29.719916 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Feb 9 19:55:29.719959 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 9 19:55:29.720005 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Feb 9 19:55:29.720048 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Feb 9 19:55:29.720092 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Feb 9 19:55:29.720138 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Feb 9 19:55:29.720180 kernel: pci_bus 0000:1c: 
resource 1 [mem 0xfce00000-0xfcefffff] Feb 9 19:55:29.720221 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Feb 9 19:55:29.720269 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Feb 9 19:55:29.720326 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Feb 9 19:55:29.720612 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Feb 9 19:55:29.720666 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 9 19:55:29.720713 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Feb 9 19:55:29.720970 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Feb 9 19:55:29.721022 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Feb 9 19:55:29.721067 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Feb 9 19:55:29.721113 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Feb 9 19:55:29.721431 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Feb 9 19:55:29.721489 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Feb 9 19:55:29.721539 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 9 19:55:29.721592 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:55:29.721602 kernel: PCI: CLS 32 bytes, default 64 Feb 9 19:55:29.721609 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 19:55:29.721616 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 9 19:55:29.721625 kernel: clocksource: Switched to clocksource tsc Feb 9 19:55:29.721631 kernel: Initialise system trusted keyrings Feb 9 19:55:29.721637 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 19:55:29.721643 kernel: Key type asymmetric registered Feb 9 19:55:29.721649 kernel: Asymmetric key parser 'x509' registered Feb 9 19:55:29.721656 kernel: 
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:55:29.721662 kernel: io scheduler mq-deadline registered Feb 9 19:55:29.721668 kernel: io scheduler kyber registered Feb 9 19:55:29.721674 kernel: io scheduler bfq registered Feb 9 19:55:29.721979 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Feb 9 19:55:29.722032 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722082 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Feb 9 19:55:29.722133 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722179 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Feb 9 19:55:29.722225 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722270 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Feb 9 19:55:29.722320 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722367 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Feb 9 19:55:29.722448 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722496 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Feb 9 19:55:29.722542 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722588 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Feb 9 19:55:29.722636 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- 
LLActRep+ Feb 9 19:55:29.722681 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Feb 9 19:55:29.722730 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722776 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Feb 9 19:55:29.722823 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722871 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Feb 9 19:55:29.722916 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.722960 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Feb 9 19:55:29.723005 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.723049 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Feb 9 19:55:29.723093 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.723137 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Feb 9 19:55:29.723184 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.723228 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Feb 9 19:55:29.723273 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.723317 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Feb 9 19:55:29.723361 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ 
IbPresDis- LLActRep+ Feb 9 19:55:29.723414 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Feb 9 19:55:29.723462 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.723507 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Feb 9 19:55:29.723580 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.723890 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Feb 9 19:55:29.723944 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.723995 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Feb 9 19:55:29.724042 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724088 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Feb 9 19:55:29.724134 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724180 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Feb 9 19:55:29.724225 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724519 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Feb 9 19:55:29.724579 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724627 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Feb 9 19:55:29.724674 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- 
NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724720 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Feb 9 19:55:29.724765 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724813 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Feb 9 19:55:29.724858 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724903 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Feb 9 19:55:29.724947 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.724992 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Feb 9 19:55:29.725036 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.725084 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Feb 9 19:55:29.725129 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.725174 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Feb 9 19:55:29.725218 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.725262 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Feb 9 19:55:29.725309 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.725355 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Feb 9 19:55:29.725399 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- 
Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.725482 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Feb 9 19:55:29.725528 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Feb 9 19:55:29.725539 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:55:29.725547 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:55:29.725554 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:55:29.725560 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Feb 9 19:55:29.725566 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:55:29.725573 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:55:29.725635 kernel: rtc_cmos 00:01: registered as rtc0 Feb 9 19:55:29.725911 kernel: rtc_cmos 00:01: setting system clock to 2024-02-09T19:55:29 UTC (1707508529) Feb 9 19:55:29.725923 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:55:29.725973 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Feb 9 19:55:29.725982 kernel: fail to initialize ptp_kvm Feb 9 19:55:29.725989 kernel: intel_pstate: CPU model not supported Feb 9 19:55:29.726203 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:55:29.726213 kernel: Segment Routing with IPv6 Feb 9 19:55:29.726220 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:55:29.726227 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:55:29.726233 kernel: Key type dns_resolver registered Feb 9 19:55:29.726239 kernel: IPI shorthand broadcast: enabled Feb 9 19:55:29.726248 kernel: sched_clock: Marking stable (903451109, 220807858)->(1189127630, -64868663) Feb 9 19:55:29.726254 kernel: registered taskstats version 1 Feb 9 19:55:29.726260 kernel: Loading compiled-in X.509 certificates Feb 9 19:55:29.726266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:55:29.726273 kernel: Key type .fscrypt registered Feb 9 19:55:29.726279 kernel: Key type fscrypt-provisioning registered Feb 9 19:55:29.726285 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 19:55:29.726291 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:55:29.726299 kernel: ima: No architecture policies found Feb 9 19:55:29.726305 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:55:29.726312 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:55:29.726318 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:55:29.726324 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:55:29.726331 kernel: Run /init as init process Feb 9 19:55:29.726337 kernel: with arguments: Feb 9 19:55:29.726343 kernel: /init Feb 9 19:55:29.726350 kernel: with environment: Feb 9 19:55:29.726357 kernel: HOME=/ Feb 9 19:55:29.726363 kernel: TERM=linux Feb 9 19:55:29.726369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:55:29.726391 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:55:29.726403 systemd[1]: Detected virtualization vmware. Feb 9 19:55:29.726427 systemd[1]: Detected architecture x86-64. Feb 9 19:55:29.726434 systemd[1]: Running in initrd. Feb 9 19:55:29.726440 systemd[1]: No hostname configured, using default hostname. Feb 9 19:55:29.726655 systemd[1]: Hostname set to . Feb 9 19:55:29.726664 systemd[1]: Initializing machine ID from random generator. Feb 9 19:55:29.726671 systemd[1]: Queued start job for default target initrd.target. 
Feb 9 19:55:29.726677 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:55:29.726684 systemd[1]: Reached target cryptsetup.target. Feb 9 19:55:29.726690 systemd[1]: Reached target paths.target. Feb 9 19:55:29.726696 systemd[1]: Reached target slices.target. Feb 9 19:55:29.726702 systemd[1]: Reached target swap.target. Feb 9 19:55:29.726709 systemd[1]: Reached target timers.target. Feb 9 19:55:29.726718 systemd[1]: Listening on iscsid.socket. Feb 9 19:55:29.726725 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:55:29.726731 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:55:29.726939 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:55:29.726948 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:55:29.726955 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:55:29.726962 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:55:29.726970 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:55:29.726977 systemd[1]: Reached target sockets.target. Feb 9 19:55:29.726984 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:55:29.726991 systemd[1]: Finished network-cleanup.service. Feb 9 19:55:29.726997 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:55:29.727004 systemd[1]: Starting systemd-journald.service... Feb 9 19:55:29.727010 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:55:29.727017 systemd[1]: Starting systemd-resolved.service... Feb 9 19:55:29.727023 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:55:29.727031 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:55:29.727037 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:55:29.727044 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:55:29.727050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Feb 9 19:55:29.727057 kernel: audit: type=1130 audit(1707508529.673:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.727064 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:55:29.727070 kernel: audit: type=1130 audit(1707508529.677:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.727077 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:55:29.727084 systemd[1]: Started systemd-resolved.service. Feb 9 19:55:29.727091 kernel: audit: type=1130 audit(1707508529.683:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.727097 systemd[1]: Reached target nss-lookup.target. Feb 9 19:55:29.727104 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:55:29.727117 kernel: Bridge firewalling registered Feb 9 19:55:29.727123 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:55:29.727130 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:55:29.727139 kernel: audit: type=1130 audit(1707508529.697:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.727145 kernel: SCSI subsystem initialized Feb 9 19:55:29.727152 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
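The dracut-cmdline lines further below echo the kernel command line shown at the top of this log. As a minimal sketch (not dracut's actual parser), such a parameter string can be split into a dict where bare flags map to `True` and repeated keys keep the last value; the string here is copied from this boot log.

```python
# Minimal sketch: parse a kernel command line into a dict.
# Bare flags (no "=") map to True; repeated keys keep the last value,
# mirroring how later parameters generally override earlier ones.
# The string below is copied from this boot log.
CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware "
    "flatcar.autologin"
)

def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        # partition() splits only on the first "=", so values that
        # themselves contain "=" (e.g. verity.usr=PARTUUID=...) stay intact.
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

params = parse_cmdline(CMDLINE)
```

Note that `console` appears twice (ttyS0 and tty0); this simplified model keeps only the last occurrence, whereas the kernel registers both consoles.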
Feb 9 19:55:29.727158 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:55:29.727165 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:55:29.727175 systemd-journald[216]: Journal started Feb 9 19:55:29.727209 systemd-journald[216]: Runtime Journal (/run/log/journal/fc3c7784233443cc8ce91917fc3de087) is 4.8M, max 38.8M, 34.0M free. Feb 9 19:55:29.730517 systemd[1]: Started systemd-journald.service. Feb 9 19:55:29.730535 kernel: audit: type=1130 audit(1707508529.726:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.665768 systemd-modules-load[217]: Inserted module 'overlay' Feb 9 19:55:29.681556 systemd-resolved[218]: Positive Trust Anchors: Feb 9 19:55:29.681563 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:55:29.681583 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:55:29.683273 systemd-resolved[218]: Defaulting to hostname 'linux'. Feb 9 19:55:29.697604 systemd-modules-load[217]: Inserted module 'br_netfilter' Feb 9 19:55:29.732042 dracut-cmdline[233]: dracut-dracut-053 Feb 9 19:55:29.732042 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Feb 9 19:55:29.732042 dracut-cmdline[233]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:55:29.733004 systemd-modules-load[217]: Inserted module 'dm_multipath' Feb 9 19:55:29.733318 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:55:29.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.734233 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:55:29.737422 kernel: audit: type=1130 audit(1707508529.732:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:55:29.740789 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:55:29.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.743442 kernel: audit: type=1130 audit(1707508529.739:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.758417 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:55:29.766423 kernel: iscsi: registered transport (tcp) Feb 9 19:55:29.780422 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:55:29.780450 kernel: QLogic iSCSI HBA Driver Feb 9 19:55:29.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:29.796778 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:55:29.797371 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:55:29.800594 kernel: audit: type=1130 audit(1707508529.795:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:29.835429 kernel: raid6: avx2x4 gen() 48401 MB/s Feb 9 19:55:29.851422 kernel: raid6: avx2x4 xor() 21369 MB/s Feb 9 19:55:29.868422 kernel: raid6: avx2x2 gen() 53393 MB/s Feb 9 19:55:29.885455 kernel: raid6: avx2x2 xor() 31855 MB/s Feb 9 19:55:29.902453 kernel: raid6: avx2x1 gen() 44573 MB/s Feb 9 19:55:29.919420 kernel: raid6: avx2x1 xor() 27974 MB/s Feb 9 19:55:29.936421 kernel: raid6: sse2x4 gen() 20453 MB/s Feb 9 19:55:29.953450 kernel: raid6: sse2x4 xor() 11634 MB/s Feb 9 19:55:29.970424 kernel: raid6: sse2x2 gen() 21285 MB/s Feb 9 19:55:29.987424 kernel: raid6: sse2x2 xor() 13271 MB/s Feb 9 19:55:30.004428 kernel: raid6: sse2x1 gen() 18049 MB/s Feb 9 19:55:30.021829 kernel: raid6: sse2x1 xor() 8879 MB/s Feb 9 19:55:30.021863 kernel: raid6: using algorithm avx2x2 gen() 53393 MB/s Feb 9 19:55:30.021871 kernel: raid6: .... xor() 31855 MB/s, rmw enabled Feb 9 19:55:30.023427 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:55:30.032423 kernel: xor: automatically using best checksumming function avx Feb 9 19:55:30.091430 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:55:30.095607 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:55:30.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:30.096206 systemd[1]: Starting systemd-udevd.service... Feb 9 19:55:30.099543 kernel: audit: type=1130 audit(1707508530.094:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:30.094000 audit: BPF prog-id=7 op=LOAD Feb 9 19:55:30.094000 audit: BPF prog-id=8 op=LOAD Feb 9 19:55:30.106052 systemd-udevd[416]: Using default interface naming scheme 'v252'. Feb 9 19:55:30.108643 systemd[1]: Started systemd-udevd.service. 
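The raid6 lines above show the kernel benchmarking each SIMD implementation and then picking one. A simplified model of that selection (the real code also weighs architecture-specific preference flags) is just "highest gen() throughput wins"; the figures below are copied from the log.

```python
# Simplified model of the kernel's raid6 algorithm selection:
# the candidate with the highest gen() throughput wins.
# Throughput figures (MB/s) are copied from this boot log.
gen_mbps = {
    "avx2x4": 48401,
    "avx2x2": 53393,
    "avx2x1": 44573,
    "sse2x4": 20453,
    "sse2x2": 21285,
    "sse2x1": 18049,
}

best = max(gen_mbps, key=gen_mbps.get)
```

This reproduces the log's outcome: `raid6: using algorithm avx2x2 gen() 53393 MB/s`.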
Feb 9 19:55:30.109080 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:55:30.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:30.116848 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Feb 9 19:55:30.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:30.132196 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:55:30.132688 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:55:30.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:30.200370 systemd[1]: Finished systemd-udev-trigger.service. 
Feb 9 19:55:30.250537 kernel: VMware PVSCSI driver - version 1.0.7.0-k Feb 9 19:55:30.250568 kernel: vmw_pvscsi: using 64bit dma Feb 9 19:55:30.251810 kernel: vmw_pvscsi: max_id: 16 Feb 9 19:55:30.251829 kernel: vmw_pvscsi: setting ring_pages to 8 Feb 9 19:55:30.264299 kernel: vmw_pvscsi: enabling reqCallThreshold Feb 9 19:55:30.264328 kernel: vmw_pvscsi: driver-based request coalescing enabled Feb 9 19:55:30.264339 kernel: vmw_pvscsi: using MSI-X Feb 9 19:55:30.264346 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Feb 9 19:55:30.265418 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Feb 9 19:55:30.265500 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Feb 9 19:55:30.266963 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Feb 9 19:55:30.273427 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Feb 9 19:55:30.279438 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Feb 9 19:55:30.279530 kernel: libata version 3.00 loaded. Feb 9 19:55:30.281423 kernel: ata_piix 0000:00:07.1: version 2.13 Feb 9 19:55:30.284467 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:55:30.284486 kernel: scsi host1: ata_piix Feb 9 19:55:30.289020 kernel: scsi host2: ata_piix Feb 9 19:55:30.289099 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Feb 9 19:55:30.289111 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Feb 9 19:55:30.293878 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Feb 9 19:55:30.293960 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 19:55:30.297424 kernel: AES CTR mode by8 optimization enabled Feb 9 19:55:30.458468 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Feb 9 19:55:30.462424 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Feb 9 19:55:30.474956 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Feb 9 19:55:30.475089 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:55:30.475180 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Feb 9 19:55:30.476019 kernel: sd 0:0:0:0: [sda] Cache data unavailable Feb 9 19:55:30.476102 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Feb 9 19:55:30.480433 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:30.481480 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:55:30.488475 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Feb 9 19:55:30.488581 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:55:30.506424 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:55:30.623464 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:55:30.625452 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (470) Feb 9 19:55:30.629692 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:55:30.632667 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:55:30.632805 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:55:30.635879 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:55:30.636569 systemd[1]: Starting disk-uuid.service... Feb 9 19:55:30.661424 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:30.669421 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:31.698730 disk-uuid[550]: The operation has completed successfully. 
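The sd driver reports the same capacity twice, in decimal gigabytes and binary gibibytes. The arithmetic behind the log's `[sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)` line is:

```python
# Reproduce the sd capacity line from this log:
# "[sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)"
blocks = 17805312
block_size = 512

total_bytes = blocks * block_size  # 9,116,319,744 bytes
gb = total_bytes / 10**9           # decimal gigabytes (GB)
gib = total_bytes / 2**30          # binary gibibytes (GiB)
```

The ~7% gap between the two figures is purely the 10^9 vs 2^30 divisor, which is why disk vendors and filesystems often appear to disagree about a drive's size.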
Feb 9 19:55:31.699428 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:55:31.740775 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:55:31.741105 systemd[1]: Finished disk-uuid.service. Feb 9 19:55:31.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:31.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:31.743581 systemd[1]: Starting verity-setup.service... Feb 9 19:55:31.754425 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:55:31.794687 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:55:31.795146 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:55:31.795739 systemd[1]: Finished verity-setup.service. Feb 9 19:55:31.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:31.934644 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:55:31.932961 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:55:31.933573 systemd[1]: Starting afterburn-network-kargs.service... Feb 9 19:55:31.934019 systemd[1]: Starting ignition-setup.service... Feb 9 19:55:31.949776 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:55:31.949806 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:55:31.949815 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:55:31.955124 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 9 19:55:31.959113 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 9 19:55:31.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:31.966817 systemd[1]: Finished ignition-setup.service. Feb 9 19:55:31.967417 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:55:32.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.085999 systemd[1]: Finished afterburn-network-kargs.service. Feb 9 19:55:32.086760 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:55:32.137389 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:55:32.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.136000 audit: BPF prog-id=9 op=LOAD Feb 9 19:55:32.138291 systemd[1]: Starting systemd-networkd.service... Feb 9 19:55:32.152394 systemd-networkd[734]: lo: Link UP Feb 9 19:55:32.152615 systemd-networkd[734]: lo: Gained carrier Feb 9 19:55:32.152995 systemd-networkd[734]: Enumeration completed Feb 9 19:55:32.153159 systemd[1]: Started systemd-networkd.service. Feb 9 19:55:32.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.153322 systemd[1]: Reached target network.target. Feb 9 19:55:32.153623 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Feb 9 19:55:32.153883 systemd[1]: Starting iscsiuio.service... 
Feb 9 19:55:32.157242 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 9 19:55:32.157352 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 9 19:55:32.157225 systemd[1]: Started iscsiuio.service. Feb 9 19:55:32.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.157945 systemd-networkd[734]: ens192: Link UP Feb 9 19:55:32.157948 systemd-networkd[734]: ens192: Gained carrier Feb 9 19:55:32.158133 systemd[1]: Starting iscsid.service... Feb 9 19:55:32.160120 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:55:32.160120 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:55:32.160120 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:55:32.160120 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:55:32.160120 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:55:32.161067 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:55:32.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.161236 systemd[1]: Started iscsid.service. Feb 9 19:55:32.161806 systemd[1]: Starting dracut-initqueue.service...
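The iscsid warning above describes the expected InitiatorName format (an IQN like `iqn.yyyy-mm.<reversed-domain>[:identifier]`). A hypothetical validator sketched from that message, not iscsid's own parser, might look like:

```python
import re

# Hypothetical helper illustrating the InitiatorName line format the
# iscsid warning above describes. The regex is an assumption sketched
# from that message, not iscsid's actual validation logic.
IQN_LINE = re.compile(r"^InitiatorName=iqn\.\d{4}-\d{2}\.[^:\s]+(:\S+)?$")

def valid_initiatorname(line: str) -> bool:
    """True if `line` looks like InitiatorName=iqn.yyyy-mm.domain[:id]."""
    return IQN_LINE.match(line) is not None
```

The log's own example, `InitiatorName=iqn.2001-04.com.redhat:fc6`, passes this check; a line without the `iqn.` date-and-domain prefix does not.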
Feb 9 19:55:32.168705 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:55:32.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.169006 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:55:32.169248 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:55:32.169458 systemd[1]: Reached target remote-fs.target. Feb 9 19:55:32.170077 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:55:32.175267 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:55:32.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.334210 ignition[606]: Ignition 2.14.0 Feb 9 19:55:32.334223 ignition[606]: Stage: fetch-offline Feb 9 19:55:32.334260 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:55:32.334275 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:55:32.344287 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:55:32.344389 ignition[606]: parsed url from cmdline: "" Feb 9 19:55:32.344392 ignition[606]: no config URL provided Feb 9 19:55:32.344396 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:55:32.344403 ignition[606]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:55:32.344892 ignition[606]: config successfully fetched Feb 9 19:55:32.344921 ignition[606]: parsing config with SHA512: e508535e44b4022f0df5175268cc844e00767be040cd0892406d300662201f1da288332c22bf7c7d1e6554a498a70f76466e2299b45a66fa6ffc57eb565bb999 Feb 9 19:55:32.396908 unknown[606]: fetched base config from "system" Feb 9 
19:55:32.396918 unknown[606]: fetched user config from "vmware" Feb 9 19:55:32.398077 ignition[606]: fetch-offline: fetch-offline passed Feb 9 19:55:32.398113 ignition[606]: Ignition finished successfully Feb 9 19:55:32.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.398937 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:55:32.399084 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:55:32.399530 systemd[1]: Starting ignition-kargs.service... Feb 9 19:55:32.405483 ignition[754]: Ignition 2.14.0 Feb 9 19:55:32.405500 ignition[754]: Stage: kargs Feb 9 19:55:32.405583 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:55:32.405593 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Feb 9 19:55:32.406919 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Feb 9 19:55:32.408652 ignition[754]: kargs: kargs passed Feb 9 19:55:32.408703 ignition[754]: Ignition finished successfully Feb 9 19:55:32.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:32.409774 systemd[1]: Finished ignition-kargs.service. Feb 9 19:55:32.410428 systemd[1]: Starting ignition-disks.service... 
Feb 9 19:55:32.415146 ignition[760]: Ignition 2.14.0
Feb 9 19:55:32.415155 ignition[760]: Stage: disks
Feb 9 19:55:32.415231 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:32.415245 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:32.416889 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:32.418978 ignition[760]: disks: disks passed
Feb 9 19:55:32.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:32.419607 systemd[1]: Finished ignition-disks.service.
Feb 9 19:55:32.419036 ignition[760]: Ignition finished successfully
Feb 9 19:55:32.419778 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:55:32.419874 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:55:32.419965 systemd[1]: Reached target local-fs.target.
Feb 9 19:55:32.420049 systemd[1]: Reached target sysinit.target.
Feb 9 19:55:32.420130 systemd[1]: Reached target basic.target.
Feb 9 19:55:32.421343 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:55:32.433650 systemd-fsck[768]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks
Feb 9 19:55:32.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:32.435283 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:55:32.436076 systemd[1]: Mounting sysroot.mount...
Feb 9 19:55:32.445439 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:55:32.445149 systemd[1]: Mounted sysroot.mount.
Feb 9 19:55:32.445292 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:55:32.446360 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:55:32.446901 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:55:32.447084 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:55:32.447281 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:55:32.448197 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:55:32.448828 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:55:32.451774 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:55:32.455356 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:55:32.457736 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:55:32.459979 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:55:32.533167 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:55:32.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:32.533911 systemd[1]: Starting ignition-mount.service...
Feb 9 19:55:32.534469 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:55:32.538883 bash[819]: umount: /sysroot/usr/share/oem: not mounted.
Feb 9 19:55:32.544879 ignition[820]: INFO : Ignition 2.14.0
Feb 9 19:55:32.544879 ignition[820]: INFO : Stage: mount
Feb 9 19:55:32.545202 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:32.545202 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:32.547033 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:32.548969 ignition[820]: INFO : mount: mount passed
Feb 9 19:55:32.549092 ignition[820]: INFO : Ignition finished successfully
Feb 9 19:55:32.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:32.549635 systemd[1]: Finished ignition-mount.service.
Feb 9 19:55:32.574516 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:55:32.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:32.886219 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:55:32.936434 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (829)
Feb 9 19:55:32.939090 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:55:32.939118 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:55:32.939131 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:55:32.943422 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 9 19:55:32.945230 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:55:32.945836 systemd[1]: Starting ignition-files.service...
Feb 9 19:55:32.955010 ignition[849]: INFO : Ignition 2.14.0
Feb 9 19:55:32.955010 ignition[849]: INFO : Stage: files
Feb 9 19:55:32.955349 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:32.955349 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:32.956558 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:32.958821 ignition[849]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:55:32.959211 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:55:32.959211 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:55:32.963225 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:55:32.963458 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:55:32.964287 unknown[849]: wrote ssh authorized keys file for user: core
Feb 9 19:55:32.964637 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:55:32.965084 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:55:32.965263 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 19:55:33.099181 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:55:33.173852 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:55:33.173852 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:55:33.174287 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 19:55:33.406749 systemd-networkd[734]: ens192: Gained IPv6LL
Feb 9 19:55:33.628005 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:55:33.760725 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 19:55:33.761047 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:55:33.761047 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:55:33.761047 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:55:34.187201 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:55:34.283607 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 19:55:34.283981 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:55:34.283981 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:55:34.283981 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:55:34.349719 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:55:34.496860 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 19:55:34.497169 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:55:34.497169 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:55:34.497169 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:55:34.542041 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:55:34.943404 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 19:55:34.943763 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:55:34.943960 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:55:34.944156 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:55:34.997356 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:55:35.247151 ignition[849]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 9 19:55:35.247605 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:55:35.247605 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:55:35.247605 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:55:35.247605 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:55:35.247605 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 19:55:35.692686 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 19:55:35.748944 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:55:35.749183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:55:35.749183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:55:35.749183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:55:35.749183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:55:35.749183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:55:35.749183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:55:35.749183 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:55:35.750272 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:55:35.750272 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:55:35.750272 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:55:35.753924 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Feb 9 19:55:35.754103 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:55:35.759125 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4081441369"
Feb 9 19:55:35.760533 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (851)
Feb 9 19:55:35.760547 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4081441369": device or resource busy
Feb 9 19:55:35.760547 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4081441369", trying btrfs: device or resource busy
Feb 9 19:55:35.760547 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4081441369"
Feb 9 19:55:35.761435 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4081441369"
Feb 9 19:55:35.762600 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem4081441369"
Feb 9 19:55:35.762810 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem4081441369"
Feb 9 19:55:35.763040 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(15): [started] processing unit "vmtoolsd.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(15): [finished] processing unit "vmtoolsd.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(18): [started] processing unit "prepare-helm.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(18): [finished] processing unit "prepare-helm.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(1a): [started] processing unit "coreos-metadata.service"
Feb 9 19:55:35.763519 ignition[849]: INFO : files: op(1a): op(1b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:55:35.763402 systemd[1]: mnt-oem4081441369.mount: Deactivated successfully.
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1a): op(1b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1a): [finished] processing unit "coreos-metadata.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1e): [started] setting preset to enabled for "vmtoolsd.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1e): [finished] setting preset to enabled for "vmtoolsd.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(21): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:55:35.766003 ignition[849]: INFO : files: op(21): op(22): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:55:35.813719 ignition[849]: INFO : files: op(21): op(22): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:55:35.813719 ignition[849]: INFO : files: op(21): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:55:35.813719 ignition[849]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:55:35.813719 ignition[849]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:55:35.813719 ignition[849]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:55:35.813719 ignition[849]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:55:35.813719 ignition[849]: INFO : files: files passed
Feb 9 19:55:35.813719 ignition[849]: INFO : Ignition finished successfully
Feb 9 19:55:35.814154 systemd[1]: Finished ignition-files.service.
Feb 9 19:55:35.815223 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:55:35.815386 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:55:35.816757 kernel: kauditd_printk_skb: 24 callbacks suppressed
Feb 9 19:55:35.816776 kernel: audit: type=1130 audit(1707508535.813:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.815776 systemd[1]: Starting ignition-quench.service...
Feb 9 19:55:35.824247 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:55:35.824295 systemd[1]: Finished ignition-quench.service.
Feb 9 19:55:35.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.830001 kernel: audit: type=1130 audit(1707508535.823:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.830020 kernel: audit: type=1131 audit(1707508535.823:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.842690 initrd-setup-root-after-ignition[875]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:55:35.843016 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:55:35.845757 kernel: audit: type=1130 audit(1707508535.841:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.843165 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:55:35.846588 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:55:35.854335 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:55:35.854534 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:55:35.854806 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:55:35.857421 kernel: audit: type=1130 audit(1707508535.853:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.857437 kernel: audit: type=1131 audit(1707508535.853:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.859673 systemd[1]: Reached target initrd.target.
Feb 9 19:55:35.859800 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:55:35.860186 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:55:35.866630 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:55:35.867211 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:55:35.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.870434 kernel: audit: type=1130 audit(1707508535.865:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.872992 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:55:35.873244 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:55:35.873552 systemd[1]: Stopped target timers.target.
Feb 9 19:55:35.873805 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:55:35.873994 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:55:35.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.874323 systemd[1]: Stopped target initrd.target.
Feb 9 19:55:35.876824 systemd[1]: Stopped target basic.target.
Feb 9 19:55:35.877063 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:55:35.877314 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:55:35.877447 kernel: audit: type=1131 audit(1707508535.872:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.877609 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:55:35.877870 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:55:35.878117 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:55:35.878365 systemd[1]: Stopped target sysinit.target.
Feb 9 19:55:35.878623 systemd[1]: Stopped target local-fs.target.
Feb 9 19:55:35.878861 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:55:35.879103 systemd[1]: Stopped target swap.target.
Feb 9 19:55:35.879319 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:55:35.879492 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:55:35.879736 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:55:35.879886 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:55:35.879942 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:55:35.880184 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:55:35.884952 kernel: audit: type=1131 audit(1707508535.878:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.884964 kernel: audit: type=1131 audit(1707508535.878:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.880239 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:55:35.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.885087 systemd[1]: Stopped target paths.target.
Feb 9 19:55:35.885231 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:55:35.886575 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:55:35.886725 systemd[1]: Stopped target slices.target.
Feb 9 19:55:35.886894 systemd[1]: Stopped target sockets.target.
Feb 9 19:55:35.887054 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:55:35.887113 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:55:35.887362 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:55:35.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.887423 systemd[1]: Stopped ignition-files.service.
Feb 9 19:55:35.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:55:35.888037 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:55:35.895695 iscsid[739]: iscsid shutting down.
Feb 9 19:55:35.888241 systemd[1]: Stopping iscsid.service...
Feb 9 19:55:35.895999 ignition[888]: INFO : Ignition 2.14.0
Feb 9 19:55:35.895999 ignition[888]: INFO : Stage: umount
Feb 9 19:55:35.895999 ignition[888]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:55:35.895999 ignition[888]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed
Feb 9 19:55:35.895999 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 9 19:55:35.888713 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:55:35.888807 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:55:35.888894 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:55:35.889077 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:55:35.889146 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:55:35.890780 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:55:35.890820 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:55:35.893684 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:55:35.893732 systemd[1]: Stopped iscsid.service.
Feb 9 19:55:35.895158 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:55:35.895175 systemd[1]: Closed iscsid.socket.
Feb 9 19:55:35.895295 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:55:35.900508 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:55:35.900556 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:55:35.901482 systemd[1]: Stopped target network.target. Feb 9 19:55:35.901676 ignition[888]: INFO : umount: umount passed Feb 9 19:55:35.901676 ignition[888]: INFO : Ignition finished successfully Feb 9 19:55:35.902225 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:55:35.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.902249 systemd[1]: Closed iscsiuio.socket. Feb 9 19:55:35.902771 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:55:35.903431 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:55:35.904277 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:55:35.905027 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:55:35.905209 systemd[1]: Stopped ignition-mount.service. Feb 9 19:55:35.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.905590 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:55:35.905788 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:55:35.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.906063 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:55:35.906085 systemd[1]: Stopped ignition-disks.service. Feb 9 19:55:35.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.907254 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Feb 9 19:55:35.907276 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:55:35.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.907700 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:55:35.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.907723 systemd[1]: Stopped ignition-setup.service. Feb 9 19:55:35.907878 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:55:35.907905 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:55:35.908121 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:55:35.908173 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:55:35.909502 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:55:35.909561 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:55:35.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.910239 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:55:35.909000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:55:35.910257 systemd[1]: Closed systemd-networkd.socket. 
Feb 9 19:55:35.911132 systemd[1]: Stopping network-cleanup.service... Feb 9 19:55:35.911399 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:55:35.911440 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:55:35.911810 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Feb 9 19:55:35.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.911839 systemd[1]: Stopped afterburn-network-kargs.service. Feb 9 19:55:35.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.911000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:55:35.912380 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:55:35.912401 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:55:35.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.912938 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:55:35.912961 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:55:35.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.914725 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:55:35.915584 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:55:35.917491 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:55:35.918249 systemd[1]: Stopped network-cleanup.service. 
Feb 9 19:55:35.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.918663 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:55:35.918868 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:55:35.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.919348 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:55:35.919656 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:55:35.919879 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:55:35.920029 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:55:35.920253 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:55:35.920278 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:55:35.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.920483 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:55:35.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.920508 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:55:35.920708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:55:35.920728 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:55:35.921215 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:55:35.921327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Feb 9 19:55:35.921358 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:55:35.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.924656 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:55:35.924687 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:55:35.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.924909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:55:35.924929 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:55:35.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.925550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 19:55:35.925805 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:55:35.925858 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:55:35.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:35.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:55:35.926087 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:55:35.926530 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:55:35.933194 systemd[1]: Switching root. Feb 9 19:55:35.949563 systemd-journald[216]: Journal stopped Feb 9 19:55:38.449114 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Feb 9 19:55:38.449135 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:55:38.449143 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:55:38.449149 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:55:38.449154 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:55:38.449161 kernel: SELinux: policy capability open_perms=1 Feb 9 19:55:38.449167 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:55:38.449173 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:55:38.449178 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:55:38.449184 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:55:38.449189 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:55:38.449195 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:55:38.449202 systemd[1]: Successfully loaded SELinux policy in 41.587ms. Feb 9 19:55:38.449209 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.031ms. Feb 9 19:55:38.449217 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:55:38.449224 systemd[1]: Detected virtualization vmware. Feb 9 19:55:38.449231 systemd[1]: Detected architecture x86-64. Feb 9 19:55:38.449237 systemd[1]: Detected first boot. 
Feb 9 19:55:38.449244 systemd[1]: Initializing machine ID from random generator. Feb 9 19:55:38.449250 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:55:38.449257 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:55:38.449263 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:55:38.449270 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:55:38.449277 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:55:38.449286 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:55:38.449292 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:55:38.449299 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:55:38.449306 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:55:38.449312 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:55:38.449319 systemd[1]: Created slice system-getty.slice. Feb 9 19:55:38.449325 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:55:38.449333 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:55:38.449339 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:55:38.449346 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:55:38.449352 systemd[1]: Created slice user.slice. Feb 9 19:55:38.449358 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:55:38.449364 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:55:38.449371 systemd[1]: Set up automount boot.automount. 
Feb 9 19:55:38.449377 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:55:38.449385 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:55:38.449392 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:55:38.449400 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:55:38.449407 systemd[1]: Reached target integritysetup.target. Feb 9 19:55:38.449427 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:55:38.449435 systemd[1]: Reached target remote-fs.target. Feb 9 19:55:38.449441 systemd[1]: Reached target slices.target. Feb 9 19:55:38.449448 systemd[1]: Reached target swap.target. Feb 9 19:55:38.449456 systemd[1]: Reached target torcx.target. Feb 9 19:55:38.449470 systemd[1]: Reached target veritysetup.target. Feb 9 19:55:38.449479 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:55:38.449490 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:55:38.449501 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:55:38.449511 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:55:38.449521 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:55:38.449535 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:55:38.449546 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:55:38.449556 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:55:38.449564 systemd[1]: Mounting media.mount... Feb 9 19:55:38.449571 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:55:38.449578 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:55:38.449585 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:55:38.449593 systemd[1]: Mounting tmp.mount... Feb 9 19:55:38.449600 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:55:38.449608 systemd[1]: Starting ignition-delete-config.service... Feb 9 19:55:38.449614 systemd[1]: Starting kmod-static-nodes.service... 
Feb 9 19:55:38.449621 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:55:38.449628 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:55:38.449634 systemd[1]: Starting modprobe@drm.service... Feb 9 19:55:38.449641 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:55:38.449648 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:55:38.449655 systemd[1]: Starting modprobe@loop.service... Feb 9 19:55:38.449663 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:55:38.449670 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:55:38.449676 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:55:38.449683 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:55:38.449690 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:55:38.449697 systemd[1]: Stopped systemd-journald.service. Feb 9 19:55:38.449704 systemd[1]: Starting systemd-journald.service... Feb 9 19:55:38.449710 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:55:38.449718 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:55:38.449725 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:55:38.449732 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:55:38.449739 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:55:38.449746 systemd[1]: Stopped verity-setup.service. Feb 9 19:55:38.449753 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:55:38.449763 systemd-journald[1000]: Journal started Feb 9 19:55:38.449793 systemd-journald[1000]: Runtime Journal (/run/log/journal/6fb48d8179d946e4bdb051807ba4996b) is 4.8M, max 38.8M, 34.0M free. 
Feb 9 19:55:36.064000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:55:36.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:55:36.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:55:36.149000 audit: BPF prog-id=10 op=LOAD Feb 9 19:55:36.149000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:55:36.149000 audit: BPF prog-id=11 op=LOAD Feb 9 19:55:36.149000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:55:36.342000 audit[921]: AVC avc: denied { associate } for pid=921 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:55:36.342000 audit[921]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0000240e2 a1=c00002a060 a2=c000028040 a3=32 items=0 ppid=904 pid=921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:36.342000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:55:36.344000 audit[921]: AVC avc: denied { associate } for pid=921 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:55:36.344000 audit[921]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000241b9 a2=1ed a3=0 items=2 ppid=904 pid=921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:36.344000 audit: CWD cwd="/" Feb 9 19:55:36.344000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:36.344000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:36.344000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:55:38.352000 audit: BPF prog-id=12 op=LOAD Feb 9 19:55:38.352000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:55:38.352000 audit: BPF prog-id=13 op=LOAD Feb 9 19:55:38.352000 audit: BPF prog-id=14 op=LOAD Feb 9 19:55:38.352000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:55:38.352000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:55:38.354000 audit: BPF prog-id=15 op=LOAD Feb 9 19:55:38.354000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:55:38.354000 audit: BPF prog-id=16 op=LOAD Feb 9 19:55:38.354000 audit: BPF prog-id=17 op=LOAD Feb 9 19:55:38.354000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:55:38.354000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:55:38.354000 audit: BPF prog-id=18 op=LOAD Feb 9 19:55:38.354000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:55:38.354000 audit: BPF prog-id=19 op=LOAD Feb 9 19:55:38.354000 audit: BPF prog-id=20 op=LOAD Feb 9 19:55:38.354000 audit: BPF prog-id=16 op=UNLOAD Feb 
9 19:55:38.354000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:55:38.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.360000 audit: BPF prog-id=18 op=UNLOAD Feb 9 19:55:38.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.428000 audit: BPF prog-id=21 op=LOAD Feb 9 19:55:38.451443 systemd[1]: Started systemd-journald.service. 
Feb 9 19:55:38.428000 audit: BPF prog-id=22 op=LOAD Feb 9 19:55:38.428000 audit: BPF prog-id=23 op=LOAD Feb 9 19:55:38.428000 audit: BPF prog-id=19 op=UNLOAD Feb 9 19:55:38.428000 audit: BPF prog-id=20 op=UNLOAD Feb 9 19:55:38.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.446000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:55:38.446000 audit[1000]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff6c5453a0 a2=4000 a3=7fff6c54543c items=0 ppid=1 pid=1000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:38.446000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:55:38.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:36.308230 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:55:38.352597 systemd[1]: Queued start job for default target multi-user.target. 
Feb 9 19:55:36.319003 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:55:38.356590 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 19:55:36.319019 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:55:38.451429 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:55:36.319044 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:55:38.451571 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:55:36.319051 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:55:38.451696 systemd[1]: Mounted media.mount. 
Feb 9 19:55:36.319076 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:55:36.319086 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:55:36.319233 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:55:36.319263 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:55:38.452240 jq[987]: true Feb 9 19:55:36.319273 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:55:36.338970 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:55:36.338998 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:55:38.452616 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:55:36.339013 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:55:38.452743 systemd[1]: Mounted sys-kernel-tracing.mount. 
Feb 9 19:55:36.339024 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:55:38.452882 systemd[1]: Mounted tmp.mount. Feb 9 19:55:36.339037 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:55:36.339047 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:55:38.059851 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:38.060005 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:38.060077 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:38.060202 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants 
/lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:55:38.060233 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:55:38.060283 /usr/lib/systemd/system-generators/torcx-generator[921]: time="2024-02-09T19:55:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:55:38.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:38.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.455703 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:55:38.455953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:55:38.456028 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:55:38.456244 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:55:38.456334 systemd[1]: Finished modprobe@drm.service. Feb 9 19:55:38.456579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:55:38.456648 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:55:38.456884 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:55:38.457181 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:55:38.458260 jq[1005]: true Feb 9 19:55:38.459644 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:55:38.460645 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:55:38.460784 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:55:38.461943 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:55:38.462473 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 9 19:55:38.462564 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:55:38.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.466378 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:55:38.468784 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:55:38.477336 systemd-journald[1000]: Time spent on flushing to /var/log/journal/6fb48d8179d946e4bdb051807ba4996b is 28.339ms for 2035 entries. Feb 9 19:55:38.477336 systemd-journald[1000]: System Journal (/var/log/journal/6fb48d8179d946e4bdb051807ba4996b) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:55:38.516245 systemd-journald[1000]: Received client request to flush runtime journal. Feb 9 19:55:38.516280 kernel: loop: module loaded Feb 9 19:55:38.516294 kernel: fuse: init (API version 7.34) Feb 9 19:55:38.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:38.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.477687 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:55:38.478578 systemd[1]: Reached target network-pre.target. Feb 9 19:55:38.479740 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:55:38.480548 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:55:38.480624 systemd[1]: Finished modprobe@loop.service. Feb 9 19:55:38.480789 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:55:38.480902 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:55:38.481889 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:55:38.481968 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:55:38.483021 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:55:38.485117 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:55:38.488998 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:55:38.489974 systemd[1]: Starting systemd-sysctl.service... 
Feb 9 19:55:38.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.516865 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:55:38.547674 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:55:38.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.548728 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:55:38.564608 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:55:38.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.591047 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:55:38.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.592153 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:55:38.598162 udevadm[1052]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:55:38.608758 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:55:38.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.609929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 9 19:55:38.685759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:55:38.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:38.709922 ignition[1010]: Ignition 2.14.0 Feb 9 19:55:38.710154 ignition[1010]: deleting config from guestinfo properties Feb 9 19:55:38.720586 ignition[1010]: Successfully deleted config Feb 9 19:55:38.721187 systemd[1]: Finished ignition-delete-config.service. Feb 9 19:55:38.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.088905 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:55:39.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.088000 audit: BPF prog-id=24 op=LOAD Feb 9 19:55:39.088000 audit: BPF prog-id=25 op=LOAD Feb 9 19:55:39.088000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:55:39.088000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:55:39.089982 systemd[1]: Starting systemd-udevd.service... Feb 9 19:55:39.101196 systemd-udevd[1055]: Using default interface naming scheme 'v252'. Feb 9 19:55:39.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.117000 audit: BPF prog-id=26 op=LOAD Feb 9 19:55:39.118335 systemd[1]: Started systemd-udevd.service. Feb 9 19:55:39.119626 systemd[1]: Starting systemd-networkd.service... 
Feb 9 19:55:39.126000 audit: BPF prog-id=27 op=LOAD Feb 9 19:55:39.126000 audit: BPF prog-id=28 op=LOAD Feb 9 19:55:39.126000 audit: BPF prog-id=29 op=LOAD Feb 9 19:55:39.128225 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:55:39.142471 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:55:39.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.160025 systemd[1]: Started systemd-userdbd.service. Feb 9 19:55:39.179444 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:55:39.208587 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:55:39.259000 audit[1060]: AVC avc: denied { confidentiality } for pid=1060 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:55:39.259000 audit[1060]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559ac3a12a90 a1=32194 a2=7fdc4a52ebc5 a3=5 items=108 ppid=1055 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:39.259000 audit: CWD cwd="/" Feb 9 19:55:39.259000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=1 name=(null) inode=24587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=2 name=(null) inode=24587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=3 name=(null) inode=24588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=4 name=(null) inode=24587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=5 name=(null) inode=24589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=6 name=(null) inode=24587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=7 name=(null) inode=24590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=8 name=(null) inode=24590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=9 name=(null) inode=24591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=10 name=(null) inode=24590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=11 name=(null) inode=24592 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=12 name=(null) inode=24590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=13 name=(null) inode=24593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=14 name=(null) inode=24590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=15 name=(null) inode=24594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=16 name=(null) inode=24590 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=17 name=(null) inode=24595 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=18 name=(null) inode=24587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=19 name=(null) inode=24596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=20 name=(null) inode=24596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH 
item=21 name=(null) inode=24597 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=22 name=(null) inode=24596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=23 name=(null) inode=24598 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=24 name=(null) inode=24596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=25 name=(null) inode=24599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=26 name=(null) inode=24596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=27 name=(null) inode=24600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=28 name=(null) inode=24596 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=29 name=(null) inode=24601 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=30 name=(null) inode=24587 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=31 name=(null) inode=24602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=32 name=(null) inode=24602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=33 name=(null) inode=24603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=34 name=(null) inode=24602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=35 name=(null) inode=24604 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=36 name=(null) inode=24602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=37 name=(null) inode=24605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=38 name=(null) inode=24602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=39 name=(null) inode=24606 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=40 name=(null) inode=24602 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=41 name=(null) inode=24607 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=42 name=(null) inode=24587 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=43 name=(null) inode=24608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=44 name=(null) inode=24608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=45 name=(null) inode=24609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=46 name=(null) inode=24608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=47 name=(null) inode=24610 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=48 name=(null) inode=24608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=49 name=(null) inode=24611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=50 name=(null) inode=24608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=51 name=(null) inode=24612 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=52 name=(null) inode=24608 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=53 name=(null) inode=24613 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=55 name=(null) inode=24614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=56 name=(null) inode=24614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=57 name=(null) inode=24615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:55:39.259000 audit: PATH item=58 name=(null) inode=24614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=59 name=(null) inode=24616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=60 name=(null) inode=24614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=61 name=(null) inode=24617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=62 name=(null) inode=24617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=63 name=(null) inode=24618 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=64 name=(null) inode=24617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=65 name=(null) inode=24619 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=66 name=(null) inode=24617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=67 
name=(null) inode=24620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=68 name=(null) inode=24617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=69 name=(null) inode=24621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=70 name=(null) inode=24617 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=71 name=(null) inode=24622 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=72 name=(null) inode=24614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=73 name=(null) inode=24623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=74 name=(null) inode=24623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=75 name=(null) inode=24624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=76 name=(null) inode=24623 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=77 name=(null) inode=24625 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=78 name=(null) inode=24623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=79 name=(null) inode=24626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=80 name=(null) inode=24623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=81 name=(null) inode=24627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=82 name=(null) inode=24623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=83 name=(null) inode=24628 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=84 name=(null) inode=24614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=85 name=(null) inode=24629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=86 name=(null) inode=24629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=87 name=(null) inode=24630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=88 name=(null) inode=24629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=89 name=(null) inode=24631 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=90 name=(null) inode=24629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=91 name=(null) inode=24632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=92 name=(null) inode=24629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=93 name=(null) inode=24633 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=94 name=(null) inode=24629 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=95 name=(null) inode=24634 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=96 name=(null) inode=24614 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=97 name=(null) inode=24635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=98 name=(null) inode=24635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=99 name=(null) inode=24636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=100 name=(null) inode=24635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=101 name=(null) inode=24637 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=102 name=(null) inode=24635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=103 name=(null) inode=24638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=104 name=(null) inode=24635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=105 name=(null) inode=24639 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=106 name=(null) inode=24635 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PATH item=107 name=(null) inode=24640 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:55:39.259000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:55:39.271429 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1067) Feb 9 19:55:39.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.276570 systemd-networkd[1063]: lo: Link UP Feb 9 19:55:39.276576 systemd-networkd[1063]: lo: Gained carrier Feb 9 19:55:39.276868 systemd-networkd[1063]: Enumeration completed Feb 9 19:55:39.276928 systemd-networkd[1063]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Feb 9 19:55:39.277049 systemd[1]: Started systemd-networkd.service. 
Feb 9 19:55:39.279960 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 9 19:55:39.280094 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 9 19:55:39.280983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Feb 9 19:55:39.281440 systemd-networkd[1063]: ens192: Link UP Feb 9 19:55:39.281519 systemd-networkd[1063]: ens192: Gained carrier Feb 9 19:55:39.287430 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Feb 9 19:55:39.296439 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Feb 9 19:55:39.298438 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Feb 9 19:55:39.302033 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:55:39.302060 kernel: Guest personality initialized and is active Feb 9 19:55:39.303789 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 9 19:55:39.303825 kernel: Initialized host personality Feb 9 19:55:39.304420 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:55:39.311627 (udev-worker)[1070]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Feb 9 19:55:39.313311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:55:39.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.329632 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:55:39.330499 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:55:39.367077 lvm[1088]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:55:39.386941 systemd[1]: Finished lvm2-activation-early.service. 
Feb 9 19:55:39.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.387121 systemd[1]: Reached target cryptsetup.target. Feb 9 19:55:39.388025 systemd[1]: Starting lvm2-activation.service... Feb 9 19:55:39.390269 lvm[1089]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:55:39.409934 systemd[1]: Finished lvm2-activation.service. Feb 9 19:55:39.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.410082 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:55:39.410741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:55:39.450830 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:55:39.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:39.451025 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:55:39.451043 systemd[1]: Reached target local-fs.target. Feb 9 19:55:39.451130 systemd[1]: Reached target machines.target. Feb 9 19:55:39.452051 systemd[1]: Starting ldconfig.service... Feb 9 19:55:39.463652 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:55:39.463682 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 9 19:55:39.464511 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:55:39.465279 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:55:39.465445 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:55:39.465470 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:55:39.466086 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:55:39.477284 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:55:39.481310 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1093 (bootctl) Feb 9 19:55:39.482090 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:55:39.485811 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:55:39.492750 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:55:40.375030 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:55:40.375470 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:55:40.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:40.382735 systemd-fsck[1099]: fsck.fat 4.2 (2021-01-31) Feb 9 19:55:40.382735 systemd-fsck[1099]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:55:40.384303 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:55:40.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:55:40.385381 systemd[1]: Mounting boot.mount... Feb 9 19:55:40.394796 systemd[1]: Mounted boot.mount. Feb 9 19:55:40.408107 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:55:40.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:40.443655 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:55:40.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:40.444773 systemd[1]: Starting audit-rules.service... Feb 9 19:55:40.445687 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:55:40.445000 audit: BPF prog-id=30 op=LOAD Feb 9 19:55:40.446588 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:55:40.447000 audit: BPF prog-id=31 op=LOAD Feb 9 19:55:40.448304 systemd[1]: Starting systemd-resolved.service... Feb 9 19:55:40.449659 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:55:40.451234 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:55:40.454000 audit[1108]: SYSTEM_BOOT pid=1108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:55:40.456923 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:55:40.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:40.459178 systemd[1]: Finished clean-ca-certificates.service. 
Feb 9 19:55:40.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:40.459311 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:55:40.486215 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:55:40.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:55:40.497000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:55:40.497000 audit[1122]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0cf98690 a2=420 a3=0 items=0 ppid=1102 pid=1122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:55:40.497000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:55:40.499992 augenrules[1122]: No rules Feb 9 19:55:40.499808 systemd[1]: Finished audit-rules.service. Feb 9 19:55:40.502515 systemd-resolved[1106]: Positive Trust Anchors: Feb 9 19:55:40.502523 systemd-resolved[1106]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:55:40.502543 systemd-resolved[1106]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:55:40.509004 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:55:40.509184 systemd[1]: Reached target time-set.target. Feb 9 19:55:40.525557 systemd-resolved[1106]: Defaulting to hostname 'linux'. Feb 9 19:55:40.527104 systemd[1]: Started systemd-resolved.service. Feb 9 19:55:40.527255 systemd[1]: Reached target network.target. Feb 9 19:55:40.527342 systemd[1]: Reached target nss-lookup.target. Feb 9 19:56:25.043389 systemd-resolved[1106]: Clock change detected. Flushing caches. Feb 9 19:56:25.043444 systemd-timesyncd[1107]: Contacted time server 216.240.36.24:123 (0.flatcar.pool.ntp.org). Feb 9 19:56:25.043923 systemd-timesyncd[1107]: Initial clock synchronization to Fri 2024-02-09 19:56:25.043335 UTC. Feb 9 19:56:25.195127 ldconfig[1092]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:56:25.212643 systemd[1]: Finished ldconfig.service. Feb 9 19:56:25.213643 systemd[1]: Starting systemd-update-done.service... Feb 9 19:56:25.217417 systemd[1]: Finished systemd-update-done.service. Feb 9 19:56:25.217565 systemd[1]: Reached target sysinit.target. Feb 9 19:56:25.217735 systemd[1]: Started motdgen.path. Feb 9 19:56:25.217834 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:56:25.218011 systemd[1]: Started logrotate.timer. Feb 9 19:56:25.218134 systemd[1]: Started mdadm.timer. 
Feb 9 19:56:25.218216 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:56:25.218304 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:56:25.218320 systemd[1]: Reached target paths.target. Feb 9 19:56:25.218399 systemd[1]: Reached target timers.target. Feb 9 19:56:25.218623 systemd[1]: Listening on dbus.socket. Feb 9 19:56:25.219380 systemd[1]: Starting docker.socket... Feb 9 19:56:25.221305 systemd[1]: Listening on sshd.socket. Feb 9 19:56:25.221447 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:56:25.221685 systemd[1]: Listening on docker.socket. Feb 9 19:56:25.221809 systemd[1]: Reached target sockets.target. Feb 9 19:56:25.221897 systemd[1]: Reached target basic.target. Feb 9 19:56:25.222004 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:56:25.222021 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:56:25.222708 systemd[1]: Starting containerd.service... Feb 9 19:56:25.223424 systemd[1]: Starting dbus.service... Feb 9 19:56:25.224139 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:56:25.225910 systemd[1]: Starting extend-filesystems.service... Feb 9 19:56:25.226044 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:56:25.227829 systemd[1]: Starting motdgen.service... Feb 9 19:56:25.228849 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:56:25.229974 systemd[1]: Starting prepare-critools.service... Feb 9 19:56:25.230184 jq[1133]: false Feb 9 19:56:25.231448 systemd[1]: Starting prepare-helm.service... 
Feb 9 19:56:25.232971 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:56:25.233878 systemd[1]: Starting sshd-keygen.service... Feb 9 19:56:25.237706 systemd[1]: Starting systemd-logind.service... Feb 9 19:56:25.237826 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:56:25.237855 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:56:25.238293 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:56:25.239196 systemd[1]: Starting update-engine.service... Feb 9 19:56:25.240219 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:56:25.241427 systemd[1]: Starting vmtoolsd.service... Feb 9 19:56:25.243185 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:56:25.243281 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:56:25.252286 jq[1150]: true Feb 9 19:56:25.247357 systemd[1]: Started vmtoolsd.service. Feb 9 19:56:25.261099 tar[1153]: ./ Feb 9 19:56:25.261099 tar[1153]: ./loopback Feb 9 19:56:25.261280 tar[1156]: linux-amd64/helm Feb 9 19:56:25.257741 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:56:25.257840 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 19:56:25.264194 tar[1155]: crictl Feb 9 19:56:25.269941 jq[1157]: true Feb 9 19:56:25.285023 extend-filesystems[1134]: Found sda Feb 9 19:56:25.285671 extend-filesystems[1134]: Found sda1 Feb 9 19:56:25.285671 extend-filesystems[1134]: Found sda2 Feb 9 19:56:25.285671 extend-filesystems[1134]: Found sda3 Feb 9 19:56:25.285671 extend-filesystems[1134]: Found usr Feb 9 19:56:25.285671 extend-filesystems[1134]: Found sda4 Feb 9 19:56:25.285671 extend-filesystems[1134]: Found sda6 Feb 9 19:56:25.285671 extend-filesystems[1134]: Found sda7 Feb 9 19:56:25.285671 extend-filesystems[1134]: Found sda9 Feb 9 19:56:25.285671 extend-filesystems[1134]: Checking size of /dev/sda9 Feb 9 19:56:25.287431 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:56:25.287540 systemd[1]: Finished motdgen.service. Feb 9 19:56:25.336708 extend-filesystems[1134]: Old size kept for /dev/sda9 Feb 9 19:56:25.336977 bash[1186]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:56:25.337108 extend-filesystems[1134]: Found sr0 Feb 9 19:56:25.337400 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:56:25.337638 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:56:25.337745 systemd[1]: Finished extend-filesystems.service. Feb 9 19:56:25.342714 tar[1153]: ./bandwidth Feb 9 19:56:25.344882 dbus-daemon[1132]: [system] SELinux support is enabled Feb 9 19:56:25.344977 systemd[1]: Started dbus.service. Feb 9 19:56:25.346223 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:56:25.346239 systemd[1]: Reached target system-config.target. Feb 9 19:56:25.346353 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:56:25.346362 systemd[1]: Reached target user-config.target. 
Feb 9 19:56:25.350668 kernel: NET: Registered PF_VSOCK protocol family Feb 9 19:56:25.361997 env[1162]: time="2024-02-09T19:56:25.361841930Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:56:25.365071 update_engine[1149]: I0209 19:56:25.363899 1149 main.cc:92] Flatcar Update Engine starting Feb 9 19:56:25.367542 systemd[1]: Started update-engine.service. Feb 9 19:56:25.367937 update_engine[1149]: I0209 19:56:25.367877 1149 update_check_scheduler.cc:74] Next update check in 10m53s Feb 9 19:56:25.368967 systemd[1]: Started locksmithd.service. Feb 9 19:56:25.370370 systemd-logind[1145]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:56:25.370388 systemd-logind[1145]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:56:25.372231 systemd-logind[1145]: New seat seat0. Feb 9 19:56:25.378592 systemd[1]: Started systemd-logind.service. Feb 9 19:56:25.462851 tar[1153]: ./ptp Feb 9 19:56:25.470779 env[1162]: time="2024-02-09T19:56:25.470745716Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:56:25.470860 env[1162]: time="2024-02-09T19:56:25.470840894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:56:25.471865 env[1162]: time="2024-02-09T19:56:25.471553296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:56:25.471865 env[1162]: time="2024-02-09T19:56:25.471861294Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:56:25.471986 env[1162]: time="2024-02-09T19:56:25.471971681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:56:25.471986 env[1162]: time="2024-02-09T19:56:25.471983811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:56:25.472036 env[1162]: time="2024-02-09T19:56:25.471991814Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:56:25.472036 env[1162]: time="2024-02-09T19:56:25.471997228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:56:25.472070 env[1162]: time="2024-02-09T19:56:25.472039746Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:56:25.473300 env[1162]: time="2024-02-09T19:56:25.473286463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:56:25.473379 env[1162]: time="2024-02-09T19:56:25.473363670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:56:25.473379 env[1162]: time="2024-02-09T19:56:25.473376049Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 9 19:56:25.473429 env[1162]: time="2024-02-09T19:56:25.473408098Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:56:25.473429 env[1162]: time="2024-02-09T19:56:25.473415503Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:56:25.477711 env[1162]: time="2024-02-09T19:56:25.477692562Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:56:25.477787 env[1162]: time="2024-02-09T19:56:25.477712171Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:56:25.477787 env[1162]: time="2024-02-09T19:56:25.477722630Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:56:25.477787 env[1162]: time="2024-02-09T19:56:25.477743610Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.477787 env[1162]: time="2024-02-09T19:56:25.477758448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.477787 env[1162]: time="2024-02-09T19:56:25.477770120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.477787 env[1162]: time="2024-02-09T19:56:25.477779983Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.477916 env[1162]: time="2024-02-09T19:56:25.477790544Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.477916 env[1162]: time="2024-02-09T19:56:25.477801027Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Feb 9 19:56:25.477916 env[1162]: time="2024-02-09T19:56:25.477812171Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.477916 env[1162]: time="2024-02-09T19:56:25.477820084Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.477916 env[1162]: time="2024-02-09T19:56:25.477827756Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:56:25.477916 env[1162]: time="2024-02-09T19:56:25.477885118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:56:25.478029 env[1162]: time="2024-02-09T19:56:25.477934841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:56:25.478083 env[1162]: time="2024-02-09T19:56:25.478071073Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:56:25.478113 env[1162]: time="2024-02-09T19:56:25.478088466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478113 env[1162]: time="2024-02-09T19:56:25.478101110Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:56:25.478151 env[1162]: time="2024-02-09T19:56:25.478132347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478151 env[1162]: time="2024-02-09T19:56:25.478140526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478151 env[1162]: time="2024-02-09T19:56:25.478147189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 9 19:56:25.478197 env[1162]: time="2024-02-09T19:56:25.478153445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478197 env[1162]: time="2024-02-09T19:56:25.478160478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478197 env[1162]: time="2024-02-09T19:56:25.478166799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478197 env[1162]: time="2024-02-09T19:56:25.478174916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478197 env[1162]: time="2024-02-09T19:56:25.478181259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478197 env[1162]: time="2024-02-09T19:56:25.478189089Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:56:25.478293 env[1162]: time="2024-02-09T19:56:25.478253053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478293 env[1162]: time="2024-02-09T19:56:25.478262445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478293 env[1162]: time="2024-02-09T19:56:25.478269269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478293 env[1162]: time="2024-02-09T19:56:25.478275945Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:56:25.478293 env[1162]: time="2024-02-09T19:56:25.478284669Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:56:25.478293 env[1162]: time="2024-02-09T19:56:25.478290981Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:56:25.478387 env[1162]: time="2024-02-09T19:56:25.478301111Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:56:25.478387 env[1162]: time="2024-02-09T19:56:25.478322284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:56:25.478472 env[1162]: time="2024-02-09T19:56:25.478441017Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:56:25.480859 env[1162]: time="2024-02-09T19:56:25.478475947Z" level=info msg="Connect containerd service" Feb 9 19:56:25.480859 env[1162]: time="2024-02-09T19:56:25.478496943Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:56:25.481056 env[1162]: time="2024-02-09T19:56:25.481041355Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:56:25.481217 env[1162]: time="2024-02-09T19:56:25.481204653Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:56:25.481245 env[1162]: time="2024-02-09T19:56:25.481229141Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:56:25.481301 systemd[1]: Started containerd.service. 
Feb 9 19:56:25.481793 env[1162]: time="2024-02-09T19:56:25.481779353Z" level=info msg="containerd successfully booted in 0.120427s" Feb 9 19:56:25.513601 env[1162]: time="2024-02-09T19:56:25.513179058Z" level=info msg="Start subscribing containerd event" Feb 9 19:56:25.513601 env[1162]: time="2024-02-09T19:56:25.513231795Z" level=info msg="Start recovering state" Feb 9 19:56:25.513601 env[1162]: time="2024-02-09T19:56:25.513285784Z" level=info msg="Start event monitor" Feb 9 19:56:25.513601 env[1162]: time="2024-02-09T19:56:25.513298232Z" level=info msg="Start snapshots syncer" Feb 9 19:56:25.513601 env[1162]: time="2024-02-09T19:56:25.513307409Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:56:25.513601 env[1162]: time="2024-02-09T19:56:25.513313240Z" level=info msg="Start streaming server" Feb 9 19:56:25.537194 tar[1153]: ./vlan Feb 9 19:56:25.584305 tar[1153]: ./host-device Feb 9 19:56:25.636334 tar[1153]: ./tuning Feb 9 19:56:25.665878 systemd-networkd[1063]: ens192: Gained IPv6LL Feb 9 19:56:25.676457 tar[1153]: ./vrf Feb 9 19:56:25.723620 tar[1153]: ./sbr Feb 9 19:56:25.762579 tar[1153]: ./tap Feb 9 19:56:25.815684 tar[1153]: ./dhcp Feb 9 19:56:25.850171 tar[1156]: linux-amd64/LICENSE Feb 9 19:56:25.850258 tar[1156]: linux-amd64/README.md Feb 9 19:56:25.854075 systemd[1]: Finished prepare-helm.service. Feb 9 19:56:25.895677 tar[1153]: ./static Feb 9 19:56:25.916543 tar[1153]: ./firewall Feb 9 19:56:25.944395 tar[1153]: ./macvlan Feb 9 19:56:25.969801 tar[1153]: ./dummy Feb 9 19:56:25.994698 tar[1153]: ./bridge Feb 9 19:56:26.028745 tar[1153]: ./ipvlan Feb 9 19:56:26.029277 systemd[1]: Finished prepare-critools.service. Feb 9 19:56:26.054868 tar[1153]: ./portmap Feb 9 19:56:26.078343 tar[1153]: ./host-local Feb 9 19:56:26.124736 sshd_keygen[1166]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:56:26.127365 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:56:26.141071 systemd[1]: Finished sshd-keygen.service. 
Feb 9 19:56:26.142204 systemd[1]: Starting issuegen.service... Feb 9 19:56:26.146411 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:56:26.146505 systemd[1]: Finished issuegen.service. Feb 9 19:56:26.147635 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:56:26.153599 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:56:26.154629 systemd[1]: Started getty@tty1.service. Feb 9 19:56:26.155402 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:56:26.155589 systemd[1]: Reached target getty.target. Feb 9 19:56:26.155750 systemd[1]: Reached target multi-user.target. Feb 9 19:56:26.156628 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:56:26.161189 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:56:26.161322 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:56:26.161482 systemd[1]: Startup finished in 944ms (kernel) + 6.452s (initrd) + 5.644s (userspace) = 13.041s. Feb 9 19:56:26.215115 login[1266]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:56:26.216599 locksmithd[1197]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:56:26.217522 login[1267]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:56:26.223640 systemd[1]: Created slice user-500.slice. Feb 9 19:56:26.224452 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:56:26.230268 systemd-logind[1145]: New session 1 of user core. Feb 9 19:56:26.232543 systemd-logind[1145]: New session 2 of user core. Feb 9 19:56:26.234766 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:56:26.235826 systemd[1]: Starting user@500.service... Feb 9 19:56:26.238557 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:56:26.285490 systemd[1271]: Queued start job for default target default.target. 
Feb 9 19:56:26.286259 systemd[1271]: Reached target paths.target. Feb 9 19:56:26.286354 systemd[1271]: Reached target sockets.target. Feb 9 19:56:26.286411 systemd[1271]: Reached target timers.target. Feb 9 19:56:26.286470 systemd[1271]: Reached target basic.target. Feb 9 19:56:26.286547 systemd[1271]: Reached target default.target. Feb 9 19:56:26.286591 systemd[1]: Started user@500.service. Feb 9 19:56:26.286854 systemd[1271]: Startup finished in 43ms. Feb 9 19:56:26.287324 systemd[1]: Started session-1.scope. Feb 9 19:56:26.287831 systemd[1]: Started session-2.scope. Feb 9 19:57:05.405600 systemd[1]: Created slice system-sshd.slice. Feb 9 19:57:05.406258 systemd[1]: Started sshd@0-139.178.70.107:22-139.178.89.65:42306.service. Feb 9 19:57:05.508894 sshd[1293]: Accepted publickey for core from 139.178.89.65 port 42306 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:57:05.509940 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:57:05.512977 systemd[1]: Started session-3.scope. Feb 9 19:57:05.513316 systemd-logind[1145]: New session 3 of user core. Feb 9 19:57:05.560557 systemd[1]: Started sshd@1-139.178.70.107:22-139.178.89.65:42310.service. Feb 9 19:57:05.597599 sshd[1298]: Accepted publickey for core from 139.178.89.65 port 42310 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:57:05.598374 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:57:05.601171 systemd-logind[1145]: New session 4 of user core. Feb 9 19:57:05.601624 systemd[1]: Started session-4.scope. Feb 9 19:57:05.653480 systemd[1]: Started sshd@2-139.178.70.107:22-139.178.89.65:42312.service. Feb 9 19:57:05.653853 sshd[1298]: pam_unix(sshd:session): session closed for user core Feb 9 19:57:05.655319 systemd[1]: sshd@1-139.178.70.107:22-139.178.89.65:42310.service: Deactivated successfully. Feb 9 19:57:05.655746 systemd[1]: session-4.scope: Deactivated successfully. 
Feb 9 19:57:05.656221 systemd-logind[1145]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:57:05.657058 systemd-logind[1145]: Removed session 4. Feb 9 19:57:05.681809 sshd[1303]: Accepted publickey for core from 139.178.89.65 port 42312 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:57:05.682563 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:57:05.685168 systemd-logind[1145]: New session 5 of user core. Feb 9 19:57:05.685710 systemd[1]: Started session-5.scope. Feb 9 19:57:05.733382 sshd[1303]: pam_unix(sshd:session): session closed for user core Feb 9 19:57:05.736259 systemd[1]: sshd@2-139.178.70.107:22-139.178.89.65:42312.service: Deactivated successfully. Feb 9 19:57:05.736744 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:57:05.737096 systemd-logind[1145]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:57:05.737787 systemd[1]: Started sshd@3-139.178.70.107:22-139.178.89.65:42314.service. Feb 9 19:57:05.738359 systemd-logind[1145]: Removed session 5. Feb 9 19:57:05.764322 sshd[1310]: Accepted publickey for core from 139.178.89.65 port 42314 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:57:05.765066 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:57:05.767514 systemd-logind[1145]: New session 6 of user core. Feb 9 19:57:05.767974 systemd[1]: Started session-6.scope. Feb 9 19:57:05.818229 sshd[1310]: pam_unix(sshd:session): session closed for user core Feb 9 19:57:05.820500 systemd[1]: sshd@3-139.178.70.107:22-139.178.89.65:42314.service: Deactivated successfully. Feb 9 19:57:05.820900 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:57:05.821261 systemd-logind[1145]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:57:05.822025 systemd[1]: Started sshd@4-139.178.70.107:22-139.178.89.65:42316.service. 
Feb 9 19:57:05.822615 systemd-logind[1145]: Removed session 6. Feb 9 19:57:05.848586 sshd[1316]: Accepted publickey for core from 139.178.89.65 port 42316 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:57:05.849364 sshd[1316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:57:05.851685 systemd-logind[1145]: New session 7 of user core. Feb 9 19:57:05.852254 systemd[1]: Started session-7.scope. Feb 9 19:57:05.939762 sudo[1319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:57:05.939943 sudo[1319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:57:06.711406 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:57:06.721337 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:57:06.721543 systemd[1]: Reached target network-online.target. Feb 9 19:57:06.722492 systemd[1]: Starting docker.service... Feb 9 19:57:06.744389 env[1335]: time="2024-02-09T19:57:06.744341378Z" level=info msg="Starting up" Feb 9 19:57:06.745028 env[1335]: time="2024-02-09T19:57:06.745016657Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:57:06.745117 env[1335]: time="2024-02-09T19:57:06.745108462Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:57:06.745173 env[1335]: time="2024-02-09T19:57:06.745162259Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:57:06.745218 env[1335]: time="2024-02-09T19:57:06.745208936Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:57:06.746138 env[1335]: time="2024-02-09T19:57:06.746121307Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:57:06.746138 env[1335]: time="2024-02-09T19:57:06.746132616Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 
19:57:06.746195 env[1335]: time="2024-02-09T19:57:06.746140186Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:57:06.746195 env[1335]: time="2024-02-09T19:57:06.746145822Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:57:06.760532 env[1335]: time="2024-02-09T19:57:06.760511073Z" level=info msg="Loading containers: start." Feb 9 19:57:06.841677 kernel: Initializing XFRM netlink socket Feb 9 19:57:06.871377 env[1335]: time="2024-02-09T19:57:06.871350861Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:57:06.917262 systemd-networkd[1063]: docker0: Link UP Feb 9 19:57:06.921606 env[1335]: time="2024-02-09T19:57:06.921588946Z" level=info msg="Loading containers: done." Feb 9 19:57:06.927117 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3870460023-merged.mount: Deactivated successfully. Feb 9 19:57:06.929933 env[1335]: time="2024-02-09T19:57:06.929910054Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:57:06.930035 env[1335]: time="2024-02-09T19:57:06.930021202Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:57:06.930090 env[1335]: time="2024-02-09T19:57:06.930077365Z" level=info msg="Daemon has completed initialization" Feb 9 19:57:06.935866 systemd[1]: Started docker.service. Feb 9 19:57:06.940006 env[1335]: time="2024-02-09T19:57:06.939978086Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:57:06.950222 systemd[1]: Reloading. 
Feb 9 19:57:07.015881 /usr/lib/systemd/system-generators/torcx-generator[1471]: time="2024-02-09T19:57:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:57:07.016500 /usr/lib/systemd/system-generators/torcx-generator[1471]: time="2024-02-09T19:57:07Z" level=info msg="torcx already run" Feb 9 19:57:07.040085 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:57:07.040098 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:57:07.052712 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:57:07.101541 systemd[1]: Started kubelet.service. Feb 9 19:57:07.150975 kubelet[1530]: E0209 19:57:07.150938 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:57:07.152161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:57:07.152233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:57:07.603436 env[1162]: time="2024-02-09T19:57:07.603400551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 19:57:08.215914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477345522.mount: Deactivated successfully. 
Feb 9 19:57:09.669640 env[1162]: time="2024-02-09T19:57:09.669602414Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:09.703066 env[1162]: time="2024-02-09T19:57:09.703037535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:09.735636 env[1162]: time="2024-02-09T19:57:09.735610467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:09.748911 env[1162]: time="2024-02-09T19:57:09.748881320Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:09.749412 env[1162]: time="2024-02-09T19:57:09.749394114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 9 19:57:09.756043 env[1162]: time="2024-02-09T19:57:09.756012439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 19:57:10.124018 update_engine[1149]: I0209 19:57:10.123764 1149 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:57:11.829306 env[1162]: time="2024-02-09T19:57:11.829269878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:11.840743 env[1162]: time="2024-02-09T19:57:11.840700828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:11.847425 env[1162]: time="2024-02-09T19:57:11.847395506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:11.858557 env[1162]: time="2024-02-09T19:57:11.858527816Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:11.859111 env[1162]: time="2024-02-09T19:57:11.859088335Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 9 19:57:11.865010 env[1162]: time="2024-02-09T19:57:11.864984784Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 19:57:14.381098 env[1162]: time="2024-02-09T19:57:14.381061466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:14.383489 env[1162]: time="2024-02-09T19:57:14.383470879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:57:14.384564 env[1162]: time="2024-02-09T19:57:14.384547615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:14.385602 env[1162]: time="2024-02-09T19:57:14.385585439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:14.386110 env[1162]: time="2024-02-09T19:57:14.386092791Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 9 19:57:14.393202 env[1162]: time="2024-02-09T19:57:14.393180048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 19:57:15.979951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2385389956.mount: Deactivated successfully. 
Feb 9 19:57:16.578265 env[1162]: time="2024-02-09T19:57:16.578228228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:16.583585 env[1162]: time="2024-02-09T19:57:16.583560545Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:16.589465 env[1162]: time="2024-02-09T19:57:16.589440616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:16.599460 env[1162]: time="2024-02-09T19:57:16.599440774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:16.599654 env[1162]: time="2024-02-09T19:57:16.599636068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 19:57:16.605009 env[1162]: time="2024-02-09T19:57:16.604984199Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:57:17.278185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:57:17.278272 systemd[1]: Stopped kubelet.service. Feb 9 19:57:17.279365 systemd[1]: Started kubelet.service. Feb 9 19:57:17.286338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771851659.mount: Deactivated successfully. 
Feb 9 19:57:17.312675 env[1162]: time="2024-02-09T19:57:17.312331360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:17.315269 kubelet[1592]: E0209 19:57:17.315241 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:57:17.317354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:57:17.317428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:57:17.328462 env[1162]: time="2024-02-09T19:57:17.327721489Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:17.337130 env[1162]: time="2024-02-09T19:57:17.336424764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:17.343697 env[1162]: time="2024-02-09T19:57:17.343682229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:17.344033 env[1162]: time="2024-02-09T19:57:17.344020604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:57:17.349617 env[1162]: time="2024-02-09T19:57:17.349597807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 19:57:18.253135 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040296271.mount: Deactivated successfully. Feb 9 19:57:22.997469 env[1162]: time="2024-02-09T19:57:22.997437445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:23.007599 env[1162]: time="2024-02-09T19:57:23.007581962Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:23.013407 env[1162]: time="2024-02-09T19:57:23.013390033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:23.025350 env[1162]: time="2024-02-09T19:57:23.025320817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:23.026004 env[1162]: time="2024-02-09T19:57:23.025978932Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 9 19:57:23.037388 env[1162]: time="2024-02-09T19:57:23.037359104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 19:57:23.562406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1507368555.mount: Deactivated successfully. 
Feb 9 19:57:24.080570 env[1162]: time="2024-02-09T19:57:24.080535521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:24.081700 env[1162]: time="2024-02-09T19:57:24.081685987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:24.082695 env[1162]: time="2024-02-09T19:57:24.082682677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:24.083596 env[1162]: time="2024-02-09T19:57:24.083583490Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:24.083973 env[1162]: time="2024-02-09T19:57:24.083956111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 19:57:26.299231 systemd[1]: Stopped kubelet.service. Feb 9 19:57:26.308275 systemd[1]: Reloading. 
Feb 9 19:57:26.360127 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2024-02-09T19:57:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:57:26.360331 /usr/lib/systemd/system-generators/torcx-generator[1690]: time="2024-02-09T19:57:26Z" level=info msg="torcx already run" Feb 9 19:57:26.409716 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:57:26.409727 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:57:26.422192 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:57:26.470790 systemd[1]: Started kubelet.service. Feb 9 19:57:26.501154 kubelet[1751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:57:26.501377 kubelet[1751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:57:26.501418 kubelet[1751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:57:26.501503 kubelet[1751]: I0209 19:57:26.501481 1751 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:57:26.839116 kubelet[1751]: I0209 19:57:26.839098 1751 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:57:26.839218 kubelet[1751]: I0209 19:57:26.839209 1751 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:57:26.839393 kubelet[1751]: I0209 19:57:26.839385 1751 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:57:26.852182 kubelet[1751]: I0209 19:57:26.852163 1751 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:57:26.852608 kubelet[1751]: E0209 19:57:26.852599 1751 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:26.858464 kubelet[1751]: I0209 19:57:26.858450 1751 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:57:26.858577 kubelet[1751]: I0209 19:57:26.858568 1751 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:57:26.858681 kubelet[1751]: I0209 19:57:26.858669 1751 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:57:26.858748 kubelet[1751]: I0209 19:57:26.858685 1751 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:57:26.858748 kubelet[1751]: I0209 19:57:26.858691 1751 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:57:26.859180 kubelet[1751]: I0209 
19:57:26.859169 1751 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:57:26.860315 kubelet[1751]: I0209 19:57:26.860305 1751 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:57:26.860343 kubelet[1751]: I0209 19:57:26.860317 1751 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:57:26.860343 kubelet[1751]: I0209 19:57:26.860331 1751 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:57:26.860343 kubelet[1751]: I0209 19:57:26.860340 1751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:57:26.862818 kubelet[1751]: I0209 19:57:26.862807 1751 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:57:26.863902 kubelet[1751]: W0209 19:57:26.863890 1751 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:57:26.865598 kubelet[1751]: I0209 19:57:26.865586 1751 server.go:1232] "Started kubelet" Feb 9 19:57:26.865688 kubelet[1751]: W0209 19:57:26.865663 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:26.865717 kubelet[1751]: E0209 19:57:26.865695 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:26.869766 kubelet[1751]: W0209 19:57:26.869743 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: 
connection refused Feb 9 19:57:26.869809 kubelet[1751]: E0209 19:57:26.869771 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:26.869854 kubelet[1751]: I0209 19:57:26.869825 1751 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:57:26.870229 kubelet[1751]: I0209 19:57:26.870217 1751 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:57:26.871385 kubelet[1751]: I0209 19:57:26.871376 1751 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:57:26.871573 kubelet[1751]: I0209 19:57:26.871565 1751 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:57:26.871753 kubelet[1751]: E0209 19:57:26.871705 1751 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b24a114b548ad9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 57, 26, 865570521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 57, 26, 865570521, 
time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://139.178.70.107:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.107:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:57:26.872830 kubelet[1751]: E0209 19:57:26.872821 1751 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:57:26.872887 kubelet[1751]: E0209 19:57:26.872879 1751 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:57:26.873531 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:57:26.873691 kubelet[1751]: I0209 19:57:26.873683 1751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:57:26.876616 kubelet[1751]: E0209 19:57:26.876608 1751 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:57:26.876704 kubelet[1751]: I0209 19:57:26.876696 1751 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:57:26.876800 kubelet[1751]: I0209 19:57:26.876793 1751 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:57:26.876866 kubelet[1751]: I0209 19:57:26.876860 1751 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:57:26.877129 kubelet[1751]: W0209 19:57:26.877111 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 
19:57:26.877185 kubelet[1751]: E0209 19:57:26.877178 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:26.877457 kubelet[1751]: E0209 19:57:26.877449 1751 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="200ms" Feb 9 19:57:26.886584 kubelet[1751]: I0209 19:57:26.886564 1751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:57:26.887489 kubelet[1751]: I0209 19:57:26.887476 1751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 19:57:26.888699 kubelet[1751]: I0209 19:57:26.888690 1751 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:57:26.888756 kubelet[1751]: I0209 19:57:26.888749 1751 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:57:26.888846 kubelet[1751]: E0209 19:57:26.888838 1751 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:57:26.892437 kubelet[1751]: W0209 19:57:26.892415 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:26.892503 kubelet[1751]: E0209 19:57:26.892495 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:26.901393 kubelet[1751]: I0209 19:57:26.901370 1751 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:57:26.901393 kubelet[1751]: I0209 19:57:26.901390 1751 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:57:26.901393 kubelet[1751]: I0209 19:57:26.901398 1751 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:57:26.902121 kubelet[1751]: I0209 19:57:26.902110 1751 policy_none.go:49] "None policy: Start" Feb 9 19:57:26.902444 kubelet[1751]: I0209 19:57:26.902436 1751 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:57:26.902553 kubelet[1751]: I0209 19:57:26.902547 1751 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:57:26.905821 systemd[1]: Created slice kubepods.slice. Feb 9 19:57:26.908702 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:57:26.910832 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 19:57:26.916578 kubelet[1751]: I0209 19:57:26.916567 1751 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:57:26.916789 kubelet[1751]: I0209 19:57:26.916781 1751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:57:26.917147 kubelet[1751]: E0209 19:57:26.917140 1751 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 19:57:26.978216 kubelet[1751]: I0209 19:57:26.978196 1751 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:57:26.978499 kubelet[1751]: E0209 19:57:26.978492 1751 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Feb 9 19:57:26.989626 kubelet[1751]: I0209 19:57:26.989612 1751 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 19:57:26.990617 kubelet[1751]: I0209 19:57:26.990603 1751 topology_manager.go:215] "Topology Admit Handler" podUID="7d07dbebabc2ee53163c377516367b5e" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 19:57:26.991152 kubelet[1751]: I0209 19:57:26.991143 1751 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 19:57:26.993753 systemd[1]: Created slice kubepods-burstable-podd0325d16aab19669b5fea4b6623890e6.slice. Feb 9 19:57:27.009415 systemd[1]: Created slice kubepods-burstable-pod7d07dbebabc2ee53163c377516367b5e.slice. Feb 9 19:57:27.015688 systemd[1]: Created slice kubepods-burstable-pod212dcc5e2f08bec92c239ac5786b7e2b.slice. 
Feb 9 19:57:27.078348 kubelet[1751]: I0209 19:57:27.078319 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d07dbebabc2ee53163c377516367b5e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d07dbebabc2ee53163c377516367b5e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:57:27.078436 kubelet[1751]: I0209 19:57:27.078358 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:27.078436 kubelet[1751]: I0209 19:57:27.078381 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:27.078436 kubelet[1751]: I0209 19:57:27.078402 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:27.078436 kubelet[1751]: I0209 19:57:27.078427 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d07dbebabc2ee53163c377516367b5e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d07dbebabc2ee53163c377516367b5e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 
19:57:27.078516 kubelet[1751]: I0209 19:57:27.078448 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d07dbebabc2ee53163c377516367b5e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7d07dbebabc2ee53163c377516367b5e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:57:27.078516 kubelet[1751]: I0209 19:57:27.078466 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:27.078516 kubelet[1751]: I0209 19:57:27.078485 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:27.078516 kubelet[1751]: I0209 19:57:27.078505 1751 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:57:27.078636 kubelet[1751]: E0209 19:57:27.078624 1751 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="400ms" Feb 9 19:57:27.181223 kubelet[1751]: I0209 19:57:27.179876 1751 
kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:57:27.181223 kubelet[1751]: E0209 19:57:27.180105 1751 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Feb 9 19:57:27.309556 env[1162]: time="2024-02-09T19:57:27.309528397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:27.315808 env[1162]: time="2024-02-09T19:57:27.315447304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7d07dbebabc2ee53163c377516367b5e,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:27.317791 env[1162]: time="2024-02-09T19:57:27.317762735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:27.480150 kubelet[1751]: E0209 19:57:27.479902 1751 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="800ms" Feb 9 19:57:27.581368 kubelet[1751]: I0209 19:57:27.581354 1751 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:57:27.581772 kubelet[1751]: E0209 19:57:27.581763 1751 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Feb 9 19:57:27.831301 kubelet[1751]: W0209 19:57:27.831266 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:27.831454 kubelet[1751]: E0209 19:57:27.831444 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.107:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:27.885750 kubelet[1751]: W0209 19:57:27.885714 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:27.885886 kubelet[1751]: E0209 19:57:27.885878 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:27.978749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2555939496.mount: Deactivated successfully. 
Feb 9 19:57:27.982009 env[1162]: time="2024-02-09T19:57:27.981919047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.983502 env[1162]: time="2024-02-09T19:57:27.983482431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.984253 env[1162]: time="2024-02-09T19:57:27.984154566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.984781 env[1162]: time="2024-02-09T19:57:27.984768856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.987204 env[1162]: time="2024-02-09T19:57:27.987189073Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.996465 env[1162]: time="2024-02-09T19:57:27.996436181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.997092 env[1162]: time="2024-02-09T19:57:27.997072093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.997699 env[1162]: time="2024-02-09T19:57:27.997678692Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.998289 env[1162]: time="2024-02-09T19:57:27.998272957Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.998935 env[1162]: time="2024-02-09T19:57:27.998916775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:27.999536 env[1162]: time="2024-02-09T19:57:27.999518726Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:28.005590 env[1162]: time="2024-02-09T19:57:28.005567039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:28.008615 env[1162]: time="2024-02-09T19:57:28.008011311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:28.008615 env[1162]: time="2024-02-09T19:57:28.008030624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:28.008615 env[1162]: time="2024-02-09T19:57:28.008048664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:28.008615 env[1162]: time="2024-02-09T19:57:28.008131707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16cfd8369358ad98923028ae4cfeb3f6d6d7c5c590b6386cd1ad5ff697ac5a35 pid=1788 runtime=io.containerd.runc.v2 Feb 9 19:57:28.019406 env[1162]: time="2024-02-09T19:57:28.019372561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:28.019526 env[1162]: time="2024-02-09T19:57:28.019511993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:28.019596 env[1162]: time="2024-02-09T19:57:28.019583065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:28.019725 env[1162]: time="2024-02-09T19:57:28.019709947Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/740f450ba61fcd8c1a12cfc0e43a48b36ce1108d3d0d0797a2e63c08209d3ad7 pid=1823 runtime=io.containerd.runc.v2 Feb 9 19:57:28.020427 env[1162]: time="2024-02-09T19:57:28.020394495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:28.020480 env[1162]: time="2024-02-09T19:57:28.020420440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:28.020480 env[1162]: time="2024-02-09T19:57:28.020432310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:28.020546 env[1162]: time="2024-02-09T19:57:28.020518628Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a47c515f0d24863122d702eb1c4e314b5ae8f82639ffde42e75e3040a0886d01 pid=1808 runtime=io.containerd.runc.v2 Feb 9 19:57:28.024291 systemd[1]: Started cri-containerd-16cfd8369358ad98923028ae4cfeb3f6d6d7c5c590b6386cd1ad5ff697ac5a35.scope. Feb 9 19:57:28.051393 systemd[1]: Started cri-containerd-740f450ba61fcd8c1a12cfc0e43a48b36ce1108d3d0d0797a2e63c08209d3ad7.scope. Feb 9 19:57:28.056889 systemd[1]: Started cri-containerd-a47c515f0d24863122d702eb1c4e314b5ae8f82639ffde42e75e3040a0886d01.scope. Feb 9 19:57:28.100837 env[1162]: time="2024-02-09T19:57:28.100760058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7d07dbebabc2ee53163c377516367b5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"16cfd8369358ad98923028ae4cfeb3f6d6d7c5c590b6386cd1ad5ff697ac5a35\"" Feb 9 19:57:28.108548 env[1162]: time="2024-02-09T19:57:28.108475198Z" level=info msg="CreateContainer within sandbox \"16cfd8369358ad98923028ae4cfeb3f6d6d7c5c590b6386cd1ad5ff697ac5a35\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:57:28.109634 env[1162]: time="2024-02-09T19:57:28.109614510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:212dcc5e2f08bec92c239ac5786b7e2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a47c515f0d24863122d702eb1c4e314b5ae8f82639ffde42e75e3040a0886d01\"" Feb 9 19:57:28.109999 kubelet[1751]: W0209 19:57:28.109950 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:28.109999 kubelet[1751]: E0209 
19:57:28.109982 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:28.111152 env[1162]: time="2024-02-09T19:57:28.111136382Z" level=info msg="CreateContainer within sandbox \"a47c515f0d24863122d702eb1c4e314b5ae8f82639ffde42e75e3040a0886d01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:57:28.119621 env[1162]: time="2024-02-09T19:57:28.119592616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d0325d16aab19669b5fea4b6623890e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"740f450ba61fcd8c1a12cfc0e43a48b36ce1108d3d0d0797a2e63c08209d3ad7\"" Feb 9 19:57:28.120915 env[1162]: time="2024-02-09T19:57:28.120897913Z" level=info msg="CreateContainer within sandbox \"740f450ba61fcd8c1a12cfc0e43a48b36ce1108d3d0d0797a2e63c08209d3ad7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:57:28.281000 kubelet[1751]: E0209 19:57:28.280971 1751 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.107:6443: connect: connection refused" interval="1.6s" Feb 9 19:57:28.324839 kubelet[1751]: W0209 19:57:28.324768 1751 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:28.324839 kubelet[1751]: E0209 19:57:28.324819 1751 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://139.178.70.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:28.358244 env[1162]: time="2024-02-09T19:57:28.357770046Z" level=info msg="CreateContainer within sandbox \"740f450ba61fcd8c1a12cfc0e43a48b36ce1108d3d0d0797a2e63c08209d3ad7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"993e95cc2d7143ee9f2715720072525cacb05612ca325b25cb27c90ef82b4530\"" Feb 9 19:57:28.358434 env[1162]: time="2024-02-09T19:57:28.358316916Z" level=info msg="StartContainer for \"993e95cc2d7143ee9f2715720072525cacb05612ca325b25cb27c90ef82b4530\"" Feb 9 19:57:28.358841 env[1162]: time="2024-02-09T19:57:28.358802336Z" level=info msg="CreateContainer within sandbox \"a47c515f0d24863122d702eb1c4e314b5ae8f82639ffde42e75e3040a0886d01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"708fd79c44c43fb323a5546417f9101c59f926e732ba9e2eebb19af7d95739bf\"" Feb 9 19:57:28.359065 env[1162]: time="2024-02-09T19:57:28.359052326Z" level=info msg="StartContainer for \"708fd79c44c43fb323a5546417f9101c59f926e732ba9e2eebb19af7d95739bf\"" Feb 9 19:57:28.359733 env[1162]: time="2024-02-09T19:57:28.359718487Z" level=info msg="CreateContainer within sandbox \"16cfd8369358ad98923028ae4cfeb3f6d6d7c5c590b6386cd1ad5ff697ac5a35\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a16d435ce153e4881f36ed8002d8fc37bf3aaacc08a04988fbeb4f1fa426db3b\"" Feb 9 19:57:28.360028 env[1162]: time="2024-02-09T19:57:28.360008427Z" level=info msg="StartContainer for \"a16d435ce153e4881f36ed8002d8fc37bf3aaacc08a04988fbeb4f1fa426db3b\"" Feb 9 19:57:28.375516 systemd[1]: Started cri-containerd-708fd79c44c43fb323a5546417f9101c59f926e732ba9e2eebb19af7d95739bf.scope. 
Feb 9 19:57:28.385310 kubelet[1751]: I0209 19:57:28.385063 1751 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:57:28.385310 kubelet[1751]: E0209 19:57:28.385297 1751 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://139.178.70.107:6443/api/v1/nodes\": dial tcp 139.178.70.107:6443: connect: connection refused" node="localhost" Feb 9 19:57:28.388471 systemd[1]: Started cri-containerd-a16d435ce153e4881f36ed8002d8fc37bf3aaacc08a04988fbeb4f1fa426db3b.scope. Feb 9 19:57:28.401285 systemd[1]: Started cri-containerd-993e95cc2d7143ee9f2715720072525cacb05612ca325b25cb27c90ef82b4530.scope. Feb 9 19:57:28.419297 env[1162]: time="2024-02-09T19:57:28.419271584Z" level=info msg="StartContainer for \"708fd79c44c43fb323a5546417f9101c59f926e732ba9e2eebb19af7d95739bf\" returns successfully" Feb 9 19:57:28.433158 env[1162]: time="2024-02-09T19:57:28.433132527Z" level=info msg="StartContainer for \"a16d435ce153e4881f36ed8002d8fc37bf3aaacc08a04988fbeb4f1fa426db3b\" returns successfully" Feb 9 19:57:28.449033 env[1162]: time="2024-02-09T19:57:28.449011245Z" level=info msg="StartContainer for \"993e95cc2d7143ee9f2715720072525cacb05612ca325b25cb27c90ef82b4530\" returns successfully" Feb 9 19:57:28.565059 kubelet[1751]: E0209 19:57:28.564975 1751 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b24a114b548ad9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", 
ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 57, 26, 865570521, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 57, 26, 865570521, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://139.178.70.107:6443/api/v1/namespaces/default/events": dial tcp 139.178.70.107:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:57:28.864270 kubelet[1751]: E0209 19:57:28.864237 1751 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.107:6443: connect: connection refused Feb 9 19:57:29.986491 kubelet[1751]: I0209 19:57:29.986239 1751 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:57:30.255614 kubelet[1751]: E0209 19:57:30.255551 1751 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 19:57:30.322918 kubelet[1751]: I0209 19:57:30.322891 1751 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:57:30.865953 kubelet[1751]: I0209 19:57:30.865933 1751 apiserver.go:52] "Watching apiserver" Feb 9 19:57:30.877068 kubelet[1751]: I0209 19:57:30.877040 1751 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:57:33.089409 systemd[1]: Reloading. 
Feb 9 19:57:33.154807 /usr/lib/systemd/system-generators/torcx-generator[2037]: time="2024-02-09T19:57:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:57:33.155689 /usr/lib/systemd/system-generators/torcx-generator[2037]: time="2024-02-09T19:57:33Z" level=info msg="torcx already run" Feb 9 19:57:33.217475 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:57:33.217490 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:57:33.232965 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:57:33.293448 systemd[1]: Stopping kubelet.service... Feb 9 19:57:33.312019 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:57:33.312142 systemd[1]: Stopped kubelet.service. Feb 9 19:57:33.313638 systemd[1]: Started kubelet.service. Feb 9 19:57:33.363913 kubelet[2097]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:57:33.364105 kubelet[2097]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:57:33.364143 kubelet[2097]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:57:33.364227 kubelet[2097]: I0209 19:57:33.364207 2097 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:57:33.367304 kubelet[2097]: I0209 19:57:33.367284 2097 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:57:33.367304 kubelet[2097]: I0209 19:57:33.367295 2097 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:57:33.367572 kubelet[2097]: I0209 19:57:33.367560 2097 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:57:33.368437 kubelet[2097]: I0209 19:57:33.368426 2097 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:57:33.369059 kubelet[2097]: I0209 19:57:33.369049 2097 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:57:33.372032 kubelet[2097]: I0209 19:57:33.372019 2097 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:57:33.372288 kubelet[2097]: I0209 19:57:33.372276 2097 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:57:33.372376 kubelet[2097]: I0209 19:57:33.372365 2097 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:57:33.372440 kubelet[2097]: I0209 19:57:33.372381 2097 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:57:33.372440 kubelet[2097]: I0209 19:57:33.372386 2097 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:57:33.372440 kubelet[2097]: I0209 
19:57:33.372405 2097 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:57:33.372517 kubelet[2097]: I0209 19:57:33.372445 2097 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:57:33.372517 kubelet[2097]: I0209 19:57:33.372454 2097 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:57:33.372517 kubelet[2097]: I0209 19:57:33.372467 2097 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:57:33.372517 kubelet[2097]: I0209 19:57:33.372475 2097 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:57:33.379876 kubelet[2097]: I0209 19:57:33.379578 2097 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:57:33.380447 kubelet[2097]: I0209 19:57:33.380434 2097 server.go:1232] "Started kubelet" Feb 9 19:57:33.381719 kubelet[2097]: I0209 19:57:33.381704 2097 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:57:33.382216 kubelet[2097]: I0209 19:57:33.382204 2097 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:57:33.382786 kubelet[2097]: I0209 19:57:33.382778 2097 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:57:33.385602 kubelet[2097]: I0209 19:57:33.385406 2097 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:57:33.388991 kubelet[2097]: I0209 19:57:33.388976 2097 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:57:33.389268 kubelet[2097]: I0209 19:57:33.389258 2097 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:57:33.392779 kubelet[2097]: I0209 19:57:33.392764 2097 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:57:33.392853 kubelet[2097]: I0209 19:57:33.392845 2097 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:57:33.392914 kubelet[2097]: 
E0209 19:57:33.392904 2097 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:57:33.392945 kubelet[2097]: E0209 19:57:33.392917 2097 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:57:33.395403 kubelet[2097]: I0209 19:57:33.394167 2097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:57:33.395403 kubelet[2097]: I0209 19:57:33.394700 2097 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 19:57:33.395403 kubelet[2097]: I0209 19:57:33.394715 2097 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:57:33.395403 kubelet[2097]: I0209 19:57:33.394727 2097 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:57:33.395403 kubelet[2097]: E0209 19:57:33.394750 2097 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:57:33.424016 kubelet[2097]: I0209 19:57:33.423995 2097 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:57:33.424016 kubelet[2097]: I0209 19:57:33.424008 2097 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:57:33.424016 kubelet[2097]: I0209 19:57:33.424018 2097 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:57:33.424622 kubelet[2097]: I0209 19:57:33.424356 2097 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:57:33.424622 kubelet[2097]: I0209 19:57:33.424384 2097 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 19:57:33.424622 kubelet[2097]: I0209 19:57:33.424388 2097 policy_none.go:49] "None policy: Start" Feb 9 19:57:33.425343 kubelet[2097]: I0209 19:57:33.425334 2097 memory_manager.go:169] "Starting 
memorymanager" policy="None" Feb 9 19:57:33.425400 kubelet[2097]: I0209 19:57:33.425393 2097 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:57:33.425534 kubelet[2097]: I0209 19:57:33.425525 2097 state_mem.go:75] "Updated machine memory state" Feb 9 19:57:33.427984 kubelet[2097]: I0209 19:57:33.427974 2097 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:57:33.428164 kubelet[2097]: I0209 19:57:33.428108 2097 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:57:33.471795 sudo[2126]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:57:33.472122 sudo[2126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:57:33.490457 kubelet[2097]: I0209 19:57:33.490442 2097 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:57:33.494896 kubelet[2097]: I0209 19:57:33.494882 2097 topology_manager.go:215] "Topology Admit Handler" podUID="7d07dbebabc2ee53163c377516367b5e" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 9 19:57:33.495076 kubelet[2097]: I0209 19:57:33.495058 2097 topology_manager.go:215] "Topology Admit Handler" podUID="212dcc5e2f08bec92c239ac5786b7e2b" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 9 19:57:33.495153 kubelet[2097]: I0209 19:57:33.495147 2097 topology_manager.go:215] "Topology Admit Handler" podUID="d0325d16aab19669b5fea4b6623890e6" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 9 19:57:33.495224 kubelet[2097]: I0209 19:57:33.495151 2097 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 19:57:33.495256 kubelet[2097]: I0209 19:57:33.495248 2097 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:57:33.694225 kubelet[2097]: I0209 19:57:33.694159 2097 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d07dbebabc2ee53163c377516367b5e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d07dbebabc2ee53163c377516367b5e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:57:33.694225 kubelet[2097]: I0209 19:57:33.694191 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d07dbebabc2ee53163c377516367b5e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7d07dbebabc2ee53163c377516367b5e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:57:33.694225 kubelet[2097]: I0209 19:57:33.694215 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:33.694363 kubelet[2097]: I0209 19:57:33.694235 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:33.694363 kubelet[2097]: I0209 19:57:33.694247 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:33.694363 kubelet[2097]: I0209 19:57:33.694277 2097 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0325d16aab19669b5fea4b6623890e6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d0325d16aab19669b5fea4b6623890e6\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:57:33.694363 kubelet[2097]: I0209 19:57:33.694296 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d07dbebabc2ee53163c377516367b5e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7d07dbebabc2ee53163c377516367b5e\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:57:33.694363 kubelet[2097]: I0209 19:57:33.694308 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:33.694455 kubelet[2097]: I0209 19:57:33.694321 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/212dcc5e2f08bec92c239ac5786b7e2b-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"212dcc5e2f08bec92c239ac5786b7e2b\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:57:33.908378 sudo[2126]: pam_unix(sudo:session): session closed for user root Feb 9 19:57:34.373828 kubelet[2097]: I0209 19:57:34.373802 2097 apiserver.go:52] "Watching apiserver" Feb 9 19:57:34.393462 kubelet[2097]: I0209 19:57:34.393431 2097 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:57:34.423764 kubelet[2097]: E0209 19:57:34.423736 2097 kubelet.go:1890] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 19:57:34.436896 kubelet[2097]: I0209 19:57:34.436867 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.436836402 podCreationTimestamp="2024-02-09 19:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:34.424430835 +0000 UTC m=+1.101610141" watchObservedRunningTime="2024-02-09 19:57:34.436836402 +0000 UTC m=+1.114015718" Feb 9 19:57:34.447685 kubelet[2097]: I0209 19:57:34.447666 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.447638156 podCreationTimestamp="2024-02-09 19:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:34.447445839 +0000 UTC m=+1.124625139" watchObservedRunningTime="2024-02-09 19:57:34.447638156 +0000 UTC m=+1.124817452" Feb 9 19:57:34.447854 kubelet[2097]: I0209 19:57:34.447845 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.447833784 podCreationTimestamp="2024-02-09 19:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:34.437326473 +0000 UTC m=+1.114505782" watchObservedRunningTime="2024-02-09 19:57:34.447833784 +0000 UTC m=+1.125013083" Feb 9 19:57:35.398356 sudo[1319]: pam_unix(sudo:session): session closed for user root Feb 9 19:57:35.399514 sshd[1316]: pam_unix(sshd:session): session closed for user core Feb 9 19:57:35.401181 systemd[1]: sshd@4-139.178.70.107:22-139.178.89.65:42316.service: Deactivated successfully. 
Feb 9 19:57:35.401644 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:57:35.401754 systemd[1]: session-7.scope: Consumed 3.073s CPU time. Feb 9 19:57:35.402216 systemd-logind[1145]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:57:35.402758 systemd-logind[1145]: Removed session 7. Feb 9 19:57:45.893992 kubelet[2097]: I0209 19:57:45.893954 2097 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:57:45.894552 env[1162]: time="2024-02-09T19:57:45.894498556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:57:45.894798 kubelet[2097]: I0209 19:57:45.894739 2097 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:57:46.841145 kubelet[2097]: I0209 19:57:46.841125 2097 topology_manager.go:215] "Topology Admit Handler" podUID="385c988f-de41-463e-b049-203064020e93" podNamespace="kube-system" podName="kube-proxy-g8gsg" Feb 9 19:57:46.844268 systemd[1]: Created slice kubepods-besteffort-pod385c988f_de41_463e_b049_203064020e93.slice. Feb 9 19:57:46.847151 kubelet[2097]: I0209 19:57:46.847131 2097 topology_manager.go:215] "Topology Admit Handler" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" podNamespace="kube-system" podName="cilium-w26vr" Feb 9 19:57:46.852945 systemd[1]: Created slice kubepods-burstable-pod9b5c29bd_36e6_409a_8fd6_648781eff461.slice. 
Feb 9 19:57:46.864526 kubelet[2097]: I0209 19:57:46.864501 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-lib-modules\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864526 kubelet[2097]: I0209 19:57:46.864528 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-config-path\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864646 kubelet[2097]: I0209 19:57:46.864542 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/385c988f-de41-463e-b049-203064020e93-kube-proxy\") pod \"kube-proxy-g8gsg\" (UID: \"385c988f-de41-463e-b049-203064020e93\") " pod="kube-system/kube-proxy-g8gsg" Feb 9 19:57:46.864646 kubelet[2097]: I0209 19:57:46.864554 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cni-path\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864646 kubelet[2097]: I0209 19:57:46.864566 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b5c29bd-36e6-409a-8fd6-648781eff461-clustermesh-secrets\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864646 kubelet[2097]: I0209 19:57:46.864591 2097 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/385c988f-de41-463e-b049-203064020e93-lib-modules\") pod \"kube-proxy-g8gsg\" (UID: \"385c988f-de41-463e-b049-203064020e93\") " pod="kube-system/kube-proxy-g8gsg" Feb 9 19:57:46.864646 kubelet[2097]: I0209 19:57:46.864606 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-cgroup\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864646 kubelet[2097]: I0209 19:57:46.864620 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-hostproc\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864774 kubelet[2097]: I0209 19:57:46.864630 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-net\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864774 kubelet[2097]: I0209 19:57:46.864654 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-bpf-maps\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864774 kubelet[2097]: I0209 19:57:46.864679 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-etc-cni-netd\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864774 kubelet[2097]: I0209 19:57:46.864695 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-run\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864774 kubelet[2097]: I0209 19:57:46.864707 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-xtables-lock\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864774 kubelet[2097]: I0209 19:57:46.864759 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-kernel\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864927 kubelet[2097]: I0209 19:57:46.864774 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5w5\" (UniqueName: \"kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-kube-api-access-zs5w5\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.864927 kubelet[2097]: I0209 19:57:46.864786 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/385c988f-de41-463e-b049-203064020e93-xtables-lock\") pod \"kube-proxy-g8gsg\" (UID: 
\"385c988f-de41-463e-b049-203064020e93\") " pod="kube-system/kube-proxy-g8gsg" Feb 9 19:57:46.864927 kubelet[2097]: I0209 19:57:46.864797 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p52n7\" (UniqueName: \"kubernetes.io/projected/385c988f-de41-463e-b049-203064020e93-kube-api-access-p52n7\") pod \"kube-proxy-g8gsg\" (UID: \"385c988f-de41-463e-b049-203064020e93\") " pod="kube-system/kube-proxy-g8gsg" Feb 9 19:57:46.864927 kubelet[2097]: I0209 19:57:46.864810 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-hubble-tls\") pod \"cilium-w26vr\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " pod="kube-system/cilium-w26vr" Feb 9 19:57:46.946159 kubelet[2097]: I0209 19:57:46.946140 2097 topology_manager.go:215] "Topology Admit Handler" podUID="5cf05e01-2825-4b29-83ba-e077f22d3aac" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-7bzxh" Feb 9 19:57:46.949344 systemd[1]: Created slice kubepods-besteffort-pod5cf05e01_2825_4b29_83ba_e077f22d3aac.slice. 
Feb 9 19:57:46.965057 kubelet[2097]: I0209 19:57:46.965034 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bcb\" (UniqueName: \"kubernetes.io/projected/5cf05e01-2825-4b29-83ba-e077f22d3aac-kube-api-access-q5bcb\") pod \"cilium-operator-6bc8ccdb58-7bzxh\" (UID: \"5cf05e01-2825-4b29-83ba-e077f22d3aac\") " pod="kube-system/cilium-operator-6bc8ccdb58-7bzxh" Feb 9 19:57:46.965159 kubelet[2097]: I0209 19:57:46.965091 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cf05e01-2825-4b29-83ba-e077f22d3aac-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-7bzxh\" (UID: \"5cf05e01-2825-4b29-83ba-e077f22d3aac\") " pod="kube-system/cilium-operator-6bc8ccdb58-7bzxh" Feb 9 19:57:47.152496 env[1162]: time="2024-02-09T19:57:47.151459395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8gsg,Uid:385c988f-de41-463e-b049-203064020e93,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:47.158624 env[1162]: time="2024-02-09T19:57:47.156885687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w26vr,Uid:9b5c29bd-36e6-409a-8fd6-648781eff461,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:47.164625 env[1162]: time="2024-02-09T19:57:47.164567938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:47.164625 env[1162]: time="2024-02-09T19:57:47.164603256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:47.164781 env[1162]: time="2024-02-09T19:57:47.164755580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:47.164916 env[1162]: time="2024-02-09T19:57:47.164880586Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5513b4d3c7675f6cb07dceb451289a610a191c9663ffad825ba079273a4dcdf1 pid=2178 runtime=io.containerd.runc.v2 Feb 9 19:57:47.167683 env[1162]: time="2024-02-09T19:57:47.167609135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:47.167852 env[1162]: time="2024-02-09T19:57:47.167827149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:47.167926 env[1162]: time="2024-02-09T19:57:47.167912963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:47.172339 env[1162]: time="2024-02-09T19:57:47.168893435Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e pid=2189 runtime=io.containerd.runc.v2 Feb 9 19:57:47.178470 systemd[1]: Started cri-containerd-5513b4d3c7675f6cb07dceb451289a610a191c9663ffad825ba079273a4dcdf1.scope. Feb 9 19:57:47.201424 systemd[1]: Started cri-containerd-1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e.scope. 
Feb 9 19:57:47.209550 env[1162]: time="2024-02-09T19:57:47.208488360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g8gsg,Uid:385c988f-de41-463e-b049-203064020e93,Namespace:kube-system,Attempt:0,} returns sandbox id \"5513b4d3c7675f6cb07dceb451289a610a191c9663ffad825ba079273a4dcdf1\"" Feb 9 19:57:47.212907 env[1162]: time="2024-02-09T19:57:47.212880010Z" level=info msg="CreateContainer within sandbox \"5513b4d3c7675f6cb07dceb451289a610a191c9663ffad825ba079273a4dcdf1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:57:47.222135 env[1162]: time="2024-02-09T19:57:47.222103337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w26vr,Uid:9b5c29bd-36e6-409a-8fd6-648781eff461,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\"" Feb 9 19:57:47.227599 env[1162]: time="2024-02-09T19:57:47.227567876Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:57:47.251013 env[1162]: time="2024-02-09T19:57:47.250984147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7bzxh,Uid:5cf05e01-2825-4b29-83ba-e077f22d3aac,Namespace:kube-system,Attempt:0,}" Feb 9 19:57:47.259448 env[1162]: time="2024-02-09T19:57:47.259419855Z" level=info msg="CreateContainer within sandbox \"5513b4d3c7675f6cb07dceb451289a610a191c9663ffad825ba079273a4dcdf1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6002736226a6d50068a07f6ff3346b1d67d88d8052b9348c9904805265dd343f\"" Feb 9 19:57:47.261158 env[1162]: time="2024-02-09T19:57:47.260274069Z" level=info msg="StartContainer for \"6002736226a6d50068a07f6ff3346b1d67d88d8052b9348c9904805265dd343f\"" Feb 9 19:57:47.267154 env[1162]: time="2024-02-09T19:57:47.266972148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:57:47.267154 env[1162]: time="2024-02-09T19:57:47.266996532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:57:47.267154 env[1162]: time="2024-02-09T19:57:47.267013064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:57:47.267540 env[1162]: time="2024-02-09T19:57:47.267510791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5 pid=2267 runtime=io.containerd.runc.v2 Feb 9 19:57:47.274664 systemd[1]: Started cri-containerd-6002736226a6d50068a07f6ff3346b1d67d88d8052b9348c9904805265dd343f.scope. Feb 9 19:57:47.293175 systemd[1]: Started cri-containerd-07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5.scope. 
Feb 9 19:57:47.323997 env[1162]: time="2024-02-09T19:57:47.323966531Z" level=info msg="StartContainer for \"6002736226a6d50068a07f6ff3346b1d67d88d8052b9348c9904805265dd343f\" returns successfully" Feb 9 19:57:47.334171 env[1162]: time="2024-02-09T19:57:47.334141648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7bzxh,Uid:5cf05e01-2825-4b29-83ba-e077f22d3aac,Namespace:kube-system,Attempt:0,} returns sandbox id \"07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5\"" Feb 9 19:57:47.433962 kubelet[2097]: I0209 19:57:47.433237 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-g8gsg" podStartSLOduration=1.433213894 podCreationTimestamp="2024-02-09 19:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:57:47.433107498 +0000 UTC m=+14.110286804" watchObservedRunningTime="2024-02-09 19:57:47.433213894 +0000 UTC m=+14.110393193" Feb 9 19:57:51.449985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738389627.mount: Deactivated successfully. 
Feb 9 19:57:53.816506 env[1162]: time="2024-02-09T19:57:53.816431786Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:53.955946 env[1162]: time="2024-02-09T19:57:53.955922593Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:53.958752 env[1162]: time="2024-02-09T19:57:53.958613816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:57:53.960702 env[1162]: time="2024-02-09T19:57:53.959078451Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:57:53.961229 env[1162]: time="2024-02-09T19:57:53.961158384Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:57:53.962290 env[1162]: time="2024-02-09T19:57:53.962265509Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:57:53.967679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848145805.mount: Deactivated successfully. Feb 9 19:57:53.970337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701304448.mount: Deactivated successfully. 
Feb 9 19:57:53.983385 env[1162]: time="2024-02-09T19:57:53.983359793Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\"" Feb 9 19:57:53.984835 env[1162]: time="2024-02-09T19:57:53.984797917Z" level=info msg="StartContainer for \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\"" Feb 9 19:57:54.010983 systemd[1]: Started cri-containerd-8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382.scope. Feb 9 19:57:54.031693 env[1162]: time="2024-02-09T19:57:54.031599473Z" level=info msg="StartContainer for \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\" returns successfully" Feb 9 19:57:54.037418 systemd[1]: cri-containerd-8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382.scope: Deactivated successfully. Feb 9 19:57:54.400526 env[1162]: time="2024-02-09T19:57:54.400490804Z" level=info msg="shim disconnected" id=8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382 Feb 9 19:57:54.400753 env[1162]: time="2024-02-09T19:57:54.400738240Z" level=warning msg="cleaning up after shim disconnected" id=8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382 namespace=k8s.io Feb 9 19:57:54.400850 env[1162]: time="2024-02-09T19:57:54.400836567Z" level=info msg="cleaning up dead shim" Feb 9 19:57:54.406333 env[1162]: time="2024-02-09T19:57:54.406313741Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2497 runtime=io.containerd.runc.v2\n" Feb 9 19:57:54.450166 env[1162]: time="2024-02-09T19:57:54.450132208Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:57:54.485024 
env[1162]: time="2024-02-09T19:57:54.484993893Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\"" Feb 9 19:57:54.485498 env[1162]: time="2024-02-09T19:57:54.485482926Z" level=info msg="StartContainer for \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\"" Feb 9 19:57:54.495314 systemd[1]: Started cri-containerd-c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10.scope. Feb 9 19:57:54.514196 env[1162]: time="2024-02-09T19:57:54.514167872Z" level=info msg="StartContainer for \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\" returns successfully" Feb 9 19:57:54.521685 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:57:54.521846 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:57:54.522385 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:57:54.524582 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:57:54.530109 systemd[1]: cri-containerd-c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10.scope: Deactivated successfully. Feb 9 19:57:54.537000 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:57:54.566523 env[1162]: time="2024-02-09T19:57:54.566496004Z" level=info msg="shim disconnected" id=c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10
Feb 9 19:57:54.566702 env[1162]: time="2024-02-09T19:57:54.566690575Z" level=warning msg="cleaning up after shim disconnected" id=c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10 namespace=k8s.io
Feb 9 19:57:54.566764 env[1162]: time="2024-02-09T19:57:54.566754221Z" level=info msg="cleaning up dead shim"
Feb 9 19:57:54.571076 env[1162]: time="2024-02-09T19:57:54.571057056Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2560 runtime=io.containerd.runc.v2\n"
Feb 9 19:57:54.967739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382-rootfs.mount: Deactivated successfully.
Feb 9 19:57:55.262018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995646259.mount: Deactivated successfully.
Feb 9 19:57:55.448233 env[1162]: time="2024-02-09T19:57:55.443103333Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:57:55.455014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071879950.mount: Deactivated successfully.
Feb 9 19:57:55.483571 env[1162]: time="2024-02-09T19:57:55.483544254Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\""
Feb 9 19:57:55.485078 env[1162]: time="2024-02-09T19:57:55.484161954Z" level=info msg="StartContainer for \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\""
Feb 9 19:57:55.505190 systemd[1]: Started cri-containerd-4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509.scope.
Feb 9 19:57:55.554780 env[1162]: time="2024-02-09T19:57:55.554753324Z" level=info msg="StartContainer for \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\" returns successfully"
Feb 9 19:57:55.573485 systemd[1]: cri-containerd-4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509.scope: Deactivated successfully.
Feb 9 19:57:55.962945 env[1162]: time="2024-02-09T19:57:55.962870523Z" level=info msg="shim disconnected" id=4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509
Feb 9 19:57:55.963098 env[1162]: time="2024-02-09T19:57:55.963087478Z" level=warning msg="cleaning up after shim disconnected" id=4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509 namespace=k8s.io
Feb 9 19:57:55.963146 env[1162]: time="2024-02-09T19:57:55.963136251Z" level=info msg="cleaning up dead shim"
Feb 9 19:57:55.964883 env[1162]: time="2024-02-09T19:57:55.964862494Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:55.965101 env[1162]: time="2024-02-09T19:57:55.965077343Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 19:57:55.965473 env[1162]: time="2024-02-09T19:57:55.965458354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:55.966696 env[1162]: time="2024-02-09T19:57:55.966680629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:57:55.968501 env[1162]: time="2024-02-09T19:57:55.968028426Z" level=info msg="CreateContainer within sandbox \"07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 19:57:55.975125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4087762218.mount: Deactivated successfully.
Feb 9 19:57:55.977513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090818959.mount: Deactivated successfully.
Feb 9 19:57:55.982614 env[1162]: time="2024-02-09T19:57:55.982586997Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2617 runtime=io.containerd.runc.v2\n"
Feb 9 19:57:55.984499 env[1162]: time="2024-02-09T19:57:55.984478672Z" level=info msg="CreateContainer within sandbox \"07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\""
Feb 9 19:57:55.985823 env[1162]: time="2024-02-09T19:57:55.985796290Z" level=info msg="StartContainer for \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\""
Feb 9 19:57:55.996699 systemd[1]: Started cri-containerd-193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b.scope.
Feb 9 19:57:56.020913 env[1162]: time="2024-02-09T19:57:56.020881515Z" level=info msg="StartContainer for \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\" returns successfully"
Feb 9 19:57:56.446140 env[1162]: time="2024-02-09T19:57:56.446091816Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:57:56.453284 kubelet[2097]: I0209 19:57:56.453260 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-7bzxh" podStartSLOduration=1.8209970370000002 podCreationTimestamp="2024-02-09 19:57:46 +0000 UTC" firstStartedPulling="2024-02-09 19:57:47.33484607 +0000 UTC m=+14.012025362" lastFinishedPulling="2024-02-09 19:57:55.967082454 +0000 UTC m=+22.644261749" observedRunningTime="2024-02-09 19:57:56.452784895 +0000 UTC m=+23.129964195" watchObservedRunningTime="2024-02-09 19:57:56.453233424 +0000 UTC m=+23.130412724"
Feb 9 19:57:56.474774 env[1162]: time="2024-02-09T19:57:56.474741563Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\""
Feb 9 19:57:56.475376 env[1162]: time="2024-02-09T19:57:56.475346147Z" level=info msg="StartContainer for \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\""
Feb 9 19:57:56.486259 systemd[1]: Started cri-containerd-3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786.scope.
Feb 9 19:57:56.525524 env[1162]: time="2024-02-09T19:57:56.525494522Z" level=info msg="StartContainer for \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\" returns successfully"
Feb 9 19:57:56.527957 systemd[1]: cri-containerd-3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786.scope: Deactivated successfully.
Feb 9 19:57:56.570351 env[1162]: time="2024-02-09T19:57:56.570313037Z" level=info msg="shim disconnected" id=3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786
Feb 9 19:57:56.570555 env[1162]: time="2024-02-09T19:57:56.570544129Z" level=warning msg="cleaning up after shim disconnected" id=3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786 namespace=k8s.io
Feb 9 19:57:56.570610 env[1162]: time="2024-02-09T19:57:56.570600153Z" level=info msg="cleaning up dead shim"
Feb 9 19:57:56.578382 env[1162]: time="2024-02-09T19:57:56.578334312Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:57:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2710 runtime=io.containerd.runc.v2\n"
Feb 9 19:57:57.447597 env[1162]: time="2024-02-09T19:57:57.447569591Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:57:57.454359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941315747.mount: Deactivated successfully.
Feb 9 19:57:57.457544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870942821.mount: Deactivated successfully.
Feb 9 19:57:57.459226 env[1162]: time="2024-02-09T19:57:57.459205344Z" level=info msg="CreateContainer within sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\""
Feb 9 19:57:57.459700 env[1162]: time="2024-02-09T19:57:57.459643127Z" level=info msg="StartContainer for \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\""
Feb 9 19:57:57.471697 systemd[1]: Started cri-containerd-9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c.scope.
Feb 9 19:57:57.499239 env[1162]: time="2024-02-09T19:57:57.499210637Z" level=info msg="StartContainer for \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\" returns successfully"
Feb 9 19:57:57.721893 kubelet[2097]: I0209 19:57:57.721264 2097 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:57:57.760565 kubelet[2097]: I0209 19:57:57.760297 2097 topology_manager.go:215] "Topology Admit Handler" podUID="7912f1aa-40fc-4b74-be8c-6cd4b40d0505" podNamespace="kube-system" podName="coredns-5dd5756b68-hbl9p"
Feb 9 19:57:57.760565 kubelet[2097]: I0209 19:57:57.760448 2097 topology_manager.go:215] "Topology Admit Handler" podUID="bff028e6-c045-44a1-85a0-079177e589c8" podNamespace="kube-system" podName="coredns-5dd5756b68-cmwn7"
Feb 9 19:57:57.767347 systemd[1]: Created slice kubepods-burstable-pod7912f1aa_40fc_4b74_be8c_6cd4b40d0505.slice.
Feb 9 19:57:57.771630 systemd[1]: Created slice kubepods-burstable-podbff028e6_c045_44a1_85a0_079177e589c8.slice.
Feb 9 19:57:57.922145 kubelet[2097]: I0209 19:57:57.922122 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htrqq\" (UniqueName: \"kubernetes.io/projected/7912f1aa-40fc-4b74-be8c-6cd4b40d0505-kube-api-access-htrqq\") pod \"coredns-5dd5756b68-hbl9p\" (UID: \"7912f1aa-40fc-4b74-be8c-6cd4b40d0505\") " pod="kube-system/coredns-5dd5756b68-hbl9p"
Feb 9 19:57:57.922316 kubelet[2097]: I0209 19:57:57.922308 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7912f1aa-40fc-4b74-be8c-6cd4b40d0505-config-volume\") pod \"coredns-5dd5756b68-hbl9p\" (UID: \"7912f1aa-40fc-4b74-be8c-6cd4b40d0505\") " pod="kube-system/coredns-5dd5756b68-hbl9p"
Feb 9 19:57:57.922404 kubelet[2097]: I0209 19:57:57.922397 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bff028e6-c045-44a1-85a0-079177e589c8-config-volume\") pod \"coredns-5dd5756b68-cmwn7\" (UID: \"bff028e6-c045-44a1-85a0-079177e589c8\") " pod="kube-system/coredns-5dd5756b68-cmwn7"
Feb 9 19:57:57.922485 kubelet[2097]: I0209 19:57:57.922479 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vmqj\" (UniqueName: \"kubernetes.io/projected/bff028e6-c045-44a1-85a0-079177e589c8-kube-api-access-2vmqj\") pod \"coredns-5dd5756b68-cmwn7\" (UID: \"bff028e6-c045-44a1-85a0-079177e589c8\") " pod="kube-system/coredns-5dd5756b68-cmwn7"
Feb 9 19:57:58.056677 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 19:57:58.071303 env[1162]: time="2024-02-09T19:57:58.071272978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hbl9p,Uid:7912f1aa-40fc-4b74-be8c-6cd4b40d0505,Namespace:kube-system,Attempt:0,}"
Feb 9 19:57:58.074247 env[1162]: time="2024-02-09T19:57:58.074229943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cmwn7,Uid:bff028e6-c045-44a1-85a0-079177e589c8,Namespace:kube-system,Attempt:0,}"
Feb 9 19:57:58.396689 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Feb 9 19:58:00.039444 systemd-networkd[1063]: cilium_host: Link UP
Feb 9 19:58:00.042670 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 9 19:58:00.042729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 19:58:00.040490 systemd-networkd[1063]: cilium_net: Link UP
Feb 9 19:58:00.041454 systemd-networkd[1063]: cilium_net: Gained carrier
Feb 9 19:58:00.042350 systemd-networkd[1063]: cilium_host: Gained carrier
Feb 9 19:58:00.164161 systemd-networkd[1063]: cilium_vxlan: Link UP
Feb 9 19:58:00.164165 systemd-networkd[1063]: cilium_vxlan: Gained carrier
Feb 9 19:58:00.257834 systemd-networkd[1063]: cilium_net: Gained IPv6LL
Feb 9 19:58:00.755677 kernel: NET: Registered PF_ALG protocol family
Feb 9 19:58:00.769756 systemd-networkd[1063]: cilium_host: Gained IPv6LL
Feb 9 19:58:01.431451 systemd-networkd[1063]: lxc_health: Link UP
Feb 9 19:58:01.442760 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:58:01.437338 systemd-networkd[1063]: lxc_health: Gained carrier
Feb 9 19:58:01.646568 systemd-networkd[1063]: lxcdbf17fecb5ca: Link UP
Feb 9 19:58:01.652759 kernel: eth0: renamed from tmp22ee1
Feb 9 19:58:01.659219 systemd-networkd[1063]: lxcdbf17fecb5ca: Gained carrier
Feb 9 19:58:01.664699 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdbf17fecb5ca: link becomes ready
Feb 9 19:58:01.662244 systemd-networkd[1063]: lxc924e22eee24d: Link UP
Feb 9 19:58:01.671704 kernel: eth0: renamed from tmp4241b
Feb 9 19:58:01.676800 systemd-networkd[1063]: lxc924e22eee24d: Gained carrier
Feb 9 19:58:01.679687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc924e22eee24d: link becomes ready
Feb 9 19:58:02.177881 systemd-networkd[1063]: cilium_vxlan: Gained IPv6LL
Feb 9 19:58:02.625973 systemd-networkd[1063]: lxc_health: Gained IPv6LL
Feb 9 19:58:02.753830 systemd-networkd[1063]: lxc924e22eee24d: Gained IPv6LL
Feb 9 19:58:03.172774 kubelet[2097]: I0209 19:58:03.172748 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w26vr" podStartSLOduration=10.436270616 podCreationTimestamp="2024-02-09 19:57:46 +0000 UTC" firstStartedPulling="2024-02-09 19:57:47.222873765 +0000 UTC m=+13.900053061" lastFinishedPulling="2024-02-09 19:57:53.959325594 +0000 UTC m=+20.636504886" observedRunningTime="2024-02-09 19:57:58.46768701 +0000 UTC m=+25.144866315" watchObservedRunningTime="2024-02-09 19:58:03.172722441 +0000 UTC m=+29.849901740"
Feb 9 19:58:03.393811 systemd-networkd[1063]: lxcdbf17fecb5ca: Gained IPv6LL
Feb 9 19:58:04.434955 env[1162]: time="2024-02-09T19:58:04.434921842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:58:04.435207 env[1162]: time="2024-02-09T19:58:04.435192170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:58:04.435265 env[1162]: time="2024-02-09T19:58:04.435252327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:58:04.435685 env[1162]: time="2024-02-09T19:58:04.435474806Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22ee1af486579a6b3d3e589837ca7933f239509594bd65b67e65d132b324286f pid=3265 runtime=io.containerd.runc.v2
Feb 9 19:58:04.456854 env[1162]: time="2024-02-09T19:58:04.456810103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:58:04.456978 env[1162]: time="2024-02-09T19:58:04.456962283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:58:04.457044 env[1162]: time="2024-02-09T19:58:04.457031408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:58:04.457695 env[1162]: time="2024-02-09T19:58:04.457226853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4241b221622b3ceb53f8f08ed2b92fb031cb6374b28d640ac712d6df67501a08 pid=3280 runtime=io.containerd.runc.v2
Feb 9 19:58:04.475240 systemd[1]: run-containerd-runc-k8s.io-22ee1af486579a6b3d3e589837ca7933f239509594bd65b67e65d132b324286f-runc.SZm49G.mount: Deactivated successfully.
Feb 9 19:58:04.478340 systemd[1]: Started cri-containerd-22ee1af486579a6b3d3e589837ca7933f239509594bd65b67e65d132b324286f.scope.
Feb 9 19:58:04.481052 systemd[1]: Started cri-containerd-4241b221622b3ceb53f8f08ed2b92fb031cb6374b28d640ac712d6df67501a08.scope.
Feb 9 19:58:04.497773 systemd-resolved[1106]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 19:58:04.507168 systemd-resolved[1106]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 9 19:58:04.528577 env[1162]: time="2024-02-09T19:58:04.528552603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-cmwn7,Uid:bff028e6-c045-44a1-85a0-079177e589c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ee1af486579a6b3d3e589837ca7933f239509594bd65b67e65d132b324286f\""
Feb 9 19:58:04.531237 env[1162]: time="2024-02-09T19:58:04.531215457Z" level=info msg="CreateContainer within sandbox \"22ee1af486579a6b3d3e589837ca7933f239509594bd65b67e65d132b324286f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:58:04.541147 env[1162]: time="2024-02-09T19:58:04.541112279Z" level=info msg="CreateContainer within sandbox \"22ee1af486579a6b3d3e589837ca7933f239509594bd65b67e65d132b324286f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ecffc5a5093bf3edb66adb65c49d089bed17b5a5a38b7c43b9a553e722224070\""
Feb 9 19:58:04.541745 env[1162]: time="2024-02-09T19:58:04.541714889Z" level=info msg="StartContainer for \"ecffc5a5093bf3edb66adb65c49d089bed17b5a5a38b7c43b9a553e722224070\""
Feb 9 19:58:04.553281 systemd[1]: Started cri-containerd-ecffc5a5093bf3edb66adb65c49d089bed17b5a5a38b7c43b9a553e722224070.scope.
Feb 9 19:58:04.555195 env[1162]: time="2024-02-09T19:58:04.555166787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hbl9p,Uid:7912f1aa-40fc-4b74-be8c-6cd4b40d0505,Namespace:kube-system,Attempt:0,} returns sandbox id \"4241b221622b3ceb53f8f08ed2b92fb031cb6374b28d640ac712d6df67501a08\""
Feb 9 19:58:04.556934 env[1162]: time="2024-02-09T19:58:04.556909191Z" level=info msg="CreateContainer within sandbox \"4241b221622b3ceb53f8f08ed2b92fb031cb6374b28d640ac712d6df67501a08\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:58:04.566845 env[1162]: time="2024-02-09T19:58:04.566819589Z" level=info msg="CreateContainer within sandbox \"4241b221622b3ceb53f8f08ed2b92fb031cb6374b28d640ac712d6df67501a08\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2530afcffb833b8bfc07aacfe89f1d2ffcc6a6a23798d452225414b6e844d36f\""
Feb 9 19:58:04.569137 env[1162]: time="2024-02-09T19:58:04.569110904Z" level=info msg="StartContainer for \"2530afcffb833b8bfc07aacfe89f1d2ffcc6a6a23798d452225414b6e844d36f\""
Feb 9 19:58:04.585846 systemd[1]: Started cri-containerd-2530afcffb833b8bfc07aacfe89f1d2ffcc6a6a23798d452225414b6e844d36f.scope.
Feb 9 19:58:04.627080 env[1162]: time="2024-02-09T19:58:04.627055239Z" level=info msg="StartContainer for \"ecffc5a5093bf3edb66adb65c49d089bed17b5a5a38b7c43b9a553e722224070\" returns successfully"
Feb 9 19:58:04.630973 env[1162]: time="2024-02-09T19:58:04.630941683Z" level=info msg="StartContainer for \"2530afcffb833b8bfc07aacfe89f1d2ffcc6a6a23798d452225414b6e844d36f\" returns successfully"
Feb 9 19:58:05.478359 kubelet[2097]: I0209 19:58:05.478341 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cmwn7" podStartSLOduration=19.478317324 podCreationTimestamp="2024-02-09 19:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:58:05.477400295 +0000 UTC m=+32.154579599" watchObservedRunningTime="2024-02-09 19:58:05.478317324 +0000 UTC m=+32.155496624"
Feb 9 19:58:05.487224 kubelet[2097]: I0209 19:58:05.486415 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hbl9p" podStartSLOduration=19.486386586 podCreationTimestamp="2024-02-09 19:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:58:05.485613638 +0000 UTC m=+32.162792941" watchObservedRunningTime="2024-02-09 19:58:05.486386586 +0000 UTC m=+32.163565890"
Feb 9 19:58:51.626535 systemd[1]: Started sshd@5-139.178.70.107:22-139.178.89.65:57564.service.
Feb 9 19:58:51.672316 sshd[3432]: Accepted publickey for core from 139.178.89.65 port 57564 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:58:51.674312 sshd[3432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:58:51.678242 systemd[1]: Started session-8.scope.
Feb 9 19:58:51.678483 systemd-logind[1145]: New session 8 of user core.
Feb 9 19:58:51.815200 sshd[3432]: pam_unix(sshd:session): session closed for user core
Feb 9 19:58:51.816819 systemd[1]: sshd@5-139.178.70.107:22-139.178.89.65:57564.service: Deactivated successfully.
Feb 9 19:58:51.817275 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 19:58:51.817651 systemd-logind[1145]: Session 8 logged out. Waiting for processes to exit.
Feb 9 19:58:51.818119 systemd-logind[1145]: Removed session 8.
Feb 9 19:58:56.818711 systemd[1]: Started sshd@6-139.178.70.107:22-139.178.89.65:57580.service.
Feb 9 19:58:56.859306 sshd[3445]: Accepted publickey for core from 139.178.89.65 port 57580 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:58:56.868701 sshd[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:58:56.880231 systemd[1]: Started session-9.scope.
Feb 9 19:58:56.880770 systemd-logind[1145]: New session 9 of user core.
Feb 9 19:58:57.003606 sshd[3445]: pam_unix(sshd:session): session closed for user core
Feb 9 19:58:57.005354 systemd[1]: sshd@6-139.178.70.107:22-139.178.89.65:57580.service: Deactivated successfully.
Feb 9 19:58:57.005803 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 19:58:57.006142 systemd-logind[1145]: Session 9 logged out. Waiting for processes to exit.
Feb 9 19:58:57.006536 systemd-logind[1145]: Removed session 9.
Feb 9 19:59:02.007314 systemd[1]: Started sshd@7-139.178.70.107:22-139.178.89.65:48798.service.
Feb 9 19:59:02.033592 sshd[3459]: Accepted publickey for core from 139.178.89.65 port 48798 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:02.034274 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:02.037201 systemd[1]: Started session-10.scope.
Feb 9 19:59:02.037422 systemd-logind[1145]: New session 10 of user core.
Feb 9 19:59:02.137093 sshd[3459]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:02.138933 systemd[1]: sshd@7-139.178.70.107:22-139.178.89.65:48798.service: Deactivated successfully.
Feb 9 19:59:02.139367 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 19:59:02.139647 systemd-logind[1145]: Session 10 logged out. Waiting for processes to exit.
Feb 9 19:59:02.140085 systemd-logind[1145]: Removed session 10.
Feb 9 19:59:07.140382 systemd[1]: Started sshd@8-139.178.70.107:22-139.178.89.65:48802.service.
Feb 9 19:59:07.169636 sshd[3473]: Accepted publickey for core from 139.178.89.65 port 48802 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:07.170386 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:07.173412 systemd[1]: Started session-11.scope.
Feb 9 19:59:07.174172 systemd-logind[1145]: New session 11 of user core.
Feb 9 19:59:07.276521 sshd[3473]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:07.278770 systemd[1]: Started sshd@9-139.178.70.107:22-139.178.89.65:48806.service.
Feb 9 19:59:07.281438 systemd[1]: sshd@8-139.178.70.107:22-139.178.89.65:48802.service: Deactivated successfully.
Feb 9 19:59:07.281964 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 19:59:07.282695 systemd-logind[1145]: Session 11 logged out. Waiting for processes to exit.
Feb 9 19:59:07.283229 systemd-logind[1145]: Removed session 11.
Feb 9 19:59:07.308635 sshd[3484]: Accepted publickey for core from 139.178.89.65 port 48806 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:07.309374 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:07.312412 systemd[1]: Started session-12.scope.
Feb 9 19:59:07.313178 systemd-logind[1145]: New session 12 of user core.
Feb 9 19:59:07.848099 systemd[1]: Started sshd@10-139.178.70.107:22-139.178.89.65:48822.service.
Feb 9 19:59:07.850343 sshd[3484]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:07.859021 systemd[1]: sshd@9-139.178.70.107:22-139.178.89.65:48806.service: Deactivated successfully.
Feb 9 19:59:07.859492 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 19:59:07.860219 systemd-logind[1145]: Session 12 logged out. Waiting for processes to exit.
Feb 9 19:59:07.860936 systemd-logind[1145]: Removed session 12.
Feb 9 19:59:07.884783 sshd[3494]: Accepted publickey for core from 139.178.89.65 port 48822 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:07.885527 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:07.888503 systemd[1]: Started session-13.scope.
Feb 9 19:59:07.889360 systemd-logind[1145]: New session 13 of user core.
Feb 9 19:59:07.989840 sshd[3494]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:07.991553 systemd[1]: sshd@10-139.178.70.107:22-139.178.89.65:48822.service: Deactivated successfully.
Feb 9 19:59:07.992022 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 19:59:07.992814 systemd-logind[1145]: Session 13 logged out. Waiting for processes to exit.
Feb 9 19:59:07.993383 systemd-logind[1145]: Removed session 13.
Feb 9 19:59:12.994703 systemd[1]: Started sshd@11-139.178.70.107:22-139.178.89.65:49504.service.
Feb 9 19:59:13.021903 sshd[3508]: Accepted publickey for core from 139.178.89.65 port 49504 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:13.023249 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:13.026320 systemd[1]: Started session-14.scope.
Feb 9 19:59:13.027311 systemd-logind[1145]: New session 14 of user core.
Feb 9 19:59:13.117813 sshd[3508]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:13.121438 systemd[1]: Started sshd@12-139.178.70.107:22-139.178.89.65:49506.service.
Feb 9 19:59:13.123201 systemd[1]: sshd@11-139.178.70.107:22-139.178.89.65:49504.service: Deactivated successfully.
Feb 9 19:59:13.123868 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 19:59:13.124921 systemd-logind[1145]: Session 14 logged out. Waiting for processes to exit.
Feb 9 19:59:13.125499 systemd-logind[1145]: Removed session 14.
Feb 9 19:59:13.155366 sshd[3519]: Accepted publickey for core from 139.178.89.65 port 49506 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:13.156202 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:13.162862 systemd[1]: Started session-15.scope.
Feb 9 19:59:13.163351 systemd-logind[1145]: New session 15 of user core.
Feb 9 19:59:13.792916 sshd[3519]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:13.796342 systemd[1]: Started sshd@13-139.178.70.107:22-139.178.89.65:49516.service.
Feb 9 19:59:13.798569 systemd[1]: sshd@12-139.178.70.107:22-139.178.89.65:49506.service: Deactivated successfully.
Feb 9 19:59:13.799003 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 19:59:13.799741 systemd-logind[1145]: Session 15 logged out. Waiting for processes to exit.
Feb 9 19:59:13.800190 systemd-logind[1145]: Removed session 15.
Feb 9 19:59:13.825455 sshd[3529]: Accepted publickey for core from 139.178.89.65 port 49516 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:13.826545 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:13.830273 systemd[1]: Started session-16.scope.
Feb 9 19:59:13.830800 systemd-logind[1145]: New session 16 of user core.
Feb 9 19:59:14.659356 sshd[3529]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:14.662612 systemd[1]: Started sshd@14-139.178.70.107:22-139.178.89.65:49530.service.
Feb 9 19:59:14.665387 systemd[1]: sshd@13-139.178.70.107:22-139.178.89.65:49516.service: Deactivated successfully.
Feb 9 19:59:14.665962 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 19:59:14.666987 systemd-logind[1145]: Session 16 logged out. Waiting for processes to exit.
Feb 9 19:59:14.667564 systemd-logind[1145]: Removed session 16.
Feb 9 19:59:14.740357 sshd[3546]: Accepted publickey for core from 139.178.89.65 port 49530 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:14.741370 sshd[3546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:14.746940 systemd[1]: Started session-17.scope.
Feb 9 19:59:14.747571 systemd-logind[1145]: New session 17 of user core.
Feb 9 19:59:15.069528 sshd[3546]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:15.071978 systemd[1]: Started sshd@15-139.178.70.107:22-139.178.89.65:49546.service.
Feb 9 19:59:15.074204 systemd[1]: sshd@14-139.178.70.107:22-139.178.89.65:49530.service: Deactivated successfully.
Feb 9 19:59:15.074642 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 19:59:15.075188 systemd-logind[1145]: Session 17 logged out. Waiting for processes to exit.
Feb 9 19:59:15.077221 systemd-logind[1145]: Removed session 17.
Feb 9 19:59:15.103115 sshd[3558]: Accepted publickey for core from 139.178.89.65 port 49546 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:15.104095 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:15.106877 systemd-logind[1145]: New session 18 of user core.
Feb 9 19:59:15.107451 systemd[1]: Started session-18.scope.
Feb 9 19:59:15.207453 sshd[3558]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:15.209043 systemd-logind[1145]: Session 18 logged out. Waiting for processes to exit.
Feb 9 19:59:15.209125 systemd[1]: sshd@15-139.178.70.107:22-139.178.89.65:49546.service: Deactivated successfully.
Feb 9 19:59:15.209525 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 19:59:15.209962 systemd-logind[1145]: Removed session 18.
Feb 9 19:59:20.210393 systemd[1]: Started sshd@16-139.178.70.107:22-139.178.89.65:42420.service.
Feb 9 19:59:20.236912 sshd[3575]: Accepted publickey for core from 139.178.89.65 port 42420 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:20.238047 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:20.241364 systemd[1]: Started session-19.scope.
Feb 9 19:59:20.241847 systemd-logind[1145]: New session 19 of user core.
Feb 9 19:59:20.363360 sshd[3575]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:20.364999 systemd[1]: sshd@16-139.178.70.107:22-139.178.89.65:42420.service: Deactivated successfully.
Feb 9 19:59:20.365478 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 19:59:20.366059 systemd-logind[1145]: Session 19 logged out. Waiting for processes to exit.
Feb 9 19:59:20.366571 systemd-logind[1145]: Removed session 19.
Feb 9 19:59:25.366147 systemd[1]: Started sshd@17-139.178.70.107:22-139.178.89.65:42424.service.
Feb 9 19:59:25.393192 sshd[3586]: Accepted publickey for core from 139.178.89.65 port 42424 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ
Feb 9 19:59:25.394322 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:59:25.397857 systemd[1]: Started session-20.scope.
Feb 9 19:59:25.398842 systemd-logind[1145]: New session 20 of user core.
Feb 9 19:59:25.483333 sshd[3586]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:25.484939 systemd[1]: sshd@17-139.178.70.107:22-139.178.89.65:42424.service: Deactivated successfully.
Feb 9 19:59:25.485378 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 19:59:25.486022 systemd-logind[1145]: Session 20 logged out. Waiting for processes to exit.
Feb 9 19:59:25.486518 systemd-logind[1145]: Removed session 20.
Feb 9 19:59:30.487296 systemd[1]: Started sshd@18-139.178.70.107:22-139.178.89.65:49898.service. Feb 9 19:59:30.514814 sshd[3599]: Accepted publickey for core from 139.178.89.65 port 49898 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:59:30.516309 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:59:30.520666 systemd[1]: Started session-21.scope. Feb 9 19:59:30.521272 systemd-logind[1145]: New session 21 of user core. Feb 9 19:59:30.612367 sshd[3599]: pam_unix(sshd:session): session closed for user core Feb 9 19:59:30.613933 systemd[1]: sshd@18-139.178.70.107:22-139.178.89.65:49898.service: Deactivated successfully. Feb 9 19:59:30.614406 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:59:30.614986 systemd-logind[1145]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:59:30.615492 systemd-logind[1145]: Removed session 21. Feb 9 19:59:35.615641 systemd[1]: Started sshd@19-139.178.70.107:22-139.178.89.65:49910.service. Feb 9 19:59:35.642647 sshd[3613]: Accepted publickey for core from 139.178.89.65 port 49910 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:59:35.643560 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:59:35.646426 systemd-logind[1145]: New session 22 of user core. Feb 9 19:59:35.646998 systemd[1]: Started session-22.scope. Feb 9 19:59:35.735735 sshd[3613]: pam_unix(sshd:session): session closed for user core Feb 9 19:59:35.738306 systemd[1]: Started sshd@20-139.178.70.107:22-139.178.89.65:49922.service. Feb 9 19:59:35.741230 systemd[1]: sshd@19-139.178.70.107:22-139.178.89.65:49910.service: Deactivated successfully. Feb 9 19:59:35.741674 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:59:35.742106 systemd-logind[1145]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:59:35.742631 systemd-logind[1145]: Removed session 22. 
Feb 9 19:59:35.765974 sshd[3624]: Accepted publickey for core from 139.178.89.65 port 49922 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:59:35.766755 sshd[3624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:59:35.769200 systemd-logind[1145]: New session 23 of user core. Feb 9 19:59:35.769746 systemd[1]: Started session-23.scope. Feb 9 19:59:37.599159 env[1162]: time="2024-02-09T19:59:37.599134504Z" level=info msg="StopContainer for \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\" with timeout 30 (s)" Feb 9 19:59:37.599509 env[1162]: time="2024-02-09T19:59:37.599492579Z" level=info msg="Stop container \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\" with signal terminated" Feb 9 19:59:37.615320 systemd[1]: cri-containerd-193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b.scope: Deactivated successfully. Feb 9 19:59:37.644173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b-rootfs.mount: Deactivated successfully. 
Feb 9 19:59:37.658750 env[1162]: time="2024-02-09T19:59:37.658704039Z" level=info msg="shim disconnected" id=193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b Feb 9 19:59:37.658750 env[1162]: time="2024-02-09T19:59:37.658746296Z" level=warning msg="cleaning up after shim disconnected" id=193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b namespace=k8s.io Feb 9 19:59:37.658952 env[1162]: time="2024-02-09T19:59:37.658760334Z" level=info msg="cleaning up dead shim" Feb 9 19:59:37.664627 env[1162]: time="2024-02-09T19:59:37.664583605Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3669 runtime=io.containerd.runc.v2\n" Feb 9 19:59:37.665499 env[1162]: time="2024-02-09T19:59:37.665474371Z" level=info msg="StopContainer for \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\" returns successfully" Feb 9 19:59:37.666847 env[1162]: time="2024-02-09T19:59:37.666827313Z" level=info msg="StopPodSandbox for \"07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5\"" Feb 9 19:59:37.666907 env[1162]: time="2024-02-09T19:59:37.666876797Z" level=info msg="Container to stop \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:59:37.668014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5-shm.mount: Deactivated successfully. Feb 9 19:59:37.673432 systemd[1]: cri-containerd-07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5.scope: Deactivated successfully. 
Feb 9 19:59:37.677722 env[1162]: time="2024-02-09T19:59:37.677681638Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:59:37.681553 env[1162]: time="2024-02-09T19:59:37.681526165Z" level=info msg="StopContainer for \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\" with timeout 2 (s)" Feb 9 19:59:37.681816 env[1162]: time="2024-02-09T19:59:37.681802300Z" level=info msg="Stop container \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\" with signal terminated" Feb 9 19:59:37.690995 systemd-networkd[1063]: lxc_health: Link DOWN Feb 9 19:59:37.690999 systemd-networkd[1063]: lxc_health: Lost carrier Feb 9 19:59:37.694812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5-rootfs.mount: Deactivated successfully. 
Feb 9 19:59:37.746950 env[1162]: time="2024-02-09T19:59:37.746906675Z" level=info msg="shim disconnected" id=07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5 Feb 9 19:59:37.746950 env[1162]: time="2024-02-09T19:59:37.746943162Z" level=warning msg="cleaning up after shim disconnected" id=07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5 namespace=k8s.io Feb 9 19:59:37.746950 env[1162]: time="2024-02-09T19:59:37.746949580Z" level=info msg="cleaning up dead shim" Feb 9 19:59:37.752565 env[1162]: time="2024-02-09T19:59:37.752540615Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n" Feb 9 19:59:37.752867 env[1162]: time="2024-02-09T19:59:37.752852296Z" level=info msg="TearDown network for sandbox \"07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5\" successfully" Feb 9 19:59:37.752932 env[1162]: time="2024-02-09T19:59:37.752919384Z" level=info msg="StopPodSandbox for \"07f0fc61751dba61e53a65af7e649794016403f117838141d3e5ec4a2a12c0b5\" returns successfully" Feb 9 19:59:37.765962 systemd[1]: cri-containerd-9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c.scope: Deactivated successfully. Feb 9 19:59:37.766112 systemd[1]: cri-containerd-9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c.scope: Consumed 4.820s CPU time. 
Feb 9 19:59:37.786377 env[1162]: time="2024-02-09T19:59:37.786343882Z" level=info msg="shim disconnected" id=9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c Feb 9 19:59:37.786377 env[1162]: time="2024-02-09T19:59:37.786375321Z" level=warning msg="cleaning up after shim disconnected" id=9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c namespace=k8s.io Feb 9 19:59:37.786585 env[1162]: time="2024-02-09T19:59:37.786383444Z" level=info msg="cleaning up dead shim" Feb 9 19:59:37.791904 env[1162]: time="2024-02-09T19:59:37.791877353Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3736 runtime=io.containerd.runc.v2\n" Feb 9 19:59:37.792730 env[1162]: time="2024-02-09T19:59:37.792712471Z" level=info msg="StopContainer for \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\" returns successfully" Feb 9 19:59:37.793087 env[1162]: time="2024-02-09T19:59:37.793069484Z" level=info msg="StopPodSandbox for \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\"" Feb 9 19:59:37.793125 env[1162]: time="2024-02-09T19:59:37.793107875Z" level=info msg="Container to stop \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:59:37.793125 env[1162]: time="2024-02-09T19:59:37.793121969Z" level=info msg="Container to stop \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:59:37.793167 env[1162]: time="2024-02-09T19:59:37.793130465Z" level=info msg="Container to stop \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:59:37.793167 env[1162]: time="2024-02-09T19:59:37.793138852Z" level=info msg="Container to stop 
\"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:59:37.793167 env[1162]: time="2024-02-09T19:59:37.793151998Z" level=info msg="Container to stop \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:59:37.796942 systemd[1]: cri-containerd-1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e.scope: Deactivated successfully. Feb 9 19:59:37.813985 env[1162]: time="2024-02-09T19:59:37.813948127Z" level=info msg="shim disconnected" id=1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e Feb 9 19:59:37.814107 env[1162]: time="2024-02-09T19:59:37.813986434Z" level=warning msg="cleaning up after shim disconnected" id=1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e namespace=k8s.io Feb 9 19:59:37.814107 env[1162]: time="2024-02-09T19:59:37.813995866Z" level=info msg="cleaning up dead shim" Feb 9 19:59:37.819748 env[1162]: time="2024-02-09T19:59:37.819718580Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3767 runtime=io.containerd.runc.v2\n" Feb 9 19:59:37.820037 env[1162]: time="2024-02-09T19:59:37.820021916Z" level=info msg="TearDown network for sandbox \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" successfully" Feb 9 19:59:37.820095 env[1162]: time="2024-02-09T19:59:37.820082996Z" level=info msg="StopPodSandbox for \"1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e\" returns successfully" Feb 9 19:59:37.841919 kubelet[2097]: I0209 19:59:37.841882 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cf05e01-2825-4b29-83ba-e077f22d3aac-cilium-config-path\") pod \"5cf05e01-2825-4b29-83ba-e077f22d3aac\" (UID: 
\"5cf05e01-2825-4b29-83ba-e077f22d3aac\") " Feb 9 19:59:37.842949 kubelet[2097]: I0209 19:59:37.842481 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5bcb\" (UniqueName: \"kubernetes.io/projected/5cf05e01-2825-4b29-83ba-e077f22d3aac-kube-api-access-q5bcb\") pod \"5cf05e01-2825-4b29-83ba-e077f22d3aac\" (UID: \"5cf05e01-2825-4b29-83ba-e077f22d3aac\") " Feb 9 19:59:37.845805 kubelet[2097]: I0209 19:59:37.844941 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cf05e01-2825-4b29-83ba-e077f22d3aac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cf05e01-2825-4b29-83ba-e077f22d3aac" (UID: "5cf05e01-2825-4b29-83ba-e077f22d3aac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:59:37.944129 kubelet[2097]: I0209 19:59:37.943128 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-hubble-tls\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944129 kubelet[2097]: I0209 19:59:37.943162 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-run\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944129 kubelet[2097]: I0209 19:59:37.943175 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-hostproc\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944129 kubelet[2097]: I0209 19:59:37.943191 2097 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-etc-cni-netd\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944129 kubelet[2097]: I0209 19:59:37.943208 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs5w5\" (UniqueName: \"kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-kube-api-access-zs5w5\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944129 kubelet[2097]: I0209 19:59:37.943229 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cni-path\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944417 kubelet[2097]: I0209 19:59:37.943256 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-cgroup\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944417 kubelet[2097]: I0209 19:59:37.943275 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-net\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944417 kubelet[2097]: I0209 19:59:37.943287 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-bpf-maps\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " 
Feb 9 19:59:37.944417 kubelet[2097]: I0209 19:59:37.943310 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-config-path\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944417 kubelet[2097]: I0209 19:59:37.944325 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b5c29bd-36e6-409a-8fd6-648781eff461-clustermesh-secrets\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944417 kubelet[2097]: I0209 19:59:37.944343 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-xtables-lock\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944560 kubelet[2097]: I0209 19:59:37.944356 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-kernel\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944560 kubelet[2097]: I0209 19:59:37.944378 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-lib-modules\") pod \"9b5c29bd-36e6-409a-8fd6-648781eff461\" (UID: \"9b5c29bd-36e6-409a-8fd6-648781eff461\") " Feb 9 19:59:37.944784 kubelet[2097]: I0209 19:59:37.944767 2097 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/5cf05e01-2825-4b29-83ba-e077f22d3aac-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:37.944826 kubelet[2097]: I0209 19:59:37.944805 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.944946 kubelet[2097]: I0209 19:59:37.944925 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.944987 kubelet[2097]: I0209 19:59:37.944951 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.944987 kubelet[2097]: I0209 19:59:37.944964 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.945124 kubelet[2097]: I0209 19:59:37.945108 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cf05e01-2825-4b29-83ba-e077f22d3aac-kube-api-access-q5bcb" (OuterVolumeSpecName: "kube-api-access-q5bcb") pod "5cf05e01-2825-4b29-83ba-e077f22d3aac" (UID: "5cf05e01-2825-4b29-83ba-e077f22d3aac"). InnerVolumeSpecName "kube-api-access-q5bcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:59:37.945220 kubelet[2097]: I0209 19:59:37.945207 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.945286 kubelet[2097]: I0209 19:59:37.945275 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-hostproc" (OuterVolumeSpecName: "hostproc") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.945362 kubelet[2097]: I0209 19:59:37.945352 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.948076 kubelet[2097]: I0209 19:59:37.948053 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:59:37.948183 kubelet[2097]: I0209 19:59:37.948163 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cni-path" (OuterVolumeSpecName: "cni-path") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.948675 kubelet[2097]: I0209 19:59:37.948615 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:59:37.948731 kubelet[2097]: I0209 19:59:37.948650 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.948731 kubelet[2097]: I0209 19:59:37.948692 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:37.949286 kubelet[2097]: I0209 19:59:37.949266 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-kube-api-access-zs5w5" (OuterVolumeSpecName: "kube-api-access-zs5w5") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "kube-api-access-zs5w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:59:37.950743 kubelet[2097]: I0209 19:59:37.950725 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b5c29bd-36e6-409a-8fd6-648781eff461-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9b5c29bd-36e6-409a-8fd6-648781eff461" (UID: "9b5c29bd-36e6-409a-8fd6-648781eff461"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:59:38.045773 kubelet[2097]: I0209 19:59:38.045736 2097 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.045773 kubelet[2097]: I0209 19:59:38.045764 2097 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.045773 kubelet[2097]: I0209 19:59:38.045774 2097 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.045773 kubelet[2097]: I0209 19:59:38.045783 2097 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045792 2097 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045802 2097 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zs5w5\" (UniqueName: \"kubernetes.io/projected/9b5c29bd-36e6-409a-8fd6-648781eff461-kube-api-access-zs5w5\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045808 2097 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045815 2097 reconciler_common.go:300] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045823 2097 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045833 2097 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045841 2097 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b5c29bd-36e6-409a-8fd6-648781eff461-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046057 kubelet[2097]: I0209 19:59:38.045849 2097 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b5c29bd-36e6-409a-8fd6-648781eff461-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046257 kubelet[2097]: I0209 19:59:38.045856 2097 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046257 kubelet[2097]: I0209 19:59:38.045863 2097 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b5c29bd-36e6-409a-8fd6-648781eff461-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.046257 kubelet[2097]: I0209 19:59:38.045871 2097 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q5bcb\" (UniqueName: 
\"kubernetes.io/projected/5cf05e01-2825-4b29-83ba-e077f22d3aac-kube-api-access-q5bcb\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:38.458100 kubelet[2097]: E0209 19:59:38.457965 2097 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:59:38.606024 systemd[1]: Removed slice kubepods-besteffort-pod5cf05e01_2825_4b29_83ba_e077f22d3aac.slice. Feb 9 19:59:38.608939 kubelet[2097]: I0209 19:59:38.608912 2097 scope.go:117] "RemoveContainer" containerID="193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b" Feb 9 19:59:38.612783 env[1162]: time="2024-02-09T19:59:38.612204214Z" level=info msg="RemoveContainer for \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\"" Feb 9 19:59:38.615461 env[1162]: time="2024-02-09T19:59:38.615428168Z" level=info msg="RemoveContainer for \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\" returns successfully" Feb 9 19:59:38.619026 systemd[1]: Removed slice kubepods-burstable-pod9b5c29bd_36e6_409a_8fd6_648781eff461.slice. Feb 9 19:59:38.619081 systemd[1]: kubepods-burstable-pod9b5c29bd_36e6_409a_8fd6_648781eff461.slice: Consumed 4.890s CPU time. Feb 9 19:59:38.620189 kubelet[2097]: I0209 19:59:38.620172 2097 scope.go:117] "RemoveContainer" containerID="193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b" Feb 9 19:59:38.621009 env[1162]: time="2024-02-09T19:59:38.620953718Z" level=error msg="ContainerStatus for \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\": not found" Feb 9 19:59:38.622268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c-rootfs.mount: Deactivated successfully. 
Feb 9 19:59:38.622332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e-rootfs.mount: Deactivated successfully. Feb 9 19:59:38.622370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cfcd1f6b8ee0d9a5cd8dd24bf2b7b607123392bc94a18e92afe02a30fd44c2e-shm.mount: Deactivated successfully. Feb 9 19:59:38.622409 systemd[1]: var-lib-kubelet-pods-5cf05e01\x2d2825\x2d4b29\x2d83ba\x2de077f22d3aac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq5bcb.mount: Deactivated successfully. Feb 9 19:59:38.622452 systemd[1]: var-lib-kubelet-pods-9b5c29bd\x2d36e6\x2d409a\x2d8fd6\x2d648781eff461-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzs5w5.mount: Deactivated successfully. Feb 9 19:59:38.622489 systemd[1]: var-lib-kubelet-pods-9b5c29bd\x2d36e6\x2d409a\x2d8fd6\x2d648781eff461-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:59:38.622532 systemd[1]: var-lib-kubelet-pods-9b5c29bd\x2d36e6\x2d409a\x2d8fd6\x2d648781eff461-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:59:38.623725 kubelet[2097]: E0209 19:59:38.623708 2097 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\": not found" containerID="193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b" Feb 9 19:59:38.623843 kubelet[2097]: I0209 19:59:38.623833 2097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b"} err="failed to get container status \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"193143ee82dd10e9ce72d9e4e92fb72dc49eba24d5300d2819a722d7e1b9cd8b\": not found" Feb 9 19:59:38.624007 kubelet[2097]: I0209 19:59:38.623985 2097 scope.go:117] "RemoveContainer" containerID="9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c" Feb 9 19:59:38.627515 env[1162]: time="2024-02-09T19:59:38.627495150Z" level=info msg="RemoveContainer for \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\"" Feb 9 19:59:38.628787 env[1162]: time="2024-02-09T19:59:38.628765994Z" level=info msg="RemoveContainer for \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\" returns successfully" Feb 9 19:59:38.628973 kubelet[2097]: I0209 19:59:38.628962 2097 scope.go:117] "RemoveContainer" containerID="3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786" Feb 9 19:59:38.630351 env[1162]: time="2024-02-09T19:59:38.630330700Z" level=info msg="RemoveContainer for \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\"" Feb 9 19:59:38.632683 env[1162]: time="2024-02-09T19:59:38.632635520Z" level=info msg="RemoveContainer for \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\" returns successfully" Feb 9 19:59:38.633626 
kubelet[2097]: I0209 19:59:38.633212 2097 scope.go:117] "RemoveContainer" containerID="4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509" Feb 9 19:59:38.634654 env[1162]: time="2024-02-09T19:59:38.634633015Z" level=info msg="RemoveContainer for \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\"" Feb 9 19:59:38.635912 env[1162]: time="2024-02-09T19:59:38.635899083Z" level=info msg="RemoveContainer for \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\" returns successfully" Feb 9 19:59:38.636107 kubelet[2097]: I0209 19:59:38.636091 2097 scope.go:117] "RemoveContainer" containerID="c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10" Feb 9 19:59:38.637088 env[1162]: time="2024-02-09T19:59:38.637068848Z" level=info msg="RemoveContainer for \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\"" Feb 9 19:59:38.638072 env[1162]: time="2024-02-09T19:59:38.638054776Z" level=info msg="RemoveContainer for \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\" returns successfully" Feb 9 19:59:38.638169 kubelet[2097]: I0209 19:59:38.638160 2097 scope.go:117] "RemoveContainer" containerID="8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382" Feb 9 19:59:38.638948 env[1162]: time="2024-02-09T19:59:38.638931569Z" level=info msg="RemoveContainer for \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\"" Feb 9 19:59:38.640096 env[1162]: time="2024-02-09T19:59:38.640083168Z" level=info msg="RemoveContainer for \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\" returns successfully" Feb 9 19:59:38.640273 kubelet[2097]: I0209 19:59:38.640260 2097 scope.go:117] "RemoveContainer" containerID="9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c" Feb 9 19:59:38.640423 env[1162]: time="2024-02-09T19:59:38.640378339Z" level=error msg="ContainerStatus for \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\": not found" Feb 9 19:59:38.640530 kubelet[2097]: E0209 19:59:38.640516 2097 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\": not found" containerID="9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c" Feb 9 19:59:38.640566 kubelet[2097]: I0209 19:59:38.640540 2097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c"} err="failed to get container status \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bc553ef6c0c165339648e0d122b012647a97ea782ebd5b243f622fd7fe8578c\": not found" Feb 9 19:59:38.640566 kubelet[2097]: I0209 19:59:38.640545 2097 scope.go:117] "RemoveContainer" containerID="3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786" Feb 9 19:59:38.640650 env[1162]: time="2024-02-09T19:59:38.640623133Z" level=error msg="ContainerStatus for \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\": not found" Feb 9 19:59:38.640782 kubelet[2097]: E0209 19:59:38.640774 2097 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\": not found" containerID="3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786" Feb 9 19:59:38.640849 kubelet[2097]: I0209 19:59:38.640831 2097 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786"} err="failed to get container status \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e6b3d43645b31dba08bb215d85712b052b51fb5ab9eaee3dd1435621a70b786\": not found" Feb 9 19:59:38.640897 kubelet[2097]: I0209 19:59:38.640890 2097 scope.go:117] "RemoveContainer" containerID="4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509" Feb 9 19:59:38.641060 env[1162]: time="2024-02-09T19:59:38.641023206Z" level=error msg="ContainerStatus for \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\": not found" Feb 9 19:59:38.641104 kubelet[2097]: E0209 19:59:38.641100 2097 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\": not found" containerID="4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509" Feb 9 19:59:38.641157 kubelet[2097]: I0209 19:59:38.641113 2097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509"} err="failed to get container status \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d86c564090a41c28c0907a06b43f545327e7b8010d3a5595ad7e0f346598509\": not found" Feb 9 19:59:38.641157 kubelet[2097]: I0209 19:59:38.641118 2097 scope.go:117] "RemoveContainer" 
containerID="c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10" Feb 9 19:59:38.641629 kubelet[2097]: E0209 19:59:38.641466 2097 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\": not found" containerID="c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10" Feb 9 19:59:38.641629 kubelet[2097]: I0209 19:59:38.641478 2097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10"} err="failed to get container status \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\": rpc error: code = NotFound desc = an error occurred when try to find container \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\": not found" Feb 9 19:59:38.641629 kubelet[2097]: I0209 19:59:38.641482 2097 scope.go:117] "RemoveContainer" containerID="8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382" Feb 9 19:59:38.641739 env[1162]: time="2024-02-09T19:59:38.641377097Z" level=error msg="ContainerStatus for \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c87027210456fe804843186c6956df7c109e27b5b84b935ff5c07acd2e66ec10\": not found" Feb 9 19:59:38.641739 env[1162]: time="2024-02-09T19:59:38.641573545Z" level=error msg="ContainerStatus for \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\": not found" Feb 9 19:59:38.641853 kubelet[2097]: E0209 19:59:38.641845 2097 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\": not found" containerID="8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382" Feb 9 19:59:38.641929 kubelet[2097]: I0209 19:59:38.641920 2097 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382"} err="failed to get container status \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e38a7c3249f863c8f23b3745dc39013594926da103e3e09874caa7c78a28382\": not found" Feb 9 19:59:39.396938 kubelet[2097]: I0209 19:59:39.396921 2097 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5cf05e01-2825-4b29-83ba-e077f22d3aac" path="/var/lib/kubelet/pods/5cf05e01-2825-4b29-83ba-e077f22d3aac/volumes" Feb 9 19:59:39.397848 kubelet[2097]: I0209 19:59:39.397839 2097 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" path="/var/lib/kubelet/pods/9b5c29bd-36e6-409a-8fd6-648781eff461/volumes" Feb 9 19:59:39.540982 sshd[3624]: pam_unix(sshd:session): session closed for user core Feb 9 19:59:39.544197 systemd[1]: Started sshd@21-139.178.70.107:22-139.178.89.65:38100.service. Feb 9 19:59:39.546246 systemd[1]: sshd@20-139.178.70.107:22-139.178.89.65:49922.service: Deactivated successfully. Feb 9 19:59:39.546958 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:59:39.547441 systemd-logind[1145]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:59:39.548199 systemd-logind[1145]: Removed session 23. 
Feb 9 19:59:39.580507 sshd[3785]: Accepted publickey for core from 139.178.89.65 port 38100 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:59:39.581427 sshd[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:59:39.587700 systemd-logind[1145]: New session 24 of user core. Feb 9 19:59:39.588382 systemd[1]: Started session-24.scope. Feb 9 19:59:40.084208 kubelet[2097]: I0209 19:59:40.084189 2097 topology_manager.go:215] "Topology Admit Handler" podUID="be5a026c-d926-49b3-ac67-2e42557d9896" podNamespace="kube-system" podName="cilium-556c8" Feb 9 19:59:40.084368 kubelet[2097]: E0209 19:59:40.084359 2097 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" containerName="mount-bpf-fs" Feb 9 19:59:40.084424 kubelet[2097]: E0209 19:59:40.084417 2097 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" containerName="mount-cgroup" Feb 9 19:59:40.084470 kubelet[2097]: E0209 19:59:40.084464 2097 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" containerName="apply-sysctl-overwrites" Feb 9 19:59:40.084514 kubelet[2097]: E0209 19:59:40.084507 2097 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cf05e01-2825-4b29-83ba-e077f22d3aac" containerName="cilium-operator" Feb 9 19:59:40.084557 kubelet[2097]: E0209 19:59:40.084550 2097 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" containerName="clean-cilium-state" Feb 9 19:59:40.084606 kubelet[2097]: E0209 19:59:40.084599 2097 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" containerName="cilium-agent" Feb 9 19:59:40.084676 kubelet[2097]: I0209 19:59:40.084668 2097 memory_manager.go:346] "RemoveStaleState removing state" podUID="9b5c29bd-36e6-409a-8fd6-648781eff461" 
containerName="cilium-agent" Feb 9 19:59:40.084718 kubelet[2097]: I0209 19:59:40.084711 2097 memory_manager.go:346] "RemoveStaleState removing state" podUID="5cf05e01-2825-4b29-83ba-e077f22d3aac" containerName="cilium-operator" Feb 9 19:59:40.090844 systemd[1]: Created slice kubepods-burstable-podbe5a026c_d926_49b3_ac67_2e42557d9896.slice. Feb 9 19:59:40.092516 systemd[1]: Started sshd@22-139.178.70.107:22-139.178.89.65:38110.service. Feb 9 19:59:40.097335 sshd[3785]: pam_unix(sshd:session): session closed for user core Feb 9 19:59:40.098712 systemd[1]: sshd@21-139.178.70.107:22-139.178.89.65:38100.service: Deactivated successfully. Feb 9 19:59:40.099214 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:59:40.099871 systemd-logind[1145]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:59:40.102063 systemd-logind[1145]: Removed session 24. Feb 9 19:59:40.128647 sshd[3795]: Accepted publickey for core from 139.178.89.65 port 38110 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:59:40.129713 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:59:40.132209 systemd-logind[1145]: New session 25 of user core. Feb 9 19:59:40.132814 systemd[1]: Started session-25.scope. 
Feb 9 19:59:40.159035 kubelet[2097]: I0209 19:59:40.159013 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cni-path\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159125 kubelet[2097]: I0209 19:59:40.159050 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-cgroup\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159125 kubelet[2097]: I0209 19:59:40.159068 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-ipsec-secrets\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159125 kubelet[2097]: I0209 19:59:40.159081 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-clustermesh-secrets\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159125 kubelet[2097]: I0209 19:59:40.159095 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-hubble-tls\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159125 kubelet[2097]: I0209 19:59:40.159109 2097 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-run\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159245 kubelet[2097]: I0209 19:59:40.159128 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-xtables-lock\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159245 kubelet[2097]: I0209 19:59:40.159141 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-config-path\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159245 kubelet[2097]: I0209 19:59:40.159153 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-kernel\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159245 kubelet[2097]: I0209 19:59:40.159164 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-hostproc\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159245 kubelet[2097]: I0209 19:59:40.159179 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-etc-cni-netd\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159245 kubelet[2097]: I0209 19:59:40.159191 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-net\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159354 kubelet[2097]: I0209 19:59:40.159212 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-bpf-maps\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159354 kubelet[2097]: I0209 19:59:40.159223 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-lib-modules\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.159354 kubelet[2097]: I0209 19:59:40.159235 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd7mz\" (UniqueName: \"kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-kube-api-access-zd7mz\") pod \"cilium-556c8\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " pod="kube-system/cilium-556c8" Feb 9 19:59:40.289314 sshd[3795]: pam_unix(sshd:session): session closed for user core Feb 9 19:59:40.292037 systemd[1]: Started sshd@23-139.178.70.107:22-139.178.89.65:38124.service. Feb 9 19:59:40.296437 systemd[1]: sshd@22-139.178.70.107:22-139.178.89.65:38110.service: Deactivated successfully. 
Feb 9 19:59:40.297210 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:59:40.297238 systemd-logind[1145]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:59:40.298056 systemd-logind[1145]: Removed session 25. Feb 9 19:59:40.304797 env[1162]: time="2024-02-09T19:59:40.304432270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-556c8,Uid:be5a026c-d926-49b3-ac67-2e42557d9896,Namespace:kube-system,Attempt:0,}" Feb 9 19:59:40.316786 env[1162]: time="2024-02-09T19:59:40.316741296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:59:40.316786 env[1162]: time="2024-02-09T19:59:40.316781036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:59:40.316912 env[1162]: time="2024-02-09T19:59:40.316795773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:59:40.316912 env[1162]: time="2024-02-09T19:59:40.316877258Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a pid=3821 runtime=io.containerd.runc.v2 Feb 9 19:59:40.326602 sshd[3810]: Accepted publickey for core from 139.178.89.65 port 38124 ssh2: RSA SHA256:rEL1S6qAXEJti+jLtGl56AgBuj4qp94axBvYkXmrlvQ Feb 9 19:59:40.327584 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:59:40.331231 systemd-logind[1145]: New session 26 of user core. Feb 9 19:59:40.331995 systemd[1]: Started session-26.scope. Feb 9 19:59:40.335806 systemd[1]: Started cri-containerd-5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a.scope. 
Feb 9 19:59:40.360331 env[1162]: time="2024-02-09T19:59:40.360306949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-556c8,Uid:be5a026c-d926-49b3-ac67-2e42557d9896,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a\"" Feb 9 19:59:40.362462 env[1162]: time="2024-02-09T19:59:40.362392157Z" level=info msg="CreateContainer within sandbox \"5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:59:40.399943 env[1162]: time="2024-02-09T19:59:40.399896982Z" level=info msg="CreateContainer within sandbox \"5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\"" Feb 9 19:59:40.400276 env[1162]: time="2024-02-09T19:59:40.400262937Z" level=info msg="StartContainer for \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\"" Feb 9 19:59:40.414125 systemd[1]: Started cri-containerd-96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c.scope. Feb 9 19:59:40.424512 systemd[1]: cri-containerd-96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c.scope: Deactivated successfully. Feb 9 19:59:40.424686 systemd[1]: Stopped cri-containerd-96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c.scope. 
Feb 9 19:59:40.464443 env[1162]: time="2024-02-09T19:59:40.464406741Z" level=info msg="shim disconnected" id=96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c Feb 9 19:59:40.464621 env[1162]: time="2024-02-09T19:59:40.464610168Z" level=warning msg="cleaning up after shim disconnected" id=96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c namespace=k8s.io Feb 9 19:59:40.464686 env[1162]: time="2024-02-09T19:59:40.464676459Z" level=info msg="cleaning up dead shim" Feb 9 19:59:40.471303 env[1162]: time="2024-02-09T19:59:40.471271966Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3887 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:59:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:59:40.471626 env[1162]: time="2024-02-09T19:59:40.471550709Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Feb 9 19:59:40.474746 env[1162]: time="2024-02-09T19:59:40.474710273Z" level=error msg="Failed to pipe stdout of container \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\"" error="reading from a closed fifo" Feb 9 19:59:40.474746 env[1162]: time="2024-02-09T19:59:40.471811917Z" level=error msg="Failed to pipe stderr of container \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\"" error="reading from a closed fifo" Feb 9 19:59:40.475414 env[1162]: time="2024-02-09T19:59:40.475381220Z" level=error msg="StartContainer for \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:59:40.476917 kubelet[2097]: E0209 19:59:40.476848 2097 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c" Feb 9 19:59:40.479498 kubelet[2097]: E0209 19:59:40.479421 2097 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:59:40.479498 kubelet[2097]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:59:40.479498 kubelet[2097]: rm /hostbin/cilium-mount Feb 9 19:59:40.479720 kubelet[2097]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zd7mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-556c8_kube-system(be5a026c-d926-49b3-ac67-2e42557d9896): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:59:40.479720 kubelet[2097]: E0209 19:59:40.479478 2097 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-556c8" podUID="be5a026c-d926-49b3-ac67-2e42557d9896" Feb 9 19:59:40.620620 env[1162]: time="2024-02-09T19:59:40.620454304Z" level=info msg="StopPodSandbox for \"5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a\"" Feb 9 19:59:40.620620 env[1162]: time="2024-02-09T19:59:40.620501648Z" level=info msg="Container to stop \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:59:40.628116 systemd[1]: cri-containerd-5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a.scope: Deactivated successfully. 
Feb 9 19:59:40.666421 env[1162]: time="2024-02-09T19:59:40.666380992Z" level=info msg="shim disconnected" id=5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a Feb 9 19:59:40.666637 env[1162]: time="2024-02-09T19:59:40.666626120Z" level=warning msg="cleaning up after shim disconnected" id=5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a namespace=k8s.io Feb 9 19:59:40.666697 env[1162]: time="2024-02-09T19:59:40.666687033Z" level=info msg="cleaning up dead shim" Feb 9 19:59:40.671621 env[1162]: time="2024-02-09T19:59:40.671598163Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3918 runtime=io.containerd.runc.v2\n" Feb 9 19:59:40.671889 env[1162]: time="2024-02-09T19:59:40.671874057Z" level=info msg="TearDown network for sandbox \"5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a\" successfully" Feb 9 19:59:40.671952 env[1162]: time="2024-02-09T19:59:40.671940395Z" level=info msg="StopPodSandbox for \"5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a\" returns successfully" Feb 9 19:59:40.763265 kubelet[2097]: I0209 19:59:40.763230 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-kernel\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.763383 kubelet[2097]: I0209 19:59:40.763304 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.763383 kubelet[2097]: I0209 19:59:40.763323 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-etc-cni-netd\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.763383 kubelet[2097]: I0209 19:59:40.763336 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-run\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.763383 kubelet[2097]: I0209 19:59:40.763356 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.763383 kubelet[2097]: I0209 19:59:40.763366 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.763383 kubelet[2097]: I0209 19:59:40.763377 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-xtables-lock\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.763502 kubelet[2097]: I0209 19:59:40.763386 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-lib-modules\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.763502 kubelet[2097]: I0209 19:59:40.763409 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-ipsec-secrets\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.763502 kubelet[2097]: I0209 19:59:40.763431 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.763502 kubelet[2097]: I0209 19:59:40.763440 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763636 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-hubble-tls\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763668 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-bpf-maps\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763691 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd7mz\" (UniqueName: \"kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-kube-api-access-zd7mz\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763704 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-cgroup\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763716 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-config-path\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763747 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cni-path\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763764 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-clustermesh-secrets\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763776 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-hostproc\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763788 2097 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-net\") pod \"be5a026c-d926-49b3-ac67-2e42557d9896\" (UID: \"be5a026c-d926-49b3-ac67-2e42557d9896\") " Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763819 2097 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763829 2097 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763834 2097 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-run\") on node 
\"localhost\" DevicePath \"\"" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763839 2097 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763845 2097 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.763855 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.764895 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:59:40.766099 kubelet[2097]: I0209 19:59:40.764916 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.772510 kubelet[2097]: I0209 19:59:40.765218 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.772510 kubelet[2097]: I0209 19:59:40.769358 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:59:40.772510 kubelet[2097]: I0209 19:59:40.769382 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-hostproc" (OuterVolumeSpecName: "hostproc") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.772510 kubelet[2097]: I0209 19:59:40.769418 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-kube-api-access-zd7mz" (OuterVolumeSpecName: "kube-api-access-zd7mz") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "kube-api-access-zd7mz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:59:40.772510 kubelet[2097]: I0209 19:59:40.769446 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:59:40.772510 kubelet[2097]: I0209 19:59:40.769459 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cni-path" (OuterVolumeSpecName: "cni-path") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:59:40.772510 kubelet[2097]: I0209 19:59:40.769546 2097 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be5a026c-d926-49b3-ac67-2e42557d9896" (UID: "be5a026c-d926-49b3-ac67-2e42557d9896"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864302 2097 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864328 2097 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864336 2097 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864342 2097 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zd7mz\" (UniqueName: \"kubernetes.io/projected/be5a026c-d926-49b3-ac67-2e42557d9896-kube-api-access-zd7mz\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864348 2097 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864353 2097 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be5a026c-d926-49b3-ac67-2e42557d9896-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864358 2097 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864364 
2097 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be5a026c-d926-49b3-ac67-2e42557d9896-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864346 kubelet[2097]: I0209 19:59:40.864369 2097 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:40.864629 kubelet[2097]: I0209 19:59:40.864376 2097 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be5a026c-d926-49b3-ac67-2e42557d9896-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 9 19:59:41.264078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5ee84bcd328765e8e32ed0ef54aefd0ca123fca28cd4b15f6701a6f697a8f32a-shm.mount: Deactivated successfully. Feb 9 19:59:41.264145 systemd[1]: var-lib-kubelet-pods-be5a026c\x2dd926\x2d49b3\x2dac67\x2d2e42557d9896-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzd7mz.mount: Deactivated successfully. Feb 9 19:59:41.264181 systemd[1]: var-lib-kubelet-pods-be5a026c\x2dd926\x2d49b3\x2dac67\x2d2e42557d9896-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:59:41.264212 systemd[1]: var-lib-kubelet-pods-be5a026c\x2dd926\x2d49b3\x2dac67\x2d2e42557d9896-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:59:41.264246 systemd[1]: var-lib-kubelet-pods-be5a026c\x2dd926\x2d49b3\x2dac67\x2d2e42557d9896-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:59:41.399113 systemd[1]: Removed slice kubepods-burstable-podbe5a026c_d926_49b3_ac67_2e42557d9896.slice. 
Feb 9 19:59:41.622303 kubelet[2097]: I0209 19:59:41.622281 2097 scope.go:117] "RemoveContainer" containerID="96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c" Feb 9 19:59:41.623636 env[1162]: time="2024-02-09T19:59:41.623595021Z" level=info msg="RemoveContainer for \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\"" Feb 9 19:59:41.641076 env[1162]: time="2024-02-09T19:59:41.641043777Z" level=info msg="RemoveContainer for \"96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c\" returns successfully" Feb 9 19:59:41.660649 kubelet[2097]: I0209 19:59:41.660628 2097 topology_manager.go:215] "Topology Admit Handler" podUID="05056c83-0d33-4909-8945-8942d7ce93b8" podNamespace="kube-system" podName="cilium-f7f4t" Feb 9 19:59:41.660786 kubelet[2097]: E0209 19:59:41.660674 2097 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be5a026c-d926-49b3-ac67-2e42557d9896" containerName="mount-cgroup" Feb 9 19:59:41.660786 kubelet[2097]: I0209 19:59:41.660691 2097 memory_manager.go:346] "RemoveStaleState removing state" podUID="be5a026c-d926-49b3-ac67-2e42557d9896" containerName="mount-cgroup" Feb 9 19:59:41.677761 systemd[1]: Created slice kubepods-burstable-pod05056c83_0d33_4909_8945_8942d7ce93b8.slice. 
Feb 9 19:59:41.769351 kubelet[2097]: I0209 19:59:41.769326 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-cilium-run\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769529 kubelet[2097]: I0209 19:59:41.769518 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-bpf-maps\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769616 kubelet[2097]: I0209 19:59:41.769607 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-etc-cni-netd\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769755 kubelet[2097]: I0209 19:59:41.769736 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05056c83-0d33-4909-8945-8942d7ce93b8-clustermesh-secrets\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769809 kubelet[2097]: I0209 19:59:41.769780 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-cilium-cgroup\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769809 kubelet[2097]: I0209 19:59:41.769802 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-xtables-lock\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769870 kubelet[2097]: I0209 19:59:41.769829 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-host-proc-sys-kernel\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769870 kubelet[2097]: I0209 19:59:41.769847 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-hostproc\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769870 kubelet[2097]: I0209 19:59:41.769861 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-lib-modules\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769950 kubelet[2097]: I0209 19:59:41.769876 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05056c83-0d33-4909-8945-8942d7ce93b8-cilium-ipsec-secrets\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769950 kubelet[2097]: I0209 19:59:41.769890 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-host-proc-sys-net\") 
pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769950 kubelet[2097]: I0209 19:59:41.769904 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5zlm\" (UniqueName: \"kubernetes.io/projected/05056c83-0d33-4909-8945-8942d7ce93b8-kube-api-access-z5zlm\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769950 kubelet[2097]: I0209 19:59:41.769919 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05056c83-0d33-4909-8945-8942d7ce93b8-hubble-tls\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769950 kubelet[2097]: I0209 19:59:41.769935 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05056c83-0d33-4909-8945-8942d7ce93b8-cilium-config-path\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.769950 kubelet[2097]: I0209 19:59:41.769949 2097 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05056c83-0d33-4909-8945-8942d7ce93b8-cni-path\") pod \"cilium-f7f4t\" (UID: \"05056c83-0d33-4909-8945-8942d7ce93b8\") " pod="kube-system/cilium-f7f4t" Feb 9 19:59:41.980676 env[1162]: time="2024-02-09T19:59:41.979915764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7f4t,Uid:05056c83-0d33-4909-8945-8942d7ce93b8,Namespace:kube-system,Attempt:0,}" Feb 9 19:59:42.065325 env[1162]: time="2024-02-09T19:59:42.065269136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:59:42.065453 env[1162]: time="2024-02-09T19:59:42.065303724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:59:42.065544 env[1162]: time="2024-02-09T19:59:42.065441425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:59:42.065961 env[1162]: time="2024-02-09T19:59:42.065716683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287 pid=3948 runtime=io.containerd.runc.v2 Feb 9 19:59:42.074904 systemd[1]: Started cri-containerd-fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287.scope. Feb 9 19:59:42.093104 env[1162]: time="2024-02-09T19:59:42.093070498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f7f4t,Uid:05056c83-0d33-4909-8945-8942d7ce93b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\"" Feb 9 19:59:42.094752 env[1162]: time="2024-02-09T19:59:42.094725012Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:59:42.116683 env[1162]: time="2024-02-09T19:59:42.116626685Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72\"" Feb 9 19:59:42.117845 env[1162]: time="2024-02-09T19:59:42.117820993Z" level=info msg="StartContainer for \"4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72\"" Feb 9 19:59:42.128309 systemd[1]: Started 
cri-containerd-4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72.scope. Feb 9 19:59:42.147582 env[1162]: time="2024-02-09T19:59:42.147554012Z" level=info msg="StartContainer for \"4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72\" returns successfully" Feb 9 19:59:42.207123 systemd[1]: cri-containerd-4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72.scope: Deactivated successfully. Feb 9 19:59:42.251463 env[1162]: time="2024-02-09T19:59:42.251394564Z" level=info msg="shim disconnected" id=4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72 Feb 9 19:59:42.251463 env[1162]: time="2024-02-09T19:59:42.251420753Z" level=warning msg="cleaning up after shim disconnected" id=4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72 namespace=k8s.io Feb 9 19:59:42.251463 env[1162]: time="2024-02-09T19:59:42.251427272Z" level=info msg="cleaning up dead shim" Feb 9 19:59:42.257426 env[1162]: time="2024-02-09T19:59:42.257398301Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4032 runtime=io.containerd.runc.v2\n" Feb 9 19:59:42.626329 env[1162]: time="2024-02-09T19:59:42.626299101Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:59:42.702593 env[1162]: time="2024-02-09T19:59:42.702527167Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf\"" Feb 9 19:59:42.703003 env[1162]: time="2024-02-09T19:59:42.702989707Z" level=info msg="StartContainer for \"8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf\"" Feb 9 19:59:42.714573 systemd[1]: Started 
cri-containerd-8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf.scope. Feb 9 19:59:42.746157 env[1162]: time="2024-02-09T19:59:42.746126707Z" level=info msg="StartContainer for \"8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf\" returns successfully" Feb 9 19:59:42.784530 systemd[1]: cri-containerd-8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf.scope: Deactivated successfully. Feb 9 19:59:42.826213 env[1162]: time="2024-02-09T19:59:42.826177468Z" level=info msg="shim disconnected" id=8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf Feb 9 19:59:42.826213 env[1162]: time="2024-02-09T19:59:42.826210449Z" level=warning msg="cleaning up after shim disconnected" id=8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf namespace=k8s.io Feb 9 19:59:42.826373 env[1162]: time="2024-02-09T19:59:42.826218963Z" level=info msg="cleaning up dead shim" Feb 9 19:59:42.831807 env[1162]: time="2024-02-09T19:59:42.831776573Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4093 runtime=io.containerd.runc.v2\n" Feb 9 19:59:43.264233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf-rootfs.mount: Deactivated successfully. 
Feb 9 19:59:43.397031 kubelet[2097]: I0209 19:59:43.397008 2097 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="be5a026c-d926-49b3-ac67-2e42557d9896" path="/var/lib/kubelet/pods/be5a026c-d926-49b3-ac67-2e42557d9896/volumes" Feb 9 19:59:43.460038 kubelet[2097]: E0209 19:59:43.460009 2097 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:59:43.578284 kubelet[2097]: W0209 19:59:43.578252 2097 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe5a026c_d926_49b3_ac67_2e42557d9896.slice/cri-containerd-96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c.scope WatchSource:0}: container "96e748509bf7c6707c9af8554dd536113ad5999bebc51e4759a318591ab6c66c" in namespace "k8s.io": not found Feb 9 19:59:43.631651 env[1162]: time="2024-02-09T19:59:43.631614550Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:59:43.648051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424614838.mount: Deactivated successfully. Feb 9 19:59:43.652301 env[1162]: time="2024-02-09T19:59:43.652272174Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341\"" Feb 9 19:59:43.652882 env[1162]: time="2024-02-09T19:59:43.652863631Z" level=info msg="StartContainer for \"9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341\"" Feb 9 19:59:43.667108 systemd[1]: Started cri-containerd-9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341.scope. 
Feb 9 19:59:43.688864 env[1162]: time="2024-02-09T19:59:43.688832580Z" level=info msg="StartContainer for \"9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341\" returns successfully" Feb 9 19:59:43.696156 systemd[1]: cri-containerd-9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341.scope: Deactivated successfully. Feb 9 19:59:43.742104 env[1162]: time="2024-02-09T19:59:43.742075921Z" level=info msg="shim disconnected" id=9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341 Feb 9 19:59:43.742261 env[1162]: time="2024-02-09T19:59:43.742250181Z" level=warning msg="cleaning up after shim disconnected" id=9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341 namespace=k8s.io Feb 9 19:59:43.742318 env[1162]: time="2024-02-09T19:59:43.742308745Z" level=info msg="cleaning up dead shim" Feb 9 19:59:43.746654 env[1162]: time="2024-02-09T19:59:43.746639345Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4152 runtime=io.containerd.runc.v2\n" Feb 9 19:59:44.264421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341-rootfs.mount: Deactivated successfully. Feb 9 19:59:44.634281 env[1162]: time="2024-02-09T19:59:44.634225610Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:59:44.641202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3566819510.mount: Deactivated successfully. 
Feb 9 19:59:44.645815 env[1162]: time="2024-02-09T19:59:44.645759081Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335\""
Feb 9 19:59:44.646897 env[1162]: time="2024-02-09T19:59:44.646871680Z" level=info msg="StartContainer for \"a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335\""
Feb 9 19:59:44.662056 systemd[1]: Started cri-containerd-a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335.scope.
Feb 9 19:59:44.679085 systemd[1]: cri-containerd-a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335.scope: Deactivated successfully.
Feb 9 19:59:44.680135 env[1162]: time="2024-02-09T19:59:44.680093038Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05056c83_0d33_4909_8945_8942d7ce93b8.slice/cri-containerd-a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335.scope/memory.events\": no such file or directory"
Feb 9 19:59:44.687145 env[1162]: time="2024-02-09T19:59:44.687114823Z" level=info msg="StartContainer for \"a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335\" returns successfully"
Feb 9 19:59:44.715120 env[1162]: time="2024-02-09T19:59:44.715086520Z" level=info msg="shim disconnected" id=a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335
Feb 9 19:59:44.715120 env[1162]: time="2024-02-09T19:59:44.715117106Z" level=warning msg="cleaning up after shim disconnected" id=a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335 namespace=k8s.io
Feb 9 19:59:44.715290 env[1162]: time="2024-02-09T19:59:44.715125134Z" level=info msg="cleaning up dead shim"
Feb 9 19:59:44.719810 env[1162]: time="2024-02-09T19:59:44.719789460Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:59:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4204 runtime=io.containerd.runc.v2\n"
Feb 9 19:59:45.264348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335-rootfs.mount: Deactivated successfully.
Feb 9 19:59:45.639805 env[1162]: time="2024-02-09T19:59:45.639764947Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:59:45.648935 env[1162]: time="2024-02-09T19:59:45.646883015Z" level=info msg="CreateContainer within sandbox \"fa827241776402350d5ed3231c480443bb60492ee29f2b4895f2f4c4367a2287\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e252ea47937c62f8b47604886414b5db2d16d762c316f8cc1acfbf175737e9a\""
Feb 9 19:59:45.647879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1561139642.mount: Deactivated successfully.
Feb 9 19:59:45.649335 env[1162]: time="2024-02-09T19:59:45.649322179Z" level=info msg="StartContainer for \"7e252ea47937c62f8b47604886414b5db2d16d762c316f8cc1acfbf175737e9a\""
Feb 9 19:59:45.668744 systemd[1]: Started cri-containerd-7e252ea47937c62f8b47604886414b5db2d16d762c316f8cc1acfbf175737e9a.scope.
Feb 9 19:59:45.688130 env[1162]: time="2024-02-09T19:59:45.688101028Z" level=info msg="StartContainer for \"7e252ea47937c62f8b47604886414b5db2d16d762c316f8cc1acfbf175737e9a\" returns successfully"
Feb 9 19:59:46.258677 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 19:59:46.264634 kubelet[2097]: I0209 19:59:46.264614 2097 setters.go:552] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T19:59:46Z","lastTransitionTime":"2024-02-09T19:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 9 19:59:46.647274 kubelet[2097]: I0209 19:59:46.647250 2097 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f7f4t" podStartSLOduration=5.646934116 podCreationTimestamp="2024-02-09 19:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:59:46.646921061 +0000 UTC m=+133.324100370" watchObservedRunningTime="2024-02-09 19:59:46.646934116 +0000 UTC m=+133.324113420"
Feb 9 19:59:46.684564 kubelet[2097]: W0209 19:59:46.684537 2097 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05056c83_0d33_4909_8945_8942d7ce93b8.slice/cri-containerd-4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72.scope WatchSource:0}: task 4adaa1d55e30c9feac44046337ed422caf11a0c99987c8555135f685259f8a72 not found: not found
Feb 9 19:59:48.518394 systemd-networkd[1063]: lxc_health: Link UP
Feb 9 19:59:48.536015 systemd-networkd[1063]: lxc_health: Gained carrier
Feb 9 19:59:48.536675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:59:49.790185 kubelet[2097]: W0209 19:59:49.790151 2097 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05056c83_0d33_4909_8945_8942d7ce93b8.slice/cri-containerd-8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf.scope WatchSource:0}: task 8b6e0d7d0f0c2d3192fce1f3f830828ccca8cf48313504b5623fc0b917000ecf not found: not found
Feb 9 19:59:50.081746 systemd-networkd[1063]: lxc_health: Gained IPv6LL
Feb 9 19:59:50.830419 systemd[1]: run-containerd-runc-k8s.io-7e252ea47937c62f8b47604886414b5db2d16d762c316f8cc1acfbf175737e9a-runc.b7ZoPU.mount: Deactivated successfully.
Feb 9 19:59:52.898069 kubelet[2097]: W0209 19:59:52.897353 2097 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05056c83_0d33_4909_8945_8942d7ce93b8.slice/cri-containerd-9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341.scope WatchSource:0}: task 9ec8b85f9079bdd1066accb107f25019caaed803340c6c0ad5ca953ee1cbb341 not found: not found
Feb 9 19:59:55.061171 sshd[3810]: pam_unix(sshd:session): session closed for user core
Feb 9 19:59:55.065936 systemd[1]: sshd@23-139.178.70.107:22-139.178.89.65:38124.service: Deactivated successfully.
Feb 9 19:59:55.066391 systemd[1]: session-26.scope: Deactivated successfully.
Feb 9 19:59:55.067064 systemd-logind[1145]: Session 26 logged out. Waiting for processes to exit.
Feb 9 19:59:55.067550 systemd-logind[1145]: Removed session 26.
Feb 9 19:59:56.003188 kubelet[2097]: W0209 19:59:56.003138 2097 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05056c83_0d33_4909_8945_8942d7ce93b8.slice/cri-containerd-a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335.scope WatchSource:0}: task a6272b2b0c4f42382487ff8a14c3e7539c6295f3619665278daa564d24833335 not found: not found